I've been asked a few times whether HCX can be used when a customer has a route-based VPN into VMware Cloud on AWS and is advertising the default route 0.0.0.0/0 into the SDDC. The short answer is yes: this is supported and works.
The long answer, and the reason this question comes up: when we advertise the default route 0.0.0.0/0 into the SDDC, all traffic from the SDDC flows via on-premises, including traffic destined for the internet. Some customers prefer this so that all outbound internet traffic routes through their perimeter firewall, ensuring their security and logging policies are applied.

The confusion is around HCX. As per the KB article, HCX cannot use an existing IPsec VPN tunnel to send traffic from on-premises into the SDDC, so it needs to establish its own. The HCX-IX Interconnect and HCX Network Extension appliances each establish an IPsec VPN tunnel from on-premises to their peer appliances in the SDDC over UDP/4500. So the question is: if we are advertising 0.0.0.0/0 into the SDDC so that all traffic traverses the IPsec VPN tunnel back to on-premises, how can the HCX-IX Interconnect and HCX Network Extension appliances communicate?
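As a quick sanity check that nothing on the on-premises side is blocking UDP/4500 toward the SDDC, a rough sketch along these lines can help. The peer address below is a placeholder, not a real HCX appliance IP, and since UDP is connectionless this only proves the datagram could leave the local host (a real IKE peer will silently discard a junk probe):

```python
import socket

# Placeholder peer address (TEST-NET-3); substitute the public IP assigned
# to the HCX-IX Interconnect appliance in the SDDC.
HCX_PEER = ("203.0.113.10", 4500)  # UDP/4500 = IPsec NAT-T, used by the HCX tunnels

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
# sendto() succeeding only proves the datagram left this host, not that it
# arrived. A firewall dropping UDP/4500 looks identical to a peer that
# simply ignores the probe.
sock.sendto(b"probe", HCX_PEER)
try:
    data, addr = sock.recvfrom(1024)
    print(f"Got {len(data)} bytes back from {addr}")
except socket.timeout:
    print("No reply (expected: IKE peers discard malformed datagrams)")
finally:
    sock.close()
```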
In my test environment, I have a route-based VPN and I've advertised the default route into the SDDC from on-premises:
From within a test VM in the SDDC, if I ping 8.8.8.8 you can see the latency is higher than expected; this is because traffic has to cross the VPN to on-premises, egress there, and return via the same path. When I check my public IP address, it returns my on-premises public IP:

Now, with HCX deployed, we can see that the on-premises HCX-IX Interconnect and HCX Network Extension appliances successfully establish IPsec tunnels to the public IP addresses assigned to their peer appliances in the SDDC:
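For reference, here is a minimal sketch of that same check, runnable from a Linux test VM in the SDDC. It assumes the third-party api.ipify.org service for the public IP lookup (any "what is my IP" endpoint would do):

```python
import subprocess
import urllib.request

# Ping a well-known internet host. With 0.0.0.0/0 advertised from
# on-premises, this traffic hairpins over the VPN and egresses via the
# on-premises firewall, so the RTT is higher than a direct SDDC egress.
print(subprocess.run(["ping", "-c", "4", "8.8.8.8"],
                     capture_output=True, text=True).stdout)

# api.ipify.org returns the caller's public IP as plain text. From the
# SDDC test VM this should come back as the on-premises public IP,
# confirming where internet traffic actually egresses.
with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
    print("Public egress IP:", resp.read().decode().strip())
```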