As hybrid cloud and multicloud architectures become the norm for enterprise networks, we examine three ways to connect workloads in AWS VPCs to Microsoft Azure VNets.
Hybrid cloud and multicloud architectures have become common for enterprise IT departments seeking greater network reliability, security, and cost-efficiency to support optimal application performance. As more enterprise workloads migrate to the cloud, it’s only natural that organizations seek ways to connect AWS and Microsoft Azure – the top two hyperscalers – to future-proof their networks and ensure the lowest latency between workloads.
Let’s say you’re like one of our customers, a global retail brand that hosts its eCommerce presence with AWS and Azure, deploying mirror applications in both clouds. To comply with your security policy, you backhaul some of your traffic to your data centre, where that policy is applied – but not all of your traffic needs to be subject to it. To save network resources in your data centre, you want to keep the remaining traffic at the edge of each cloud, decreasing latency between AWS and Azure.
In this case, there are three ways to connect an AWS environment to a Microsoft Azure one, each with its pros and cons. One method, the VPN tunnel, is far and away the most common, but as you might have guessed if you read this blog regularly, it’s not the best one.
1. Set up VPN tunnels
There are plenty of resources online about how to set up a VPN tunnel over a public internet connection between AWS and Microsoft Azure. It’s a tried-and-true method of connecting clouds, but there are many disadvantages to connecting your cloud environments this way. Here are a few:
Con: limited throughput
For higher-compute workloads, you’ll have to build numerous tunnels to support the throughput you need, because cloud VPN gateways typically cap throughput per tunnel (AWS Site-to-Site VPN tunnels, for instance, top out at 1.25 Gbps each). You’ll also likely spend a lot of time managing ECMP (Equal-Cost Multi-Path) routing or load balancing to make sure the bandwidth you need stays available and your VPN tunnels don’t become congested and fail.
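The mechanics above can be sketched in a few lines. This is an illustrative model, not vendor code: the tunnel count, per-tunnel cap, and IP addresses are all assumptions. It shows why ECMP hashes each flow’s 4-tuple to pin the flow to one tunnel, and why a single flow can never exceed one tunnel’s cap no matter how many tunnels you build.

```python
import hashlib

TUNNELS = 4             # parallel VPN tunnels built to reach a capacity target (assumed)
PER_TUNNEL_GBPS = 1.25  # assumed per-tunnel throughput cap on the cloud VPN gateway

def pick_tunnel(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash the flow's 4-tuple so every packet of that flow rides the same tunnel."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % TUNNELS

# Aggregate capacity scales with tunnel count, but only across many flows;
# any one flow is still limited to a single tunnel's cap.
aggregate_gbps = TUNNELS * PER_TUNNEL_GBPS
print(f"aggregate capacity: {aggregate_gbps} Gbps")
print("flow 10.0.0.5 -> 172.16.0.9 pinned to tunnel",
      pick_tunnel("10.0.0.5", "172.16.0.9", 44123, 443))
```

Because the hash is deterministic, rebalancing only happens when tunnels are added or removed – which is exactly the ongoing management burden described above.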
Con: unpredictable routing via the public internet
Anyone who’s ever logged onto their workstation with a VPN token knows firsthand that data transfer over the public internet, through a VPN tunnel, can be slow and inconsistent. That’s because routing protocols on the internet behave in complex, unpredictable ways; you have only limited control over the routes your data packets traverse. Unpredictable routing means higher latency, which means poorer application performance.
Con: compromised security due to BGP route hijacking
Year on year, the incidence of cyberattacks continues to grow. Now more than ever, CIOs are being kept up at night by cybersecurity concerns.
The same trusting nature of Border Gateway Protocol (BGP) that makes the internet so scalable is exactly what makes it vulnerable to route hijacking by threat actors. BGP relies on Autonomous Systems, such as ISPs, to announce routes to blocks of IP addresses. Malicious actors can hijack these announcements and cause traffic to be redirected to “black holes” or, in the case of a 2018 Russian attack on the cryptocurrency site MyEtherWallet, to a phishing site that gathered account information to steal $152,000 USD.
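The hijack works because routers prefer the longest (most specific) matching prefix. A toy illustration, using Python’s standard `ipaddress` module and made-up prefixes: if an attacker announces a more specific prefix inside a victim’s block, longest-prefix match sends the traffic to the attacker.

```python
import ipaddress

# Hypothetical routing table: the hijacker announces a /25 inside the
# legitimate AS's /24. All prefixes and labels here are illustrative.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate AS",
    ipaddress.ip_network("203.0.113.0/25"): "hijacker AS",  # more specific!
}

def best_route(dst: str) -> str:
    """Pick the next hop by longest-prefix match, as routers do."""
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if ip in net]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

print(best_route("203.0.113.10"))   # diverted to the hijacker's /25
print(best_route("203.0.113.200"))  # outside the /25, still reaches the legitimate AS
```

No credential theft or malware is needed – the victim’s traffic simply follows the bogus, more specific announcement.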
Con: AWS and Azure data transfer fees
Perhaps the biggest downside to connecting cloud environments over VPN tunnels is the cost of data leaving each environment and travelling over the public internet back to your servers in a data centre or on premises. The same fees apply when traffic flows between your private environments in the public cloud, from one cloud to another. The cloud providers charge these fees per GB on egress.
These fees can be prohibitive – in the case of one of our customers, potential egress fees totaled $43,000 USD per month before Megaport!
2. Build private lines
The second way you can connect your AWS and Azure environments is to build private lines to the two hyperscalers by buying dedicated circuits from your telco provider. These circuits will give you a private connection to the cloud providers with traffic that isn’t routed over the unpredictable, vulnerable public internet.
But there are also many disadvantages to this approach:
Con: more costly with long-term contracts
Your telco will likely lock you into 18- to 24-month contracts for your dedicated circuits, with 45- to 90-day installation windows. So if you’re looking to increase bandwidth capacity, it could take months. If you’re looking to decrease capacity, you’ll be stuck paying for unused circuits because of those long-term contracts.
In the end, building private lines is likely the most costly option to connect between AWS and Microsoft Azure.
Con: latency still an issue due to backhaul traffic
Even with private circuits to each cloud, you’ll still need to backhaul traffic to your data centre or on-premises routing equipment. In other words, if you want the workloads in both environments to exchange data, it must leave AWS through one private connection, traverse your on-premises or colocation environment, and then travel back out through your other private connection to your Microsoft Azure environment. So latency remains an issue, even though private circuits to AWS and Microsoft Azure will likely offer more reliability than a VPN tunnel.
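To make the hairpin penalty concrete, here is a minimal latency sketch. All round-trip times are assumed values for illustration; real figures depend on circuit distance and equipment.

```python
# Hedged sketch: added round-trip latency when inter-cloud traffic hairpins
# through a data centre instead of flowing cloud edge to cloud edge.
DIRECT_RTT_MS = 4        # assumed direct cloud-to-cloud path at the edge
AWS_TO_DC_RTT_MS = 10    # assumed private circuit, AWS edge -> data centre
DC_TO_AZURE_RTT_MS = 10  # assumed private circuit, data centre -> Azure edge

# The hairpin path pays both circuit legs for every inter-cloud exchange.
hairpin_rtt = AWS_TO_DC_RTT_MS + DC_TO_AZURE_RTT_MS
print(f"hairpin RTT ~{hairpin_rtt} ms vs direct ~{DIRECT_RTT_MS} ms")
```

Under these assumptions the hairpin path multiplies round-trip latency several times over, and the gap widens the farther the data centre sits from the cloud on-ramps.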
Con: additional capex for WAN capacity
Even with your own private connections to the hyperscalers, you’ll continue to need on-premises infrastructure or a significant colocation presence. And this, of course, means more capex to account for in your annual budget.
3. Set up private connectivity with a virtual router (like Megaport Cloud Router)
While the most common way to connect workloads across different cloud environments is a VPN tunnel, an approach that’s becoming increasingly popular is to set up private connectivity with a virtual router like Megaport Cloud Router (MCR).
With MCR, you can get private connectivity and the security, reliability, and lower costs that come with not having to send data through the public internet. Plus, you won’t have to:
- hairpin your traffic back to your on-premises environment
- sign on to long-term contracts with your telco provider
- add any extra equipment to turn up connectivity
- pay high AWS and Azure data transfer fees for egress data going through the internet.
If you want to scale your bandwidth, you can do it with a few clicks on Megaport’s global, on-demand Software Defined Network (SDN), or even automate capacity changes through our API. The MCR is set up in the physical location where the AWS and Microsoft Azure edges reside; in some cases, MCR and both cloud service providers are available on the same data centre campus.
Let’s say you’re our customer again – that global retail brand. Your eCommerce store is hosted in AWS US East (Northern Virginia), with mirror applications in Azure’s East US region. You want to route directly between the two clouds, but also retain a primary and secondary peer back to your data centre in the Washington, DC area to enforce your security policy.
MCR simplifies routing this traffic back to the data centre for the security check: you maintain a single peer between your data centre and the MCR. As additional cloud links are added, no additional peers are required at the data centre, because you can easily manage those peers on your MCRs.
Furthermore, the latency between the two cloud environments, privately connected via our virtual router, is just a three-to-four-millisecond round trip. This lowest-latency path between AWS and Azure, enabled by the MCR’s direct connection, means optimal application performance.
To learn more about how Megaport Cloud Router can help you connect between AWS and Azure, click here.