Frankie Stroud

Secure Data Centre to Cloud Connectivity

Connecting to the public cloud is a requirement for most companies with applications, data and services in their own data centre that need to integrate and communicate with applications and data in public cloud providers' data centres. Linking to the public cloud can be achieved over the internet with site-to-site Virtual Private Network (VPN) connections, or over private networks with circuits leased or rented from a 3rd party network provider. A combination of both methods is also possible, depending on your connectivity, redundancy and cost requirements.

This post will provide a design that uses the private circuit method, as this is a typical choice when customers want more control over network performance, security, visibility and cost.

Target Design

The target design aims to achieve secure connectivity from a customer's "on-premise" data centre to all of the major public cloud providers. The design includes the public cloud providers AWS, Azure, GCP, OCI and AliCloud; a 3rd party network provider, Megaport; and the secure multicloud networking software vendor Aviatrix.

The design, and the details of how the connectivity is achieved, will also take the following customer requirements into consideration.

  • High bandwidth with dynamic bandwidth scaling

  • Predictable network performance, throughput and latency

  • Security and compliance, requiring security policy control and end-to-end encryption

  • Operational visibility

  • Minimise data transfer and transit costs

The design will outline the components of the "underlay" and the "overlay" needed to meet the requirements above. This terminology is commonly used by vendors: the "underlay" is the set of underlying physical components that provide end-to-end connectivity, for example routers, switches and cabling, whereas the "overlay" is a virtualised network built on top of those physical components, providing private communications over a shared network. An "overlay" typically provides end-to-end tunnels between points in the network, with encryption to secure data in transit.
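The underlay/overlay distinction can be sketched in a few lines of Python. This is a conceptual model only (the component names are illustrative, not real device identifiers): the overlay tunnel abstracts a multi-hop physical underlay path into a single logical, encrypted hop between two endpoints.

```python
from dataclasses import dataclass

@dataclass
class UnderlayPath:
    # The physical components a packet actually traverses.
    hops: list

@dataclass
class OverlayTunnel:
    src: str
    dst: str
    encrypted: bool
    underlay: UnderlayPath

    def logical_hops(self) -> int:
        # From the overlay's point of view there is a single hop,
        # regardless of how many devices the underlay traverses.
        return 1

# Hypothetical underlay path from a data centre to a cloud edge.
path = UnderlayPath(hops=["dc-router", "megaport-port", "mcr", "cloud-edge"])
tunnel = OverlayTunnel(src="edge-gateway", dst="cloud-gateway",
                       encrypted=True, underlay=path)
print(len(tunnel.underlay.hops))  # 4 physical hops in the underlay
print(tunnel.logical_hops())      # 1 logical hop in the overlay
```

The point of the model: the overlay can change (new tunnels, new endpoints) without any change to the physical underlay, and vice versa.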

The diagram below illustrates the separation of the overlay and the underlay. For data centre to public cloud connectivity, the design uses Aviatrix, leveraging the 3rd party network provider's components in the underlay and dynamically building secure, high performance encrypted tunnels between the Aviatrix Gateways and the Aviatrix Edge.

High Level Design

The next sections of this design will cover the components of the underlay and overlay.

Building the Datacentre to Public Cloud Underlay

Public Cloud Locations and Points of Presence

Data centres can be classed as either your own private data centre or a co-location data centre, i.e. a facility whose provider leases physical data centre space and rack space to companies. Both of these data centre types are typically referred to as "on-premise" data centres, as opposed to the public cloud providers' data centres. Public cloud providers also have global points of presence that allow connectivity to their networks. These are typically provided at co-location data centres, and it is at these locations that customers can connect to the public cloud local to that co-location facility.

3rd party network providers use these co-location data centres to connect directly to the public clouds. These network providers sell and manage customer connectivity to the public cloud via their own networks; some public cloud providers call this "Partner Connect". This is a common use case, as the connectivity between the network provider and the public cloud has already been established, which simplifies and expedites access for customers. These network providers can also connect a customer from their "on-premise" data centre, removing the requirement for the customer to have a presence in every global location. A customer may only require a few on-ramps to the network provider and from there (depending on the network provider) have global reach to all public clouds. In addition to the physical connectivity, some of these network providers also offer customers the flexibility to dynamically increase and decrease their bandwidth, and the ability to control their own routing between the cloud providers.

This design uses the Partner Connect model to connect the customer network to the public cloud. There are many network providers, such as Equinix and Megaport, that provide Network-as-a-Service (NaaS) solutions for connectivity. Megaport will, for the purposes of illustrating this design, be used as the example Partner Connect provider.

The following diagram provides a simplified, high level view of a customer connecting to a public cloud provider using a private service from a public cloud partner network provider.

Partner Connect to Cloud Provider

Partner Connect Provider Service

Megaport is a Network-as-a-Service (NaaS) provider and, through its data centre to data centre fabric, can extend connectivity from the customer to the cloud providers. A customer in a Megaport-enabled data centre can connect to the Megaport fabric, which provides access to a global network fabric reaching all the major cloud providers.

Megaport Enabled Datacentres

The on-ramp to the Megaport fabric is a data centre fibre cross connect from the customer network to a Megaport Port in the Megaport-enabled data centre. This can be a physical 1 or 10Gbps fibre port, or multiples thereof if using link aggregation.

Customer to Megaport

The Megaport fabric is extended to the cloud provider with high bandwidth circuits interfacing directly on the cloud provider's network access device. These interface points are provided in each of the co-location data centres that Megaport and the public cloud provider use for cloud access.

Megaport Fabric to Cloud Provider

The connectivity from the Megaport fabric to each of the cloud providers is implemented with physical hardware redundancy and path diversity, and customers can connect to the fabric with the diversity and physical redundancy they need to ensure continuous network service availability from the customer to the cloud provider.
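A back-of-the-envelope calculation shows why diverse, redundant paths matter. The per-path availability figure below is purely illustrative (an assumption, not a provider SLA), and the model assumes the paths fail independently, which is exactly what physical path diversity is intended to achieve.

```python
def combined_availability(path_availability: float, paths: int) -> float:
    """Availability of N independent, diverse paths (any one path suffices).
    Assumes independent failures -- the reason path diversity matters."""
    return 1 - (1 - path_availability) ** paths

single = combined_availability(0.999, 1)  # one path, assumed 99.9% available
dual = combined_availability(0.999, 2)    # two diverse paths
print(f"single path: {single:.4%}")  # single path: 99.9000%
print(f"dual path:   {dual:.4%}")    # dual path:   99.9999%
```

Two diverse 99.9% paths give roughly six nines of combined availability, provided they share no common failure point (hence the emphasis on physical redundancy and diversity).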

Virtual Cross Connect

When a customer has a Port, they can extend their network to multiple cloud locations and multiple public cloud providers across the Megaport fabric using Megaport Virtual Cross Connects (VXCs). These provide separate logical connections over a physical network or fabric. To access a VXC, the customer configures a Virtual LAN (VLAN) on the customer router; the combination of the VLAN mapped to a VXC represents the end-to-end network connection from the customer device to the public cloud provider.
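The VLAN-to-VXC relationship can be sketched as a simple mapping: one physical Port carries many logical VXCs, each keyed by an 802.1Q VLAN ID. The destination names and VLAN numbers below are hypothetical, and the VLAN range check reflects the standard 802.1Q usable range rather than any provider-specific policy.

```python
def add_vxc(port_vxcs: dict, vlan_id: int, destination: str) -> None:
    """Map a customer-side VLAN ID to a VXC destination on one Port."""
    if not 2 <= vlan_id <= 4093:  # usable 802.1Q IDs; ends are reserved
        raise ValueError(f"VLAN {vlan_id} outside usable 802.1Q range")
    if vlan_id in port_vxcs:
        raise ValueError(f"VLAN {vlan_id} already maps to {port_vxcs[vlan_id]}")
    port_vxcs[vlan_id] = destination

vxcs = {}  # one physical Port, many logical VXCs
add_vxc(vxcs, 100, "aws-direct-connect-onramp")    # hypothetical names
add_vxc(vxcs, 200, "azure-expressroute-onramp")
print(vxcs[100])  # aws-direct-connect-onramp
```

Each entry in the mapping represents one end-to-end logical connection from the customer router, over the shared fabric, to a specific cloud on-ramp.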

The Megaport fabric also has routing capability with the Megaport Cloud Router (MCR). This gives customers routing closer to the public cloud, so any traffic that needs to be routed between clouds avoids backhaul routing via the customer data centre. The MCR provides a VXC connection to the customer data centre and then VXC connections to each public cloud location as required. The diagram below illustrates a customer connecting from a single data centre location, with the MCR splitting out the connections to the cloud providers.
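The backhaul-avoidance benefit is easy to quantify. The link latencies below are illustrative assumptions (not measurements): with routing at the MCR, cloud-to-cloud traffic takes the short path between on-ramps instead of hairpinning through the customer data centre.

```python
# Hypothetical one-way link latencies in milliseconds.
latency_ms = {
    ("dc", "mcr"): 10,     # customer data centre to MCR
    ("mcr", "aws"): 2,     # MCR to AWS on-ramp
    ("mcr", "azure"): 2,   # MCR to Azure on-ramp
}

def path_latency(path):
    """Sum link latencies along a path; links are bidirectional."""
    return sum(latency_ms[(a, b)] if (a, b) in latency_ms
               else latency_ms[(b, a)]
               for a, b in zip(path, path[1:]))

via_mcr = path_latency(["aws", "mcr", "azure"])                # 4 ms
backhaul = path_latency(["aws", "mcr", "dc", "mcr", "azure"])  # 24 ms
print(via_mcr, backhaul)
```

With these assumed figures, routing at the MCR cuts the cloud-to-cloud latency from 24 ms to 4 ms, and it also avoids the data-transfer cost of hauling the traffic into and back out of the data centre.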

Megaport Cloud Router

Provider Specific Components

The cloud providers have similar connectivity constructs, but the configuration details differ when implementing from the partner network to each cloud. The following diagram illustrates the connectivity that completes the final part of the underlay.

Partner Connect

Each public cloud has an access method to physically connect to the partner network.

  • Azure - ExpressRoute

  • AWS - Direct Connect

  • GCP - Interconnect

  • OCI - FastConnect

  • AliCloud - Express Connect

Each access method is a layer 2 service that must terminate on a gateway located in the customer's cloud environment.

  • Azure - VNG

  • AWS - VGW

  • GCP - CR

  • OCI - DRG

  • AliCloud - VBR
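The two lists above pair up one-to-one, which can be collected into a single lookup table (the expanded gateway names follow later in this post):

```python
# Access method and terminating gateway per cloud, from the lists above.
CLOUD_CONNECT = {
    "Azure":    {"access": "ExpressRoute",    "gateway": "VNG (Virtual Network Gateway)"},
    "AWS":      {"access": "Direct Connect",  "gateway": "VGW (Virtual Private Gateway)"},
    "GCP":      {"access": "Interconnect",    "gateway": "CR (Cloud Router)"},
    "OCI":      {"access": "FastConnect",     "gateway": "DRG (Dynamic Routing Gateway)"},
    "AliCloud": {"access": "Express Connect", "gateway": "VBR (Virtual Border Router)"},
}

for cloud, c in CLOUD_CONNECT.items():
    print(f"{cloud}: {c['access']} terminating on {c['gateway']}")
```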

The following list describes the components illustrated in the diagram above: Partner Connect.




  • ExpressRoute - Azure layer 2 access circuit connecting partner networks to an Azure VNET.

  • MSEE - Microsoft Enterprise Edge, providing connectivity for ExpressRoute connections to an Azure Virtual Network (VNET).

  • Private Peering - Customer configured logical connections forming part of the ExpressRoute service.

  • VNG - Virtual Network Gateway, providing the routing termination point for a VNET connecting to external networks over ExpressRoute.

  • Direct Connect - AWS layer 2 access circuit connecting the partner network to an AWS VPC.

  • VGW - Virtual Private Gateway, providing the routing point for a VPC connecting to external networks over Direct Connect.

  • Private VIF - Virtual interface providing a layer 2 service connecting network components to an AWS VPC. Equates to a VLAN.

  • Interconnect - GCP layer 2 access circuit connecting the partner network to a GCP VPC.

  • Cloud Router - Providing the routing point for a GCP VPC connecting to external networks over Interconnect.

  • VLAN Attachment - Layer 2 service providing separation of network connectivity.

  • FastConnect - Layer 2 access circuit connecting the partner network to an Oracle OCI VCN.

  • DRG - Dynamic Routing Gateway, providing the routing point for an OCI VCN connecting to external networks over FastConnect.

  • Express Connect - Layer 2 access circuit connecting the partner network to an AliCloud VPC.

  • Private Circuit - Virtual interface providing a layer 2 service connecting network components to an OCI VCN. Equates to a VLAN.

  • VBR - Virtual Border Router, providing the routing point for an AliCloud VPC connecting to external networks over Express Connect.

  • MCR - Megaport Cloud Router, a layer 3 virtual routing instance on the Megaport fabric.

  • Port - The physical on-ramp to the Megaport fabric.

  • VXC - Virtual Cross Connect, a layer 2 service providing a logical point-to-point connection over the Megaport fabric.

  • VPC/VNET/VCN - Virtual network: the public cloud providers' construct representing a customer environment in which resources such as compute and storage can be deployed. This cloud environment is specific to the customer and closely resembles a traditional network in a data centre.

With an overview of the underlay components covered, the following section provides a description of the overlay, and the design then rounds off by pulling all the constructs together into an end-to-end solution delivering data centre to cloud connectivity.

Building the Datacentre to Cloud Overlay

Secure High Performance Connectivity and Visibility

In addition to the physical connectivity requirements, many customers also require security and operational visibility for their "on-premise" to public cloud communications. This can be driven by business policy, security policy, or audit and compliance requirements. There are also some cloud provider imposed limits on route propagation that need to be overcome. To meet these additional requirements, components will be incorporated that provide secure, encrypted, high bandwidth connectivity and overcome route table limits from the "on-premise" data centre to the cloud provider.
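As a sketch of the route-propagation concern, a simple pre-flight check can compare the number of prefixes to be advertised against the provider limit. The limit values below are illustrative figures as commonly documented at the time of writing; always verify them against current provider documentation before relying on them.

```python
# Illustrative route-propagation limits (verify against provider docs).
ROUTE_LIMITS = {
    "aws_direct_connect_private_vif": 100,      # routes advertised per BGP session
    "azure_expressroute_private_peering": 4000, # prefixes on private peering
}

def fits(limit_key: str, advertised_prefixes: int) -> bool:
    """True if the advertised prefix count fits within the provider limit."""
    return advertised_prefixes <= ROUTE_LIMITS[limit_key]

print(fits("aws_direct_connect_private_vif", 250))      # False: summarise or tunnel
print(fits("azure_expressroute_private_peering", 250))  # True
```

An overlay helps here because routes carried inside the encrypted tunnels are not subject to the underlay circuit's advertised-route limits; only the tunnel endpoints need to be reachable over the circuit.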

The Aviatrix Secure Cloud Platform automates the build of a transit and spoke configuration in the cloud network, giving an "active mesh" of connected gateways. The mesh has the redundancy and performance required for networking in the public cloud. The customer data centre is viewed as another spoke in the architecture, and the deployment of an Aviatrix Edge extends the performance, encryption and visibility to the customer data centre.

The Aviatrix Edge is a virtual appliance running on VMware ESXi, offering bandwidth that can scale to 10Gbps per Aviatrix Edge. It establishes the secure High Performance Encrypted (HPE) overlay with Aviatrix Gateways deployed in the customer's public cloud environment. The Aviatrix Edge does this with Aviatrix Transit Gateways, which provide core cloud routing to applications and data in spoke VPCs/VNETs/VCNs in the public cloud.
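Conceptually (this is a sketch of the general technique, not the vendor's implementation), high performance encryption scales by spreading traffic across multiple parallel encrypted tunnels, so aggregate throughput is not capped by what a single tunnel, bound to a single CPU core, can encrypt:

```python
def pick_tunnel(flow_5tuple: tuple, tunnel_count: int) -> int:
    """Hash a flow onto one of N parallel tunnels. Keeping a flow on one
    tunnel preserves packet ordering, while different flows spread out."""
    return hash(flow_5tuple) % tunnel_count

tunnels = 4
per_tunnel_gbps = 1.25  # assumed single-tunnel encryption ceiling
print(f"aggregate ~{tunnels * per_tunnel_gbps} Gbps")  # aggregate ~5.0 Gbps

# Hypothetical flow: (src IP, dst IP, src port, dst port, protocol).
flow = ("10.0.0.5", "172.16.1.9", 51123, 443, "tcp")
assert pick_tunnel(flow, tunnels) == pick_tunnel(flow, tunnels)  # stable per flow
```

The per-tunnel figure is an assumption for illustration; the design point is simply that N tunnels give roughly N times one tunnel's encrypted throughput.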

Secure High Performance Encrypted Connectivity

The following list describes the components illustrated in the diagram above: Secure High Performance Encrypted Connectivity.



  • Aviatrix Virtual Edge - Aviatrix virtualised edge gateway used by on-premise data centres to connect to the cloud over a high speed encrypted backbone.

  • Aviatrix Transit - Core multi-cloud software defined network routing platform providing secure cloud connectivity and security. A transit comprises Aviatrix Gateways.

  • Aviatrix Spoke - VPC/VNET/VCN gateway to the Aviatrix multi-cloud active mesh fabric.

  • HPE Tunnels - High Performance Encrypted tunnels providing high bandwidth, secure, end-to-end network connectivity.

The sections above have given an overview of both the underlay and overlay components. The next section pulls together all the components from the underlay and the overlay to illustrate an end-to-end design from customer data centre to public cloud.

Pulling It All Together

Secure Datacentre to Cloud Connectivity

The Secure Datacentre to Cloud Connectivity diagram above illustrates the end-to-end design connecting to all the public clouds. Customers may not connect to all public clouds, but will typically connect to more than a single cloud.

To avoid over-complicating the diagram, some components have been omitted: for example, separate data centres for each connection to a single cloud provider, and separate data centres for the MCR. Note also that each public cloud provider would have its own data centre facilities, not necessarily the shared location represented above. The diagram also illustrates only a single Application/Data spoke, whereas the actual design would have many spokes in each cloud, connected to the Aviatrix Transits represented in the diagram.

The Aviatrix Edge establishes encrypted tunnels to all the Aviatrix Transit Gateways. The design also includes Aviatrix Transit peering, which allows each cloud to connect to the other clouds over the encrypted tunnels. The transit-to-transit connectivity could have been formed using the cloud providers' internet gateways, establishing connectivity over public peerings. However, the customer requirement for this solution is to utilise the private circuits provided by Megaport, so the transit peering HPE tunnels are formed over the private network via the Megaport MCR. The diagram "Transit Peering over Private Network" below illustrates transit peering between Azure and AWS. To avoid over-complicating the diagram, the second Aviatrix Transit Gateways in Azure and AWS are not shown; these would form a full active mesh of connectivity in the implemented solution.

Transit Peering over Private Network
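One practical consequence of full-mesh transit peering is worth noting when planning: each pair of cloud transits gets its own peering, so the number of peerings grows as n*(n-1)/2 with the number of clouds.

```python
def full_mesh_peerings(transits: int) -> int:
    """Number of pairwise peerings in a full mesh of N transits."""
    return transits * (transits - 1) // 2

for n in (2, 3, 5):
    print(n, full_mesh_peerings(n))  # 2 -> 1, 3 -> 3, 5 -> 10
```

Connecting all five clouds in this design as a full mesh therefore means ten transit peerings, which is manageable at this scale but grows quadratically as more transits are added.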


This post has provided an overview of the connectivity for a customer requiring their "on-premise" data centre to connect with public cloud providers' networks using a 3rd party network provider. This connectivity is required when applications and services in the data centre must communicate with applications and services in the public cloud. Using private circuits from the data centre to the public cloud is best suited where there are high bandwidth, low latency, lower data transfer cost, security, audit and compliance requirements.

End-to-end encryption between environments does not need to compromise bandwidth and performance. The Aviatrix Secure Cloud Platform provides the performance characteristics required and, in addition, increases the customer's operational visibility, even across shared network infrastructure. The Aviatrix platform also provides public cloud to cloud high performance encryption that can be implemented over the private circuits or via the public cloud providers' internet gateways.

*Disclaimer: High Performance Encryption (HPE) is available on the Aviatrix Platform in AWS, Azure, GCP and OCI. Encrypted tunnels are available in AliCloud, but higher bandwidth using HPE is a feature being considered for AliCloud and may be part of future releases.
