AWS Site-to-Site VPN: Scaling Past 1.25 Gbps Limits
The new 5 Gbps per tunnel capacity now makes AWS Site-to-Site VPN a viable primary link for enterprises previously forced into Direct Connect. You will learn how to navigate the architectural trade-offs between Virtual Private Gateways and Transit Gateways when scaling beyond the legacy 1.25 Gbps limit. We dissect the mechanics of the new VPN Concentrator model, which aggregates shared bandwidth across multiple sites, contrasting it with the dedicated throughput of Large Bandwidth Tunnels.
The discussion evaluates specific deployment patterns, including VPN CloudHub for hub-and-spoke topologies and global policy management via AWS Cloud WAN. By applying a decision framework based on site count and complexity, architects can avoid the pitfalls of mismatched infrastructure in an era where cloud migration demands precision over brute force.
Core AWS VPN Components and Architectural Definitions
AWS Site-to-Site VPN Architecture and Virtual Private Gateway Definitions
Virtual Private Gateways terminate standard IPsec tunnels at up to 1.25 Gbps per tunnel, according to aws.amazon.com/vpn/site-to-site-vpn data. Static or BGP-based dynamic routing steers traffic over the encrypted paths between on-premises infrastructure and AWS VPCs. VGW endpoints lack support for Equal-Cost Multi-Path (ECMP) routing, which restricts aggregate throughput scaling compared to Transit Gateway alternatives. A VPN Concentrator functions as a shared aggregation point supporting multiple sites with constrained per-site bandwidth allocation. InterLIR analysis indicates that selecting VGW-based architectures for high-availability requirements introduces single-point-of-failure risks absent in ECMP-enabled designs. While cloud-based VPN services held 62.7% market share in 2025, legacy hardware constraints often force suboptimal architectural choices. The absence of IPv6 outer tunnel support on VGWs further restricts modernization paths for dual-stack environments. Engineers must map site count against the 1.25 Gbps ceiling before committing to this termination model. Overlooking the ECMP limitation results in unmet bandwidth SLAs during traffic bursts. Migration to Transit Gateway becomes mandatory when requirements exceed single-tunnel capacity or demand advanced routing policies.
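To make this termination model concrete, the following minimal boto3 sketch provisions the pieces described above: a Virtual Private Gateway attached to a VPC, a Customer Gateway representing the on-premises router, and a standard IPsec connection using dynamic routing. The region, VPC ID, public IP, and ASN are placeholder values for illustration, not figures drawn from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

# Virtual Private Gateway: the AWS-side termination point (1.25 Gbps per tunnel).
vgw = ec2.create_vpn_gateway(Type="ipsec.1", AmazonSideAsn=64512)["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",          # placeholder VPC
                       VpnGatewayId=vgw["VpnGatewayId"])

# Customer Gateway: the on-premises router, identified by a static public IP and BGP ASN.
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.10",      # placeholder static IP
                                  BgpAsn=65010)["CustomerGateway"]

# Standard Site-to-Site VPN connection terminating on the VGW, using BGP (dynamic) routing.
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw["CustomerGatewayId"],
                                VpnGatewayId=vgw["VpnGatewayId"],
                                Options={"StaticRoutesOnly": False})
print(vpn["VpnConnection"]["VpnConnectionId"])
```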
Scaling Enterprise Throughput with 5 Gbps Large Bandwidth Tunnels and ECMP
AWS networking guidance (aws.amazon.com/blogs/networking-and-content-delivery/aws-network-optimization-tips/) shows that aggregating tunnels via ECMP on a Transit Gateway achieves up to 50 Gbps of throughput. This mechanism combines multiple Large Bandwidth Tunnels, each supporting 5 Gbps, to bypass the 1.25 Gbps ceiling of standard Site-to-Site VPNs. The architecture demands a Transit Gateway attachment because Virtual Private Gateways lack the necessary path-scaling capabilities for high-volume flows. Enterprises previously forced to deploy AWS Direct Connect for links exceeding 1 Gbps now find native IPsec sufficient for primary connectivity. Operational complexity in route management and failover logic increases with this throughput gain. Operators must configure BGP to distribute prefixes evenly across all active tunnel paths, or asymmetric traffic patterns will degrade performance. Misconfigured BGP local preferences can cause suboptimal path selection, negating the bandwidth benefits entirely. InterLIR analysis indicates that without strict adherence to equal-cost path rules, network convergence times increase notably during partial outages.
| Feature | Standard Tunnel | Large Bandwidth Tunnel |
|---|---|---|
| Max Throughput | 1.25 Gbps | 5 Gbps |
| ECMP Support | Yes (TGW) | Yes (TGW) |
| Gateway Type | VGW or TGW | TGW Only |
| Use Case | Branch offices | Data center replacement |
Network architects see a clear migration path away from hardware-dependent MPLS circuits. Accelerated VPN functionality optimizes traffic by routing it over the AWS global backbone rather than the public internet. Latency-sensitive applications benefit from reduced jitter without requiring physical fiber extensions into the cloud region. Deployment strategies must prioritize BGP tuning over raw bandwidth provisioning to realize these gains. Failure to align on-premises router configurations with AWS-side ECMP requirements results in unused capacity. The constraint remains the strict requirement for TGW, forcing VGW users to migrate architectures before scaling.
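On the Transit Gateway side, enabling ECMP is a creation-time option. The boto3 sketch below shows one way to create a TGW with VPN ECMP support and default route propagation enabled, assuming placeholder values for the region, description, and AWS-side ASN; Large Bandwidth tunnel sizing itself is set per VPN connection and is not shown here.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

# Transit Gateway with ECMP enabled so multiple VPN tunnels advertising the same
# prefixes over BGP are installed as parallel equal-cost next-hops.
tgw = ec2.create_transit_gateway(
    Description="hybrid-vpn-hub",                 # placeholder name
    Options={
        "AmazonSideAsn": 64512,                   # placeholder AWS-side ASN
        "VpnEcmpSupport": "enable",               # required for tunnel aggregation
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]
print(tgw["TransitGatewayId"], tgw["State"])
```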
Comparing Per-Tunnel Bandwidth Limits: VGW vs Large VPN vs Concentrator
Decision Framework and Technical Specifications data shows per-tunnel bandwidth caps at 1.25 Gbps for standard VGW links, rising to 5 Gbps with Large VPN configurations. This disparity dictates architecture selection for high-throughput hybrid migrations requiring substantial data transfer capacity. Standard Virtual Private Gateways constrain packet processing to 140,000 PPS, whereas Large Bandwidth Tunnels sustain 400,000 PPS according to the same data. Organizations deploying VPN Concentrator solutions face a hard ceiling of 100 Mbps per site despite shared aggregate limits. The table below contrasts these operational parameters across the current portfolio options.
| Feature | VGW Standard | TGW Standard | Large VPN | Concentrator |
|---|---|---|---|---|
| Max Tunnel Speed | 1.25 Gbps | 1.25 Gbps | 5 Gbps | 100 Mbps per site |
| Packet Rate | 140,000 PPS | 140,000 PPS | 400,000 PPS | 10,000 PPS |
| ECMP Support | No | Yes | Yes | No |
| Topology | Hub-and-Spoke | Star | Star | Aggregated |
ECMP support remains absent in VGW designs, forcing reliance on single-tunnel throughput limits. Choosing a concentrator for branch offices sacrifices individual site performance for management simplicity. InterLIR analysis indicates that misaligning tunnel type with application packet profiles causes latent bottlenecks before bandwidth saturation occurs. Operators targeting low-latency financial trades should avoid concentrator architectures due to processing constraints. Ignoring packet-per-second metrics manifests as jitter during burst traffic events. Selecting the correct termination point prevents unnecessary hardware upgrades on-premises.
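The decision logic in this comparison can be captured as a small helper. The sketch below encodes the thresholds quoted in this section (1.25 Gbps standard tunnels, 5 Gbps Large Bandwidth Tunnels, 100 Mbps per concentrator site, and the corresponding PPS ceilings); the function name and return strings are illustrative shorthand, not AWS constructs.

```python
def recommend_termination(sites: int, per_site_mbps: float, per_site_pps: int) -> str:
    """Map site count and per-site load to a termination model using this article's limits."""
    if per_site_mbps <= 100 and per_site_pps <= 10_000 and sites <= 100:
        return "VPN Concentrator (shared aggregate, BGP only)"
    if per_site_mbps <= 1_250 and per_site_pps <= 140_000:
        # Standard tunnels: VGW only while the site count stays small and ECMP is not needed.
        return "Standard VPN on VGW" if sites <= 10 else "Standard VPN on Transit Gateway"
    if per_site_mbps <= 5_000 and per_site_pps <= 400_000:
        return "Large Bandwidth Tunnel on Transit Gateway"
    return "Aggregate Large Bandwidth Tunnels via ECMP, or evaluate Direct Connect"

print(recommend_termination(sites=3, per_site_mbps=3_000, per_site_pps=250_000))
# -> Large Bandwidth Tunnel on Transit Gateway
```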
Mechanics of Throughput Scaling and Routing Protocols
ECMP Packet Distribution Mechanics Across VPN Tunnels
Aggregating tunnels via ECMP on a Transit Gateway achieves 50 Gbps of throughput. AWS Transit Gateway distributes traffic across these parallel paths using a hash of source IP, destination IP, protocol, and port fields. This per-flow mechanism guarantees packet order within a single conversation while utilizing all available tunnel capacity. Operators configuring BGP must assign unique Autonomous System Numbers to distinct on-premises endpoints to prevent path selection loops during convergence events. Strict bandwidth ceilings emerge in low-flow environments due to the hashing algorithm. A site generating only ten large database replication streams will fail to saturate a 5 Gbps aggregate link regardless of tunnel count. Flow starvation occurs when active conversation counts remain below the number of available ECMP next-hops.
Network architects must validate application flow diversity before deploying high-bandwidth aggregates. Static routing configurations lack the dynamic path withdrawal capabilities inherent to BGP, risking blackholes during partial tunnel outages. Throughput scales with conversation count rather than tunnel quantity alone because the system relies on flow entropy.
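To illustrate why flow count rather than tunnel count governs utilization, the sketch below models per-flow hashing over the 5-tuple described above. The hash function stands in for AWS's internal, undocumented algorithm, so the exact spread is illustrative only; the pattern of starved paths under low flow counts is the point.

```python
import hashlib
from collections import Counter

def pick_tunnel(src_ip, dst_ip, proto, src_port, dst_port, tunnel_count):
    """Per-flow ECMP: hash the 5-tuple, then pick one of the equal-cost tunnels."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % tunnel_count

tunnels = 8  # e.g. four connections with both tunnels active via ECMP

# Ten long-lived replication flows: too little entropy to use all eight paths.
few_flows = Counter(pick_tunnel("10.0.1.5", "10.8.0.9", 6, 40_000 + i, 5432, tunnels)
                    for i in range(10))
# Five thousand short flows: enough entropy to spread load roughly evenly.
many_flows = Counter(pick_tunnel("10.0.1.5", "10.8.0.9", 6, 40_000 + i, 443, tunnels)
                     for i in range(5_000))

print("10 flows   ->", dict(few_flows))    # several tunnels likely carry nothing
print("5000 flows ->", dict(many_flows))   # close to even distribution
```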
Deploying Large Bandwidth Tunnels for Manufacturing Workloads
A manufacturing entity with three sites requires up to 5 Gbps via ECMP using Large VPN plus TGW. This architecture replaces standard VGW links that cap per-tunnel throughput at ceilings unsuitable for industrial telemetry bursts. The mechanism relies on aggregating multiple Large Bandwidth Tunnels across a Transit Gateway to sustain high packet rates without fragmentation delays. These tunnels process up to 400,000 PPS, vastly exceeding the capacity of standard concentrators handling factory-floor sensor floods. The design does not permit Customer Gateways with dynamic IPs. Static endpoint configurations complicate failover scripts during ISP outages. The raw throughput gain comes with a loss of the flexible edge addressing schemes common in legacy plant networks. If operators neglect even BGP prefix distribution, load spreads unevenly: one link saturates while others remain underutilized. Troubleshooting such bottlenecks requires inspecting BGP path attributes rather than simple interface counters. Rigid edge requirements are the constraint exchanged for massive pipe availability.
BGP Route Propagation Errors and Tunnel Redundancy Limits
Exceeding the 10,000 PPS ceiling on Concentrator units triggers immediate packet loss during BGP convergence storms, according to Decision Framework and Technical Specifications data. Standard Virtual Private Gateways lack Equal-Cost Multi-Path routing, so all traffic forces through a single active tunnel even when alternate paths exist. Route propagation delays directly saturate the available pipe because of this architectural constraint. Operators relying on dynamic updates for hundreds of prefixes risk blackholing valid traffic if the sole tunnel stalls. The VPN Concentrator shares bandwidth across up to 100 sites, so one noisy neighbor can starve adjacent connections of necessary control-plane capacity. A single flapping peer consumes the entire processing budget of a concentrator when these limits are ignored. Partitioning sites across separate concentrators prevents local routing instability from cascading into a regional outage.
Strategic Application of VPN Solutions Across Enterprise Scenarios
Defining AWS VPN Architectural Constraints for Startups and SMBs

Scenario 1: according to Standard VPN with VGW for Startups and SMBs, a single VPC supporting two offices generating 200–400 Mbps of sustained traffic fits the Standard VPN profile. This architecture relies on a Virtual Private Gateway to terminate IPsec tunnels without the complexity of a Transit Gateway. Cloudviz.io analysis indicates that for single-VPC scenarios, direct VGW attachment remains less expensive than introducing a Transit Gateway attachment layer. The 10-connection limit on the VGW creates a hard ceiling for branch expansion, forcing an early architectural migration if site count grows. A standard connection running continuously costs approximately $36 per month excluding data transfer, establishing a predictable baseline for small budgets. However, this cost efficiency trades off against feature availability; VGW lacks the ECMP and IPv6 outer tunnel support found in larger deployments. Operators must weigh the immediate savings against the operational debt of migrating to a Transit Gateway when exceeding ten sites or requiring advanced routing policies. The 100-dynamic-route constraint further restricts prefix advertisement, limiting network segmentation as the organization scales.
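A quick arithmetic check of that monthly baseline, assuming the commonly cited $0.05 per connection-hour charge for Site-to-Site VPN and excluding data transfer, looks like this:

```python
# Hedged cost sketch: the hourly rate is the commonly cited Site-to-Site VPN
# connection charge; verify against current AWS pricing for your region.
HOURLY_RATE = 0.05          # USD per VPN connection-hour (assumption)
HOURS_PER_MONTH = 730       # average month

monthly = HOURLY_RATE * HOURS_PER_MONTH
print(f"~${monthly:.2f}/month per always-on connection")   # roughly $36.50
```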
Meanwhile, scenario 3: as reported by Manufacturing with Three Factories, three sites generate sustained 3 Gbps throughput with spikes reaching 5 Gbps. Standard VPN architectures would require four parallel 1.25 Gbps tunnels per factory aggregated with ECMP to match this capacity, creating twelve tunnels in total that increase BGP state complexity. Large Bandwidth Tunnels consolidate this traffic into a single logical connection per site while maintaining redundancy through dual active tunnels. The mechanism leverages Transit Gateway flow hashing to distribute flows across the available path capacity without requiring manual load-balancing configurations on-premises.
This configuration eliminates the operational overhead of managing twelve individual tunnel states and reduces the risk of asymmetric routing during failure events. However, this architecture mandates a Transit Gateway attachment, as Virtual Private Gateways cannot terminate high-bandwidth tunnel types. The cost implication involves higher hourly gateway charges compared to basic VGW deployments, trading capital expense savings for reduced operational complexity.
InterLIR assessment indicates that migrating from standard tunnels prevents control-plane saturation during SCADA telemetry bursts common in industrial environments. Operators must verify that on-premises edge devices support the increased packet processing rates required for 5 Gb line rates. Failure to validate hardware capabilities results in physical interface bottlenecks regardless of the AWS-side tunnel capacity provisioned.
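The consolidation arithmetic behind this scenario is easy to verify. The sketch below compares the standard-tunnel count needed to cover the 5 Gbps spike against the Large Bandwidth alternative, using only the per-tunnel limits quoted in this article; variable names are illustrative.

```python
import math

PEAK_GBPS_PER_SITE = 5.0
STANDARD_TUNNEL_GBPS = 1.25
LARGE_TUNNEL_GBPS = 5.0
SITES = 3

standard_tunnels = SITES * math.ceil(PEAK_GBPS_PER_SITE / STANDARD_TUNNEL_GBPS)
large_connections = SITES  # one Large Bandwidth connection per factory

print(f"Standard ECMP design: {standard_tunnels} active tunnels to manage")   # 12
print(f"Large Bandwidth design: {large_connections} logical connections")     # 3
```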
Comparing Cost Efficiency: VPN Concentrator vs Individual Site-to-Site Links
Scenario 4: per Retail Chain with 100 Stores, consolidating branches onto a single VPN Concentrator attachment delivers a 60% cost reduction versus individual links. This architecture shifts the economic model from per-connection hourly billing to a shared aggregate capacity pool. Operators replace dozens of Standard VPN tunnels with one high-density termination point on the Transit Gateway. The mechanism aggregates traffic from up to 100 sites into a single logical interface, drastically simplifying the routing table. However, this efficiency forces a strict protocol mandate: static routing is unsupported, requiring every branch router to run BGP. The limitation is architectural rigidity; sites lacking BGP capability cannot join the concentrator cluster without hardware upgrades. The shared bandwidth ceiling also means a single noisy neighbor can impact all connected stores during peak windows. Structured cost reviews of this kind yield immediate financial ROI; the same scenario reports firms such as Burns & McDonnell reducing overall AWS bills by 30% in the first week. The implication for network teams is a binary choice between operational simplicity and protocol complexity.
| Feature | Individual Links | VPN Concentrator |
|---|---|---|
| Routing Protocol | Static or BGP | BGP Only |
| Scaling Method | Linear addition | Shared Aggregate |
| Max Sites | 50 per Region | 100 per Unit |
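The economics behind the 60% figure can be modeled with a short sketch. The per-connection hourly rate below is the commonly cited Site-to-Site VPN charge; the concentrator rate is a pure placeholder chosen only to reproduce the reduction quoted in the scenario, not a published AWS price, so treat the output as an illustration of the billing model rather than a quote.

```python
SITES = 100
HOURS_PER_MONTH = 730

# Individual links: one billed VPN connection per store (assumed $0.05/hour each).
individual = SITES * 0.05 * HOURS_PER_MONTH

# Concentrator: a single shared attachment; the hourly rate here is a placeholder
# tuned to illustrate the ~60% saving claimed in the scenario, not an AWS list price.
concentrator = 2.00 * HOURS_PER_MONTH

saving = 1 - concentrator / individual
print(f"Individual links: ${individual:,.0f}/mo, concentrator: ${concentrator:,.0f}/mo "
      f"({saving:.0%} reduction)")
```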
Implementation Steps for High-Bandwidth and Concentrator Deployments
Prerequisites for Large Bandwidth Tunnel and VPN Concentrator Deployment

Scenario 3: according to Manufacturing with Three Factories, factories sustain 3 Gbps baselines spiking to 5 Gbps, requiring Large Bandwidth Tunnels to avoid twelve parallel standard connections. Operators must delete existing connections because in-place upgrades remain impossible. This destructive requirement forces a maintenance window for any migration from Standard VPN architectures to high-bandwidth variants. Network teams cannot simply toggle a feature flag; they must provision new resources and temporarily sever live traffic paths. The operational consequence is a mandatory cutover event rather than a smooth background transition. In the same scenario, Customer Gateways require fixed public IP addresses to function correctly. Dynamic addressing fails validation checks, rendering DHCP-based edge routers incompatible without static lease reservations. This constraint eliminates floating-IP schemes often used in disaster recovery failover scenarios. Operators relying on dynamic upstream assignments must reconfigure their edge hardware to maintain persistent endpoint identity. The architecture demands stable reachability for the IPsec handshake to complete successfully.
- Verify edge device holds a static public IP address.
- Confirm Transit Gateway attachment exists in the target region.
- Document current BGP ASN and prefix lists for recreation.
- Schedule maintenance window for connection deletion and rebuild.
- Apply new tunnel configuration to the on-premises router.
The rigid dependency on static IPs prevents using certain cost-effective ISP circuits that lack fixed addressing options.
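A minimal boto3 pre-flight sketch for this checklist is shown below: it confirms an available Transit Gateway in the target region, records each Customer Gateway's static IP and BGP ASN for recreation, and lists the connections the cutover will delete. The region is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

# 1. Confirm a Transit Gateway is available in the target region.
tgws = ec2.describe_transit_gateways(
    Filters=[{"Name": "state", "Values": ["available"]}])["TransitGateways"]
assert tgws, "No available Transit Gateway found in this region"

# 2. Record Customer Gateway details (static public IP and BGP ASN) for recreation.
for cgw in ec2.describe_customer_gateways()["CustomerGateways"]:
    print("CGW", cgw["CustomerGatewayId"], cgw["IpAddress"], "ASN", cgw["BgpAsn"])

# 3. List existing VPN connections that the cutover will delete and rebuild.
for vpn in ec2.describe_vpn_connections()["VpnConnections"]:
    print("VPN", vpn["VpnConnectionId"], vpn["State"])
```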
Executing High-Throughput Configurations for Manufacturing and Retail Spikes
Scenario 3: per Manufacturing with Three Factories, factories sustain 3 Gbps baselines spiking to 5 Gbps, requiring Large Bandwidth Tunnels to avoid twelve parallel standard connections. Operators must provision new resources because in-place upgrades are impossible, forcing a destructive cutover event during maintenance windows. The configuration mandates a Transit Gateway attachment since Virtual Private Gateways cannot terminate these high-capacity flows. A critical tension exists between throughput gains and latency sensitivity; Accelerated VPN is unsupported on large tunnels, necessitating separate standard links for real-time SCADA signaling if sub-100 ms jitter is required. Scenario 4: based on Retail Stores, branches generate 20–50 Mbps sustained with spikes to 80 Mbps, fitting the shared capacity model of the VPN Concentrator. This approach aggregates traffic through a single logical interface, eliminating the operational overhead of managing hundreds of individual tunnel states. However, the architecture enforces a strict BGP-only routing policy, excluding any legacy sites reliant on static routes. The cost benefit is clear, yet the $0.05/hour Transit Gateway attachment fee accumulates notably across large hub-and-spoke topologies.
Execute the deployment using the following sequence:
- Delete existing standard connections to clear the path for recreation.
- Create a new Customer Gateway object with a fixed public IP address.
- Attach the Large Bandwidth Tunnel or Concentrator to the Transit Gateway.
- Configure BGP peering sessions with unique Autonomous System Numbers.
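Translated into boto3 calls, that sequence might look like the sketch below. The IDs, public IP, and ASN are placeholders, and the sizing option that distinguishes Large Bandwidth or Concentrator variants is deliberately omitted because its exact API parameter is not covered here; the skeleton shows only the delete-and-recreate flow against the Transit Gateway.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

# 1. Delete the existing standard connection (destructive: schedule a maintenance window).
ec2.delete_vpn_connection(VpnConnectionId="vpn-0123456789abcdef0")   # placeholder ID

# 2. Recreate the Customer Gateway with its fixed public IP and a unique BGP ASN.
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.10",           # placeholder static IP
                                  BgpAsn=65010)["CustomerGateway"]

# 3. Create the new connection directly against the Transit Gateway, dynamic routing only.
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw["CustomerGatewayId"],
                                TransitGatewayId="tgw-0123456789abcdef0",  # placeholder ID
                                Options={"StaticRoutesOnly": False})
print(vpn["VpnConnection"]["VpnConnectionId"])

# 4. BGP peering itself is configured on the on-premises router, not through this API.
```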
Operational Risks: BGP Limitations and Site Count Thresholds in Concentrators
Scenario 4: according to Retail Chain with 100 Stores, static routing is unsupported, forcing every branch to run BGP. This mechanical constraint eliminates simple hub-and-spoke topologies that rely on default routes. Operators must configure neighbor relationships and manage ASNs for each site before traffic flows. The implication is a steep increase in configuration complexity compared to standard IPsec tunnels. A secondary failure mode emerges at scale when the default site limit is reached: per the same scenario, exceeding the 100-site limit requires deploying multiple concentrators or requesting quota increases. Failure to anticipate this ceiling causes connectivity loss for new branches during expansion phases. Network architects must design for horizontal scaling from day one.
InterLIR recommends validating BGP capability on all customer gateways prior to deployment.
- Verify Customer Gateway devices support dynamic BGP peering.
- Calculate total site count against the regional concentrator limit.
- Request quota increases before reaching the default threshold.
- Design route aggregation policies to minimize table size.
The cost of ignoring these thresholds is immediate service denial for new locations.
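Site-count planning against that threshold can be scripted as a simple check. The sketch below is plain capacity arithmetic with the per-concentrator limit assumed from this section; it does not query AWS quotas, and the helper name is illustrative.

```python
import math

def concentrator_plan(current_sites: int, planned_sites: int, per_unit_limit: int = 100):
    """Flag when expansion crosses the per-concentrator site limit (assumed value)."""
    total = current_sites + planned_sites
    units_needed = math.ceil(total / per_unit_limit)
    action = ("request a quota increase or deploy an additional concentrator"
              if total > per_unit_limit else "within the current limit")
    return {"total_sites": total, "concentrators_needed": units_needed, "action": action}

print(concentrator_plan(current_sites=92, planned_sites=15))
# {'total_sites': 107, 'concentrators_needed': 2, 'action': 'request a quota increase ...'}
```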
About
Alexei Krylov Head of Sales at InterLIR brings a unique infrastructure perspective to the discussion on AWS Site-to-Site VPN solutions. While his primary expertise lies in managing global IPv4 resources and BGP routing integrity, his daily work requires deep familiarity with hybrid network architectures that connect on-premise data centers to public clouds. As organizations evaluate the new 5 Gbps tunnel capabilities and VPN Concentrator options, they must simultaneously address critical IP addressing constraints to ensure smooth connectivity. Krylov's experience guiding enterprises through complex network availability challenges allows him to frame VPN selection not just as a bandwidth decision, but as a complete strategy involving clean IP allocation and secure routing. At InterLIR, a leader in transparent IPv4 redistribution, he observes firsthand how reliable cloud connectivity relies on proper IP asset management. This article bridges high-performance AWS networking with the fundamental necessity of reliable, well-managed IP resources for modern enterprise growth.
Conclusion
The market's rapid shift toward cloud-native connectivity exposes a critical fragility in relying on single-tunnel architectures for enterprise growth. While standard IPsec configurations suffice for initial migration, they create an immediate operational bottleneck once throughput demands exceed 1.25 Gbps or site counts approach triple digits. The true cost here is not merely the hourly attachment fee, but the architectural debt incurred by deferring the transition to dynamic routing and horizontal scaling. Organizations attempting to patch legacy hub-and-spoke models with static routes will face inevitable connectivity denial as expansion accelerates through 2034.
You must migrate to Transit Gateway-based topologies with Large Bandwidth Tunnels before your aggregate traffic hits 800 Mbps or your branch count exceeds 80 sites. Waiting until you hit the hard technical ceiling forces a disruptive, emergency re-architecture that risks business continuity. This window for controlled evolution closes rapidly as cloud dependency deepens.
Start by auditing your current Customer Gateway devices this week to confirm BGP support and available ASN pools. Do not assume hardware compatibility; verify firmware capabilities against dynamic routing requirements immediately. If your edge devices cannot sustain dynamic peering, budget for hardware refreshes now rather than reacting to a future outage. Proactive validation of these routing fundamentals is the only way to ensure your network infrastructure scales alongside your business ambitions without catastrophic failure.