API gateways are a critical component of modern architectures. Apigee X is Google Cloud’s API management platform, and it allows users of the legacy Apigee Edge product to leverage Virtual Private Cloud (VPC) products and features, like Cloud Armor and load balancers. With these new security and availability advantages comes a new set of challenges. The biggest challenge is accessing API backends in multiple VPCs, which runs into VPC peering transitivity restrictions: only two VPCs can be peered in sequence, and the Apigee X Runtime already consumes one peering connection. This post discusses overcoming transitivity limits, with an eye on future managed services that solve the problem.

VPC Peering Transitivity Limits

The main challenge with Apigee X is that customers have backend servers in multiple distinct VPCs, which means Apigee X Runtime traffic must traverse multiple VPC peering connections. However, peering is non-transitive: traffic can cross only one peering hop, so the peers of a peered VPC are not reachable.

In the past (Apigee Edge / SaaS), customers did not need to worry about securing their backend APIs across multiple VPCs. VPC-native networking is a new capability in Apigee X compared to the original Apigee SaaS product.

VPC Connectivity Options

VPC peering is the easiest option. The restriction is that, connectivity-wise, only one peering hop is allowed, and that hop is already consumed by the connection to the Apigee X Runtime, which resides in a Google-managed VPC.
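To make that consumed hop concrete, here is a minimal Terraform sketch of the Private Service Access peering that Apigee X attaches to. The network and range names are hypothetical placeholders; the /22 range follows typical Apigee provisioning guidance.

```hcl
# Customer VPC that will peer with the Apigee X Runtime (hypothetical name).
resource "google_compute_network" "apigee_x_vpc" {
  name                    = "apigee-x-vpc"
  auto_create_subnetworks = false
}

# Reserved internal range handed to Google-managed services over PSA.
resource "google_compute_global_address" "apigee_range" {
  name          = "apigee-peering-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 22
  network       = google_compute_network.apigee_x_vpc.id
}

# This single peering connection is the one hop the Apigee X Runtime consumes.
resource "google_service_networking_connection" "apigee_psa" {
  network                 = google_compute_network.apigee_x_vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.apigee_range.name]
}
```

Because this peering already exists, any backend sitting one further peering hop away is unreachable from the runtime.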

Internal Load Balancers (ILBs) fronting managed instance group (MIG) backends are the second option: proxy MIGs in the peered VPC relay traffic onward to other networks. This stays within the peering limits, but the solution is complex to build and operate.

Routing over the open internet is the simplest solution, and Apigee X supports it, but it would violate the security mandates of many Google Cloud customers.

The final option is Cloud VPN. Cloud VPN supports transitivity across multiple VPC network hops, the implementation steps are relatively simple, and traffic between VPCs within Google Cloud does not traverse the open internet.
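As a rough sketch of one side of such a connection, the Terraform below pairs an HA VPN gateway with a Cloud Router. All names, ASNs, and ranges are hypothetical, and only a single tunnel is shown for brevity (an HA pair is recommended in practice); backend-project-a-vpc needs a mirrored gateway, router, tunnel, and peer.

```hcl
variable "vpn_shared_secret" {
  type      = string
  sensitive = true
}

# Backend VPC on the far side of the VPN (hypothetical name).
resource "google_compute_network" "backend_a_vpc" {
  name                    = "backend-project-a-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_ha_vpn_gateway" "backend_a" {
  name    = "backend-a-vpn-gw"
  region  = "us-central1"
  network = google_compute_network.backend_a_vpc.id
}

# apigee_x_vpc is defined in the earlier peering sketch.
resource "google_compute_ha_vpn_gateway" "hub" {
  name    = "apigee-x-vpn-gw"
  region  = "us-central1"
  network = google_compute_network.apigee_x_vpc.id
}

# The Cloud Router advertises the hub subnets plus the Apigee PSA range,
# so the backend VPC learns a return route to the Apigee X Runtime.
resource "google_compute_router" "hub" {
  name    = "apigee-x-router"
  region  = "us-central1"
  network = google_compute_network.apigee_x_vpc.id
  bgp {
    asn               = 64514
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]
    advertised_ip_ranges {
      range = "10.100.0.0/22" # hypothetical Apigee PSA range
    }
  }
}

resource "google_compute_vpn_tunnel" "hub_to_backend_a" {
  name                  = "hub-to-backend-a"
  region                = "us-central1"
  vpn_gateway           = google_compute_ha_vpn_gateway.hub.id
  peer_gcp_gateway      = google_compute_ha_vpn_gateway.backend_a.id
  shared_secret         = var.vpn_shared_secret
  router                = google_compute_router.hub.id
  vpn_gateway_interface = 0
}

resource "google_compute_router_interface" "hub_if0" {
  name       = "hub-if0"
  region     = "us-central1"
  router     = google_compute_router.hub.name
  ip_range   = "169.254.0.1/30"
  vpn_tunnel = google_compute_vpn_tunnel.hub_to_backend_a.name
}

resource "google_compute_router_peer" "backend_a_peer" {
  name            = "backend-a-peer"
  region          = "us-central1"
  router          = google_compute_router.hub.name
  peer_ip_address = "169.254.0.2"
  peer_asn        = 64515
  interface       = google_compute_router_interface.hub_if0.name
}
```

The custom route advertisement is the key detail: the backend VPC must learn a route back to the Apigee X Runtime’s peered range, which plain subnet advertisement alone would miss.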

Apigee X Networking Explained

Apigee X is a Private Service Access (PSA) based service that is deployed and managed in a tenant project, a project tied to the Apigee X Runtime in a one-to-one relationship. The Apigee X deployment natively manages a load balancer in the customer’s VPC for ingress traffic. By default, egress to non-private routes from Apigee X goes through Cloud NAT in the tenant project. The default Apigee X deployment is better suited to network designs that do not use hub-and-spoke.
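For context, this is roughly how an Apigee X organization is attached to the customer VPC in Terraform, reusing the PSA connection sketched earlier. The project ID and analytics region are placeholders, not a prescribed configuration.

```hcl
# Hypothetical sketch: provision the Apigee X organization against the
# peered customer VPC. The PSA connection must exist before the runtime.
resource "google_apigee_organization" "org" {
  project_id         = "apigee-x-project" # hypothetical project ID
  analytics_region   = "us-central1"
  authorized_network = google_compute_network.apigee_x_vpc.id
  runtime_type       = "CLOUD"
  depends_on         = [google_service_networking_connection.apigee_psa]
}
```

The `authorized_network` is the customer VPC that ends up peered with the Google-managed tenant project.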

Single VPC Hub Design

Figure 3 demonstrates the API request flow. External API requests come from the internet and first reach the external HTTPS load balancer (XLB). The XLB uses the Apigee X Runtime as a backend, via a Network Endpoint Group (NEG) that contains the Apigee X Environment Group domain name. From here, the Apigee X Runtime sends the API request over the VPC peering connection to apigee-x-project.

Google Cloud VPN connections allow Virtual Private Clouds (VPCs) to reference each other’s internal resources, with the help of Google Cloud Routers, which advertise each VPC’s IP address ranges. This enables traffic over the Cloud VPN tunnels to reach resources via their private IP addresses. Additionally, Google Cloud DNS peering enables a consumer VPC to resolve hostnames in a producer VPC.
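A minimal sketch of such a DNS peering zone, assuming the hypothetical networks defined above and a made-up private domain:

```hcl
# Hypothetical sketch: a peering zone in apigee-x-vpc (consumer) that sends
# lookups for backend hostnames to backend-project-a-vpc (producer).
resource "google_dns_managed_zone" "backend_a_peering" {
  name       = "backend-a-peering-zone"
  dns_name   = "backend-a.internal." # hypothetical private domain
  visibility = "private"

  # Consumer network where this zone is visible.
  private_visibility_config {
    networks {
      network_url = google_compute_network.apigee_x_vpc.id
    }
  }

  # Producer network whose Cloud DNS answers the queries.
  peering_config {
    target_network {
      network_url = google_compute_network.backend_a_vpc.id
    }
  }
}
```

Only DNS queries cross this peering; the API traffic itself follows the VPN routes, as the next paragraph describes.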

From the apigee-x-vpc (consumer), the API request resolves a backend-project-a-vpc (producer) resource hostname via DNS peering. This would be the backend identified in an Apigee X API proxy. The request itself does not traverse the DNS peering connection; instead, the API request goes through a Cloud VPN tunnel. In this way, Cloud VPN is used as a substitute for VPC peering.

Figure 3: Single VPC hub design

Shared VPC Hub Design

A more advanced example is a shared VPC hub project, illustrated in Figure 4. The general idea is the same: use Cloud VPN connections instead of peering connections, advertise routes using Cloud Router, and use DNS peering to reference compute resources by their hostnames.

What is interesting is that the apigee-x-project can use a subnet in the shared VPC independently of the hub-project that hosts that shared VPC. Shared VPC subnets do not count against the VPC peering transitivity limit (only two peered VPCs in sequence). This architecture pattern can be scaled to connect multiple shared VPCs via VPN tunnels, analogous to a cascading hub-and-spoke pattern: the inner hub-and-spoke is the shared VPC (with the apigee-x-project) and the outer hub-and-spoke is the set of VPN-connected backend-projects.
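A sketch of the shared VPC attachment itself, with hypothetical project IDs:

```hcl
# Hypothetical sketch: hub-project hosts the shared VPC, and apigee-x-project
# attaches as a service project so it can use subnets from the host network.
resource "google_compute_shared_vpc_host_project" "hub" {
  project = "hub-project"
}

resource "google_compute_shared_vpc_service_project" "apigee_x" {
  host_project    = google_compute_shared_vpc_host_project.hub.project
  service_project = "apigee-x-project"
}
```

The Apigee PSA peering is then created against the shared VPC network in the host project, leaving the service project free of any peering of its own.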

Figure 4: Shared VPC hub design

Fixing VPC Peering with Private Service Connect

Private Service Connect (PSC) allows private consumption of services across VPC networks. This feature is coming to Apigee X in the first half of 2022. Private Service Access (PSA) is what enables reaching services’ internal IP addresses today; Apigee X is a PSA service, hence its ability to connect to backends by their internal IP addresses. PSC is the missing piece that will natively support Apigee X backends in other VPCs.
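The Apigee X integration had not shipped at the time of writing, so its exact configuration may differ, but the generic PSC building blocks already exist. Below is a hedged sketch of a producer publishing a service attachment and a consumer VPC reaching it through a PSC endpoint; all names are hypothetical, and the producer’s internal load balancer is assumed to exist (its definition is omitted for brevity).

```hcl
# Producer side: a NAT subnet reserved for PSC in the backend VPC.
resource "google_compute_subnetwork" "psc_nat" {
  name          = "backend-a-psc-nat"
  region        = "us-central1"
  network       = google_compute_network.backend_a_vpc.id
  ip_cidr_range = "10.200.0.0/24"
  purpose       = "PRIVATE_SERVICE_CONNECT"
}

# Publish an existing internal LB forwarding rule (definition not shown)
# behind a service attachment.
resource "google_compute_service_attachment" "backend_a" {
  name                  = "backend-a-attachment"
  region                = "us-central1"
  enable_proxy_protocol = false
  connection_preference = "ACCEPT_AUTOMATIC"
  nat_subnets           = [google_compute_subnetwork.psc_nat.id]
  target_service        = google_compute_forwarding_rule.backend_a_ilb.id
}

# Consumer side: a subnet and reserved address in the consumer VPC.
resource "google_compute_subnetwork" "apigee_x_subnet" {
  name          = "apigee-x-subnet"
  region        = "us-central1"
  network       = google_compute_network.apigee_x_vpc.id
  ip_cidr_range = "10.10.0.0/24"
}

resource "google_compute_address" "psc_ip" {
  name         = "backend-a-psc-ip"
  region       = "us-central1"
  subnetwork   = google_compute_subnetwork.apigee_x_subnet.id
  address_type = "INTERNAL"
}

# The PSC endpoint: a forwarding rule targeting the service attachment.
resource "google_compute_forwarding_rule" "psc_endpoint" {
  name                  = "backend-a-psc-endpoint"
  region                = "us-central1"
  network               = google_compute_network.apigee_x_vpc.id
  ip_address            = google_compute_address.psc_ip.id
  target                = google_compute_service_attachment.backend_a.id
  load_balancing_scheme = "" # required empty for PSC consumer endpoints
}
```

Unlike peering or VPN, no routes or address ranges are exchanged: the consumer sees only the single endpoint IP, which is why PSC sidesteps transitivity limits entirely.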

What Next?

Try it out for yourself! Google built and shared a working Terraform example in their GitHub repository. Apigee X opens up Google Cloud VPC products and features to its users, and the VPC transitivity limits do require some work to overcome. Private Service Connect (PSC) will solve Apigee X VPC peering challenges later in 2022. A great next step would be to implement a “wheel” architecture pattern: an evolution of the hub-and-spoke design in which spoke VPCs also peer with each other. Cloud VPN makes this possible today, and PSC will make a wheel more feasible to implement. The advantage of a wheel is that spokes no longer rely on a hub VPC, which helps, for example, when productizing APIs in a multi-tenant environment. More details are available in the following ebook: The API Product Mindset

Resources

You can find the Terraform code in the GitHub repository.