Reference Architecture for Pivotal Cloud Foundry on AWS
This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on Amazon Web Services (AWS). This architecture is valid for most production-grade PCF deployments using three availability zones (AZs).
The following diagram provides an overview of a reference architecture deployment of PCF on AWS using three AZs.
Note: Each AWS subnet must reside entirely within one AZ. As a result, a multi-AZ deployment topology requires a subnet for each AZ.
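For example, the one-subnet-per-AZ layout can be sketched in Terraform as follows. This is a minimal illustration, not the reference Terraform scripts: the resource names, AZs, and CIDR blocks are hypothetical, and `aws_vpc.pcf` is assumed to be defined elsewhere.

```terraform
# Hypothetical per-AZ subnet layout: one /24 per AZ carved from a /22 block.
variable "azs" {
  default = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

resource "aws_subnet" "ert" {
  count             = 3
  vpc_id            = aws_vpc.pcf.id        # assumed VPC resource
  availability_zone = var.azs[count.index]
  # cidrsubnet("10.0.4.0/22", 2, n) yields 10.0.4.0/24, 10.0.5.0/24, 10.0.6.0/24
  cidr_block        = cidrsubnet("10.0.4.0/22", 2, count.index)
}
```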
The following table lists the components that are part of a base reference architecture deployment on AWS with three AZs.
|Component|Reference Architecture Notes|
|---|---|
|Domains & DNS|CF Domain Zones and routes in use by the reference architecture include:|
|Ops Manager|Deployed on one of the three public subnets and accessible by FQDN or through an optional Jumpbox.|
|BOSH Director|Deployed on the infrastructure subnet.|
|Elastic Load Balancers - HTTP, HTTPS, and SSL|Required. Handles incoming HTTP, HTTPS, and SSL traffic and forwards it to the Gorouters. Deployed on all three public subnets.|
|Elastic Load Balancers - SSH|Optional. Provides SSH access to app containers. Deployed on all three public subnets, one per AZ.|
|Gorouters|Accessed through the HTTP, HTTPS, and SSL Elastic Load Balancers. Deployed on all three ERT subnets, one per AZ.|
|Diego Brains|Required. The SSH container access functionality is optional and is enabled through the SSH Elastic Load Balancers. Deployed on all three ERT subnets, one per AZ.|
|TCP Routers|Optional, for TCP routing. Deployed on all three ERT subnets, one per AZ.|
|CF Database|The reference architecture uses AWS RDS. Deployed on all three RDS subnets, one per AZ.|
|Storage Buckets|The reference architecture uses four S3 buckets: buildpacks, droplets, packages, and resources.|
|Service Tiles|Deployed on all three service subnets, one per AZ.|
|Dynamic Services|Reserved for future use. Dynamic services are autoprovisioned by BOSH based on a trigger, such as a request for that service, and are deployed on their own subnet. Pivotal recommends provisioning the multi-tenant dynamic services subnet as a /22 block.|
|Service User & Roles|One IAM role and one IAM user are recommended: the IAM role for Terraform, and the IAM user for Ops Manager and BOSH. Consult the following list:|
|EC2 Instance Quota|A new AWS subscription has a default quota of about 20 EC2 instances, which is not enough to host a multi-AZ deployment. The recommended quota is 100 EC2 instances. AWS requires quota increase requests to specify a primary instance type, which should be t2.micro.|
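As a concrete illustration of the storage buckets row above, the four ERT blobstore buckets could be provisioned with a Terraform sketch like the following. The bucket name prefix is hypothetical; S3 bucket names must be globally unique, so adjust to your environment.

```terraform
# The four ERT blobstore buckets named in the table; the "pcf-" prefix is hypothetical.
variable "bucket_types" {
  default = ["buildpacks", "droplets", "packages", "resources"]
}

resource "aws_s3_bucket" "blobstore" {
  count  = length(var.bucket_types)
  bucket = "pcf-${var.bucket_types[count.index]}"   # e.g. pcf-buildpacks
}
```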
The following table lists the network objects in this reference architecture.
|Network Object|Notes|Estimated Number|
|---|---|---|
|External Public IPs|One per deployment, assigned to Ops Manager.|1|
|Virtual Private Cloud (VPC)|One per deployment. A PCF deployment exists within a single VPC and a single AWS region, but should distribute PCF jobs and instances across three AWS AZs to ensure a high degree of availability.|1|
|Subnets|The reference architecture requires the following subnets:| |
|Route Tables|This reference architecture requires 4 route tables: one for the public subnet, and one each for the 3 private subnets across 3 AZs. Consult the following list: Note: If an EC2 instance has a public IP and sits on a subnet with an Internet gateway attached, it is reachable from the Internet through that public IP; Ops Manager is an example. ERT needs Internet access because it uses an S3 bucket as its blobstore.|4|
|Security Groups|The reference architecture requires 5 Security Groups. For more information, see the Terraform Security Group rules script. The following table describes the Security Group ingress rules:|5|
|Load Balancers|PCF on AWS requires Elastic Load Balancers, which can be configured with multiple listeners to forward HTTP/HTTPS/TCP traffic. Two Elastic Load Balancers are recommended: one to forward HTTP, HTTPS, and SSL traffic to the Gorouters, and one to forward SSH traffic to the Diego Brains. The following table describes the required listeners for each load balancer:|2|
|Jumpbox|Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then use it to access Pivotal Network to download tiles. A Jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP; in these cases, you can SSH into Ops Manager or any other component through the Jumpbox.|1|
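A minimal Terraform sketch of the web-facing Elastic Load Balancer and its HTTP/HTTPS listeners might look like the following. The resource names are hypothetical, and the public subnets and certificate ARN are assumed to be defined elsewhere.

```terraform
resource "aws_elb" "web" {
  name    = "pcf-web-elb"                  # hypothetical name
  subnets = aws_subnet.public[*].id        # assumed public subnets, one per AZ

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }

  listener {
    lb_port            = 443
    lb_protocol        = "https"
    instance_port      = 80
    instance_protocol  = "http"
    ssl_certificate_id = var.cert_arn      # assumed ACM or IAM certificate ARN
  }
}
```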
At times, applications on PCF need to access on-premises data. The connection between an AWS VPC and an on-premises datacenter is made through VPN peering. When employing non-VPN peering, there are several points to consider:
- Assign routable IP addresses with the following in mind:
  - It may not be realistic to request multiple routable /22 address spaces, due to IP exhaustion.
  - Using different VPC address spaces can cause snowflake deployments and present difficulties in automation.
- Only make the load balancer, NAT devices, and Ops Manager routable.
- PCF components can route egress through a NAT instance. As a result, operators do not need to assign routable IPs to PCF components.
- Inbound traffic from the datacenter should come through an internal load balancer.
- Outbound traffic to the datacenter should go through AWS NAT instances.
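The outbound path described above can be sketched in Terraform as a private route table whose default route points at a NAT device. A NAT gateway resource is shown here in place of the NAT instances named above, and all resource names are hypothetical.

```terraform
# Private subnets send 0.0.0.0/0 traffic through a NAT device in a public subnet.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.pcf.id                    # assumed VPC resource

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id  # assumed NAT gateway resource
  }
}
```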