Pivotal Cloud Foundry v1.10

Reference Architecture for Pivotal Cloud Foundry on AWS


This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on Amazon Web Services (AWS). This architecture is valid for most production-grade PCF deployments using three availability zones (AZs).

Base Reference Architecture

The following diagram provides an overview of a reference architecture deployment of PCF on AWS using three AZs.

Diagram: Overview of a PCF reference architecture deployment on AWS across three AZs.


Note: Each AWS subnet must reside entirely within one AZ. As a result, a multi-AZ deployment topology requires a subnet for each AZ.
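
To illustrate the per-AZ subnet requirement, the following is a minimal Terraform sketch that defines one public subnet in each of three AZs. The VPC ID variable, CIDR ranges, AZ names, and resource names are illustrative assumptions, not the values used by the Pivotal Terraform scripts.

# Minimal sketch only: one subnet per AZ, because a subnet cannot span AZs.
# The vpc_id variable, CIDR ranges, and AZ names are assumptions for
# illustration; see the Terraform subnets script for the actual definitions.
variable "vpc_id" {}

variable "availability_zones" {
  default = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = "${var.vpc_id}"
  availability_zone = "${element(var.availability_zones, count.index)}"
  cidr_block        = "${cidrsubnet("10.0.0.0/16", 8, count.index)}"
}

The same per-AZ pattern repeats for the ERT, services, and RDS subnets listed later in this guide.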

Base Reference Architecture Components

The following table lists the components that are part of a base reference architecture deployment on AWS with three AZs.

Component | Reference Architecture Notes
Domains & DNS: CF domain zones and routes in use by the reference architecture include:

  • domains for *.apps and *.sys (required)
  • a route for Ops Manager (required)
  • a route for Doppler (required)
  • a route for Loggregator (required)
  • a route for ssh access to app containers (optional)
Using Route 53 to manage domains is optional; a sketch of wildcard Route 53 records follows this table.
Ops Manager: Deployed on one of the three public subnets and accessible by FQDN or through an optional Jumpbox.
BOSH Director: Deployed on the infrastructure subnet.
Elastic Load Balancers - HTTP, HTTPS, and SSL: Required. This load balancer handles incoming HTTP, HTTPS, and SSL traffic and forwards it to the Gorouters. Deployed on all three public subnets.
Elastic Load Balancers - SSH: Optional. This load balancer provides SSH access to app containers. Deployed on all three public subnets, one per AZ.
Gorouters: Accessed through the HTTP, HTTPS, and SSL Elastic Load Balancers. Deployed on all three ERT subnets, one per AZ.
Diego Brains: Required. However, the SSH container access functionality is optional and enabled through the SSH Elastic Load Balancers. Deployed on all three ERT subnets, one per AZ.
TCP Routers: Optional feature that provides TCP routing. Deployed on all three ERT subnets, one per AZ.
CF Database: The reference architecture uses AWS RDS. Deployed on all three RDS subnets, one per AZ.
Storage Buckets: The reference architecture uses four S3 buckets: buildpacks, droplets, packages, and resources.
Service Tiles: Deployed on all three services subnets, one per AZ.
Service Accounts: Two service accounts are recommended: one for Terraform, and one for Ops Manager and BOSH. Consult the following list:

  • Admin Account: Terraform uses this account to provision the required AWS resources as well as an IAM service account.
  • IAM Service Account: This service account is provisioned automatically with access restricted to only the resources that PCF needs. See the AWS IAM Terraform script for more information, and the sketch following this table.
EC2 Instance Quota: The default EC2 instance quota on a new AWS subscription is only around 20 instances, which is not enough to host a multi-AZ deployment. The recommended EC2 instance quota is 100. AWS requires instance quota increase requests to specify a primary instance type; use t2.micro.
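
As an example of the optional Route 53 configuration listed under Domains & DNS above, the following is a minimal, hypothetical Terraform sketch of a wildcard record for the apps domain, aliased to the Gorouter Elastic Load Balancer. The hosted zone ID, domain name, and ELB values are illustrative assumptions.

# Hypothetical sketch: a wildcard record for the apps domain, managed in Route 53
# and aliased to the Gorouter ELB. The zone ID, domain name, and ELB variables
# are assumptions for illustration only.
variable "pcf_zone_id" {}

variable "elb_dns_name" {}

variable "elb_zone_id" {}

resource "aws_route53_record" "wildcard_apps" {
  zone_id = "${var.pcf_zone_id}"
  name    = "*.apps.example.com"
  type    = "A"

  alias {
    name                   = "${var.elb_dns_name}"
    zone_id                = "${var.elb_zone_id}"
    evaluate_target_health = false
  }
}

A matching record for the *.sys domain follows the same pattern; the optional routes for Ops Manager, Doppler, Loggregator, and container SSH access are additional records in the same hosted zone.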
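
To make the two-account recommendation under Service Accounts concrete, the following is a minimal, hypothetical Terraform sketch of a restricted IAM service account, run with the Admin Account's credentials. The user name and the abbreviated policy statement are illustrative assumptions; the actual permissions are defined in the AWS IAM Terraform script.

# Hypothetical sketch: Terraform, running as the Admin Account, provisions a
# restricted IAM user for Ops Manager and BOSH. The policy below is abbreviated
# and illustrative; the real script grants only the actions PCF needs.
resource "aws_iam_user" "pcf_service_account" {
  name = "pcf-service-account"
}

resource "aws_iam_access_key" "pcf_service_account" {
  user = "${aws_iam_user.pcf_service_account.name}"
}

resource "aws_iam_user_policy" "pcf_service_account" {
  name = "pcf-service-account-policy"
  user = "${aws_iam_user.pcf_service_account.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "elasticloadbalancing:Describe*", "s3:*"],
      "Resource": "*"
    }
  ]
}
EOF
}

The generated access key and secret are what Ops Manager and BOSH use to reach AWS, so the Admin Account's credentials never need to be shared with the deployment.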

Network Objects

The following table lists the network objects in this reference architecture.

Network Object | Notes | Estimated Number
External Public IPs: One per deployment, assigned to Ops Manager. Estimated Number: 1
Virtual Private Cloud (VPC): One per deployment. A PCF deployment exists within a single VPC in a single AWS region, but should distribute PCF jobs and instances across three AWS AZs to ensure a high degree of availability. Estimated Number: 1
Subnets: The reference architecture requires the following subnets:
  • 1 x (/24) infrastructure (BOSH Director) subnet
  • 3 x (/24) public subnets (Ops Manager, Elastic Load Balancers, NAT instances), one per AZ
  • 3 x (/20) ERT subnets (GoRouters, Diego Cells, Cloud Controllers, etc.), one per AZ
  • 3 x (/20) services subnets (RabbitMQ, MySQL, Spring Cloud Services, etc.), one per AZ
  • 3 x (/24) RDS subnets (Cloud Controller DB, UAA DB, etc.), one per AZ.
For more information, see the Terraform subnets script.
Estimated Number: 13
Route Tables: This reference architecture requires four route tables: one shared by the public subnets, and one for the private subnets in each of the three AZs. Consult the following list:

  • PublicSubnetRouteTable: This route table enables ingress and egress routes to and from the Internet through the Internet gateway for Ops Manager and the NAT gateway.
  • PrivateSubnetRouteTable: This route table enables egress routing to the Internet through the NAT gateway for the BOSH Director and ERT.
For more information, see the Terraform script that creates the route tables and the script that performs the route table association.

Note: An EC2 instance on a subnet with an Internet gateway attached, and with a public IP assigned, is accessible from the Internet through that public IP; Ops Manager is an example. ERT needs outbound Internet access because it uses an S3 bucket as its blobstore.

Estimated Number: 4
Security Groups: The reference architecture requires five Security Groups. For more information, see the Terraform Security Group rules script. The following table describes the Security Group ingress rules:

Note: The extra port 4443 on the Elastic Load Balancer is needed because the Elastic Load Balancer does not support WebSocket connections over its HTTP/HTTPS listeners.

Security Group | Port | From CIDR | Protocol | Description
OpsMgrSG | 22 | 0.0.0.0/0 | TCP | Ops Manager SSH access
OpsMgrSG | 443 | 0.0.0.0/0 | TCP | Ops Manager HTTPS access
VmsSG | ALL | VPC_CIDR | ALL | Open up connections among BOSH-deployed VMs
MysqlSG | 3306 | VPC_CIDR | TCP | Enable network access to RDS
ElbSG | 80 | 0.0.0.0/0 | TCP | HTTP to Elastic Runtime
ElbSG | 443 | 0.0.0.0/0 | TCP | HTTPS to Elastic Runtime
ElbSG | 4443 | 0.0.0.0/0 | TCP | WebSocket connection to Loggregator endpoint
SshElbSG | 2222 | 0.0.0.0/0 | TCP | SSH connection to containers
Estimated Number: 5
Load Balancers: PCF on AWS requires Elastic Load Balancers, which can be configured with multiple listeners to forward HTTP, HTTPS, and TCP traffic. Two Elastic Load Balancers are recommended: PcfElb, which forwards traffic to the Gorouters, and PcfSshElb, which forwards traffic to the Diego Brain SSH proxy. For more information, see the Terraform load balancers script and the sketch following this table.

The following table describes the required listeners for each load balancer:
ELB | Instance/Port | LB Port | Protocol | Description
PcfElb | gorouter/80 | 80 | HTTP | Forward traffic to Gorouters
PcfElb | gorouter/80 | 443 | HTTPS | SSL termination and forward traffic to Gorouters
PcfElb | gorouter/80 | 4443 | SSL | SSL termination and forward traffic to Gorouters
PcfSshElb | diego-brain/2222 | 2222 | TCP | Forward traffic to Diego Brain for container SSH connections
Each ELB is configured with a health check that monitors the health of the back-end instances:
  • PcfElb checks the health on Gorouter port 80 with TCP
  • PcfSshElb checks the health on Diego Brain port 2222 with TCP
Estimated Number: 2
Jumpbox: Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then set it up to access Pivotal Network to download tiles. A Jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP; in these cases, you can SSH into Ops Manager or any other component through the Jumpbox. Estimated Number: 1
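
The following is a minimal, hypothetical Terraform sketch of PcfElb with the three listeners and the TCP health check described above. The certificate, subnet, and security group variables, along with the health check thresholds, are illustrative assumptions; see the Terraform load balancers script for the actual definitions.

# Hypothetical sketch of PcfElb: three listeners (HTTP 80, HTTPS 443, SSL 4443)
# that forward to the Gorouters on port 80, plus a TCP health check on port 80.
# The variables below are assumptions for illustration only.
variable "public_subnet_ids" {
  type = "list"
}

variable "elb_security_group_id" {}

variable "elb_cert_arn" {}

resource "aws_elb" "pcf_elb" {
  name            = "PcfElb"
  subnets         = "${var.public_subnet_ids}"
  security_groups = ["${var.elb_security_group_id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  listener {
    instance_port      = 80
    instance_protocol  = "http"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = "${var.elb_cert_arn}"
  }

  listener {
    instance_port      = 80
    instance_protocol  = "tcp"
    lb_port            = 4443
    lb_protocol        = "ssl"
    ssl_certificate_id = "${var.elb_cert_arn}"
  }

  health_check {
    target              = "TCP:80"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 10
    unhealthy_threshold = 2
  }
}

PcfSshElb follows the same pattern with a single TCP listener on port 2222 forwarding to the Diego Brain, and a TCP:2222 health check.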