Pivotal Cloud Foundry v1.10

Reference Architecture for Pivotal Cloud Foundry on OpenStack

This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on OpenStack. This architecture is valid for most production-grade PCF deployments in a single project using three availability zones (AZs).

Base Reference Architecture

The following diagram provides an overview of a reference architecture deployment of PCF on OpenStack using three AZs.

[Diagram: Overview of a PCF on OpenStack reference architecture deployment using three AZs]

Base Reference Architecture Components

The following table lists the components that are part of a base reference architecture deployment on OpenStack with three AZs.

Domains & DNS: CF domain zones and routes in use by the reference architecture include:

  • Zones for *.apps and *.sys (required)
  • A route for Ops Manager (required)
  • A route for Doppler (required)
  • A route for Loggregator (required)
  • A route for SSH access to app containers (optional)
  • A route for TCP access to TCP routers (optional)
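The zones and routes above can be sketched as plain data. This is only an illustration: the domain names, the `example.com` zone layout, and all IP addresses are hypothetical placeholders, and your TCP routers may sit behind a separate load balancer.

```python
# Hypothetical DNS layout for the required and optional routes listed above.
APPS_LB_IP = "203.0.113.10"      # placeholder: Application Load Balancer VIP
SSH_LB_IP = "203.0.113.11"       # placeholder: SSH Load Balancer VIP
TCP_LB_IP = "203.0.113.12"       # placeholder: TCP router VIP
OPS_MANAGER_IP = "203.0.113.13"  # placeholder: Ops Manager floating IP

dns_records = {
    "*.apps.example.com":          APPS_LB_IP,      # required zone: app routes
    "*.sys.example.com":           APPS_LB_IP,      # required zone: system routes
    "opsman.example.com":          OPS_MANAGER_IP,  # required: Ops Manager
    "doppler.sys.example.com":     APPS_LB_IP,      # required: Doppler
    "loggregator.sys.example.com": APPS_LB_IP,      # required: Loggregator
    "ssh.sys.example.com":         SSH_LB_IP,       # optional: SSH to app containers
    "tcp.example.com":             TCP_LB_IP,       # optional: TCP routers
}
```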
Ops Manager: Deployed on the infrastructure network and accessible by FQDN or through an optional Jumpbox.
BOSH Director: Deployed on the infrastructure network.
Application Load Balancer: Required. Handles incoming HTTP, HTTPS, TCP, and SSL traffic and forwards it to the Gorouters. Load balancer implementations are outside the scope of this document.
SSH Load Balancer: Optional. Provides developers with SSH access to application containers. Load balancer implementations are outside the scope of this document.
Gorouters: Accessed through the Application Load Balancer. Deployed on the ERT network, one per AZ.
Diego Brains: Required, although the SSH container access functionality they provide is optional and is enabled through the SSH Load Balancer. Deployed on the ERT network, one per AZ.
TCP Routers: Optional, used for TCP routing. Deployed on the ERT network, one per AZ.
CF Database: The reference architecture uses the internal MySQL database.
Storage Buckets: The reference architecture uses a customer-provided blobstore. Buckets are needed for BOSH and Elastic Runtime.
Service Tiles: Deployed on the services network.
Service Accounts: Two service accounts are recommended: one for OpenStack “paving” and one for Ops Manager and BOSH:

  • Admin Account: Concourse will use this account to provision required OpenStack resources as well as a Keystone service account.
  • Keystone Service Account: This service account will be automatically provisioned with restricted access only to resources needed by PCF.
OpenStack Quota: The default compute quota on a new OpenStack subscription is typically not large enough to host a multi-AZ PCF deployment. The recommended instance quota is 100. Your OpenStack network quotas may also need to be increased.
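The quota recommendation above can be checked with a small helper. This is a sketch: the function name is ours, the recommended value of 100 comes from this architecture, and the current limit would come from your cloud (for example, from the output of `openstack quota show`).

```python
RECOMMENDED_INSTANCE_QUOTA = 100  # recommendation from this reference architecture

def instance_quota_shortfall(current_limit, recommended=RECOMMENDED_INSTANCE_QUOTA):
    """Return how far the project's instance quota must be raised to meet
    the recommendation; 0 means the current quota is already sufficient."""
    return max(0, recommended - current_limit)
```

For example, a project whose instance quota is currently 10 would need it raised by 90.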

OpenStack Objects

The following table lists the network objects in this reference architecture.

Floating IPs: Two per deployment: one assigned to Ops Manager, the other to your Jumpbox. Estimated number: 2

Project: One per deployment. A PCF deployment exists within a single project and a single OpenStack region, but should distribute PCF jobs and instances across three OpenStack AZs to ensure a high degree of availability. Estimated number: 1
Networks: The reference architecture requires the following tenant networks:
  • 1 x /24 Infrastructure (Ops Manager, BOSH Director, Jumpbox)
  • 1 x /20 ERT (Gorouters, Diego Cells, Cloud Controllers, and so on)
  • 1 x /20 Services (RabbitMQ, MySQL, Spring Cloud Services, and so on)
  • 1 x /24 On-demand services (various)
An internet-facing network is also required:
  • 1 x Public network

Note: In many cases, the public network is an “under the cloud” network that is shared across projects.

Estimated number: 5
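The tenant network plan above can be sanity-checked with the standard library's `ipaddress` module. The prefix sizes come from the list above; the concrete CIDR addresses below are hypothetical placeholders for whatever ranges you allocate.

```python
import ipaddress

# Hypothetical CIDR assignments; only the prefix sizes (/24, /20, /20, /24)
# come from the reference architecture.
plan = {
    "infrastructure":     ipaddress.ip_network("10.0.0.0/24"),
    "ert":                ipaddress.ip_network("10.0.16.0/20"),
    "services":           ipaddress.ip_network("10.0.32.0/20"),
    "on-demand-services": ipaddress.ip_network("10.0.48.0/24"),
}

# No two tenant networks may overlap.
nets = list(plan.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

# Prefix lengths match the reference architecture.
assert plan["infrastructure"].prefixlen == 24
assert plan["ert"].prefixlen == 20
assert plan["services"].prefixlen == 20
assert plan["on-demand-services"].prefixlen == 24
```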
Routers: This reference architecture requires one router attached to all networks:
  • VirtualRouter: This router enables ingress and egress routes between the Internet and the project networks, and provides SNAT services.
Estimated number: 1
Security Groups: The reference architecture requires two Security Groups. The following table describes the Security Group ingress rules:

Security Group Port From CIDR Protocol Description
OpsMgrSG 22 0.0.0.0/0 TCP Ops Manager SSH access
OpsMgrSG 443 0.0.0.0/0 TCP Ops Manager HTTPS access
VmsSG ALL VPC_CIDR ALL Open up connections among BOSH-deployed VMs

Additional security groups, specific to your chosen load balancing solution, may also be needed.

Estimated number: 5
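The ingress rules above can be expressed as plain data in the shape one might feed to an OpenStack SDK or a Terraform definition. This is a sketch: `VPC_CIDR` stands in for your project's tenant network range, and `None` means "all ports" or "all protocols".

```python
VPC_CIDR = "10.0.0.0/16"  # placeholder for the tenant network range

ingress_rules = [
    {"group": "OpsMgrSG", "protocol": "tcp", "port": 22,
     "cidr": "0.0.0.0/0", "description": "Ops Manager SSH access"},
    {"group": "OpsMgrSG", "protocol": "tcp", "port": 443,
     "cidr": "0.0.0.0/0", "description": "Ops Manager HTTPS access"},
    {"group": "VmsSG", "protocol": None, "port": None,  # all protocols, all ports
     "cidr": VPC_CIDR, "description": "Open up connections among BOSH-deployed VMs"},
]
```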
Load Balancers: PCF on OpenStack requires a load balancer, which can be configured with multiple listeners to forward HTTP, HTTPS, and TCP traffic. Two load balancers are recommended: AppsLB, which forwards traffic to the Gorouters, and SSHLB, which forwards traffic to the Diego Brain SSH proxy.

The following table describes the required listeners for each load balancer:
Name Instance/Port LB Port Protocol Description
AppsLB gorouter/80 80 HTTP Forward traffic to Gorouters
AppsLB gorouter/80 443 HTTPS SSL termination and forward traffic to Gorouters
SSHLB diego-brain/2222 2222 TCP Forward traffic to Diego Brain for container SSH connections
Each load balancer needs a health check to validate the health of its back-end instances:
  • AppsLB checks the health of the Gorouters on port 80 over TCP
  • SSHLB checks the health of the Diego Brains on port 2222 over TCP

Note: In many cases, the load balancers are provided as an “under the cloud” service that is shared across projects.

Estimated number: 2
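The listener and health-check layout above can be written out as plain data. The load balancer names AppsLB and SSHLB come from the text; everything else mirrors the tables, with HTTPS terminating at the load balancer before being forwarded to Gorouter port 80.

```python
listeners = [
    {"lb": "AppsLB", "lb_port": 80,   "protocol": "HTTP",
     "backend": "gorouter",    "backend_port": 80},
    {"lb": "AppsLB", "lb_port": 443,  "protocol": "HTTPS",  # SSL terminated at the LB
     "backend": "gorouter",    "backend_port": 80},
    {"lb": "SSHLB",  "lb_port": 2222, "protocol": "TCP",
     "backend": "diego-brain", "backend_port": 2222},
]

health_checks = {
    "AppsLB": {"protocol": "TCP", "port": 80},    # validates the Gorouters
    "SSHLB":  {"protocol": "TCP", "port": 2222},  # validates the Diego Brains
}

# Every load balancer with listeners also has a health check defined.
assert {listener["lb"] for listener in listeners} == set(health_checks)
```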
Jumpbox: Optional. Provides a way to access different network components. For example, you can configure it with your own permissions and then set it up to access Pivotal Network to download tiles. Using a Jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP address. In these cases, you can SSH into Ops Manager or any other component through the Jumpbox. Estimated number: 1