Pivotal Cloud Foundry v1.9

Reference Architecture for Pivotal Cloud Foundry on Azure


This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on Azure.

Azure does not provide resources in a way that translates directly to PCF availability zones. Instead, Azure provides high availability via fault domains and availability sets.

All reference architectures described in this topic are validated for production-grade PCF deployments using fault domains and availability sets that include multiple job instances.
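An availability set spreads its member VMs across distinct fault and update domains, which is how Azure supplies the redundancy that availability zones provide on other IaaSes. BOSH creates availability sets for you when configured through Ops Manager, but as a rough sketch (resource group, name, and domain counts are all hypothetical), the equivalent Azure CLI call looks like this:

```shell
# Hypothetical names; BOSH normally creates availability sets automatically.
RESOURCE_GROUP="pcf-resource-group"
LOCATION="westus"

# Spread member VMs across 3 fault domains and 5 update domains.
az vm availability-set create \
  --name pcf-ert-availability-set \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5
```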

Base Reference Architecture

The following diagram provides an overview of a reference architecture deployment of PCF on Azure.

[Diagram: Azure overview architecture]

Base Reference Architecture Components

The following table lists the components that are part of a base reference architecture deployment on Azure using a single resource group.

Component Reference Architecture Notes
Domains & DNS CF Domain Zones and routes in use by the reference architecture include:
  • domains for *.apps and *.system (required)
  • a route for Ops Manager (required)
  • a route for Doppler (required)
  • a route for Loggregator (required)
  • a route for SSH access to app containers (optional)
  • a route for TCP routing to apps (optional)
Ops Manager Deployed on the infrastructure subnet and accessible by FQDN or via an optional Jumpbox.
BOSH Deployed on the infrastructure subnet.
Azure Load Balancer - API & Apps Required. Load balancer that handles incoming API and apps requests and forwards them to the Gorouter(s).
Azure Load Balancer - ssh-proxy Optional. Load balancer that provides SSH access to app containers.
Azure Load Balancer - tcp-router Optional. Load balancer that handles TCP routing requests for apps.
Azure Load Balancer - MySQL Required to provide high availability for MySQL backend to Elastic Runtime.
Gorouter(s) Accessed via the API & Apps load balancer. Deployed on the ERT subnet, one job per Azure availability set.
Diego Brain(s) This component is required, however the SSH container access functionality is optional and enabled via the SSH Proxy load balancer. Deployed on the ERT subnet, one job per Azure availability set.
TCP Router(s) Optional feature for TCP routing. Deployed on the ERT subnet, one job per Azure availability set.
MySQL Reference architecture uses internal MySQL provided with PCF. Deployed on the ERT subnet, one job per Azure availability set.
Elastic Runtime Required. Deployed on the ERT subnet, one job per Azure availability set.
Storage Accounts PCF on Azure requires five standard storage accounts: one for BOSH, one for Ops Manager, and three for ERT. Each account comes with a set amount of disk. The reference architecture recommends five storage accounts because each Azure storage account has an IOPS limit of roughly 20,000, which in practice limits each account to about 20 BOSH jobs/VMs.
Service Tiles Deployed on the PCF managed services subnet. Each service tile is deployed to an availability set.
Dynamic Services Reserved for future use, dynamic services are deployed on their own subnet. Dynamic services are services autoprovisioned by BOSH based on a trigger, such as a request for that service.
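The five standard storage accounts described above can be provisioned ahead of the deployment. A minimal sketch with the Azure CLI follows; the account names are hypothetical, and real names must be globally unique, 3-24 characters, lowercase letters and numbers only:

```shell
# Hypothetical resource group and account names; adjust for your deployment.
RESOURCE_GROUP="pcf-resource-group"
LOCATION="westus"

# One account for Ops Manager, one for BOSH, three for ERT.
for ACCOUNT in pcfopsmanstorage pcfboshstorage \
               pcfertstorage1 pcfertstorage2 pcfertstorage3; do
  az storage account create \
    --name "$ACCOUNT" \
    --resource-group "$RESOURCE_GROUP" \
    --location "$LOCATION" \
    --sku Standard_LRS \
    --kind Storage
done
```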

Alternative Network Layouts for Azure

This section describes the possible network layouts for PCF deployments as covered by the reference architecture of PCF on Azure.

At a high level, there are currently two possible ways of deploying PCF as described by the reference architecture:

  1. Single resource group, or
  2. Multiple resource groups.

The first scenario is currently outlined in the existing installation documentation for Azure deployments of PCF. It models a single PCF deployment in a single Azure Resource Group.

If you require multiple resource groups, you may refer to the Multiple Resource Group deployment section.

Network Layout

This diagram illustrates the network topology of the base reference architecture for PCF on Azure. In this deployment, you expose only a minimal number of public IPs and deploy only one resource group.

[Diagram: Azure base deployment network topology]

Network Objects

The following table lists the network objects in PCF on Azure reference architecture.

Network Object Notes Estimated Number
External Public IPs Use public IPs for:
  1. global access to apps and system (via the API & Apps load balancer),
  2. Ops Manager (or the optional Jumpbox).
Optionally, you can use public IPs for the ssh-proxy and tcp-router load balancers. 2+
Virtual Network One per deployment. Azure virtual network objects allow multiple subnets with multiple CIDRs, so a typical deployment of PCF will likely only ever require one Azure Virtual Network object. 1
Subnets Separate subnets for
  1. infrastructure (Ops Manager, Ops Manager Director, Jumpbox),
  2. ERT,
  3. services,
  4. and dynamic services.
Using separate subnets allows you to configure different firewall rules to suit your needs.
Routes Routes are typically created by Azure dynamically when subnets are created, but you may need to create additional routes to force outbound communication to dedicated SNAT nodes. These objects are required to deploy PCF without public IP addresses. 3+
Firewall Rules Azure firewall rules are collected into a Network Security Group (NSG), bound to a Virtual Network object, and can match the source and destination fields of a rule by IP range, subnet, or instance tag. One NSG can be used for all firewall rules. 12
Load Balancers Used to handle requests to Gorouters and infrastructure components. Azure deployments use one or more load balancers. The API & Apps load balancer is required. The TCP Router load balancer (for the TCP routing feature) and the SSH load balancer (for SSH access to Diego apps) are optional, as is the MySQL load balancer, which provides high availability for MySQL. 1-4
Jumpbox Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then use it to access Pivotal Network to download tiles. A Jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP; in these cases, you can SSH into Ops Manager or any other component through the Jumpbox. 1
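The Virtual Network and Subnets rows above can be sketched with the Azure CLI. The network and subnet names and the CIDR ranges below are illustrative only; choose ranges that fit your environment:

```shell
# Hypothetical names and illustrative CIDRs; one virtual network,
# four subnets (infrastructure, ERT, services, dynamic services).
RESOURCE_GROUP="pcf-resource-group"
VNET="pcf-virtual-network"

az network vnet create \
  --name "$VNET" \
  --resource-group "$RESOURCE_GROUP" \
  --address-prefixes 10.0.0.0/16

for SUBNET_SPEC in "pcf-infrastructure-subnet:10.0.4.0/26" \
                   "pcf-ert-subnet:10.0.12.0/22" \
                   "pcf-services-subnet:10.0.8.0/22" \
                   "pcf-dynamic-services-subnet:10.0.16.0/22"; do
  SUBNET_NAME="${SUBNET_SPEC%%:*}"   # text before the colon
  SUBNET_CIDR="${SUBNET_SPEC##*:}"   # text after the colon
  az network vnet subnet create \
    --name "$SUBNET_NAME" \
    --vnet-name "$VNET" \
    --resource-group "$RESOURCE_GROUP" \
    --address-prefixes "$SUBNET_CIDR"
done
```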

Multiple Resource Group Deployment

This diagram illustrates the case where you want to use additional resource groups in your PCF deployment on Azure.

Shared network resources may already exist in an Azure subscription. In this type of deployment, using multiple resource groups allows you to reuse existing resources instead of provisioning new ones.

To use multiple resource groups, you need to provide the BOSH Service Principal with access to the existing network resources.
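One way to grant that access is a role assignment scoped to the existing network resource group. A minimal sketch with the Azure CLI follows; the subscription ID, resource group name, and service principal ID are hypothetical, and the built-in Reader role shown here is broader than the custom read-only role defined later in this topic:

```shell
# Hypothetical identifiers; substitute your own values.
SUBSCRIPTION_ID="your-subscription-id"
NETWORK_RESOURCE_GROUP="pcf-network-resource-group"
BOSH_SP_APP_ID="your-bosh-service-principal-app-id"

# Grant the BOSH Service Principal read access to the shared
# network resource group only, not the whole subscription.
az role assignment create \
  --assignee "$BOSH_SP_APP_ID" \
  --role "Reader" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$NETWORK_RESOURCE_GROUP"
```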

[Diagram: Azure multiple resource group deployment]

Multiple Resource Groups Deployment Notes

To deploy PCF on Azure with multiple resource groups, you can define custom roles to grant resource group access to your BOSH Service Principal. For example, you might develop the following:

  • A dedicated Network Resource Group limits the BOSH Service Principal so that it does not have admin access to network objects.

  • A custom role for the BOSH Service Principal, applied to the Network Resource Group, limits the BOSH Service Principal to minimum read-only access.

    {
        "Name": "PCF Network Read Only",
        "IsCustom": true,
        "Description": "MVP PCF Read Network Resgroup",
        "Actions": [
            "Microsoft.Network/publicIPAddresses/read",
            "Microsoft.Network/publicIPAddresses/join/action"
        ],
        "NotActions": [],
        "AssignableScopes": ["/subscriptions/[YOUR_SUBSCRIPTION_ID]"]
    }

    The two publicIPAddresses actions are only required if you use public IPs.

  • A custom role for the BOSH Service Principal, applied to the subscription, allows the operator to deploy PCF components.

    {
        "Name": "PCF Deploy Min Perms",
        "IsCustom": true,
        "Description": "MVP PCF Terraform Perms",
        "Actions": [
            ...
        ],
        "NotActions": [],
        "AssignableScopes": ["/subscriptions/[YOUR_SUBSCRIPTION_ID]"]
    }

    The Actions list is elided here; populate it with the actions your deployment requires.
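A custom role definition like the ones above can be registered and assigned with the Azure CLI. In this sketch, the file name, subscription ID, resource group, and service principal ID are all hypothetical; save the JSON definition to a local file first:

```shell
# Hypothetical identifiers; substitute your own values.
SUBSCRIPTION_ID="your-subscription-id"
NETWORK_RESOURCE_GROUP="pcf-network-resource-group"
BOSH_SP_APP_ID="your-bosh-service-principal-app-id"

# Register the custom role from a local JSON file containing
# the "PCF Network Read Only" definition.
az role definition create --role-definition @pcf-network-read-only.json

# Assign the custom role to the BOSH Service Principal, scoped
# to the network resource group.
az role assignment create \
  --assignee "$BOSH_SP_APP_ID" \
  --role "PCF Network Read Only" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$NETWORK_RESOURCE_GROUP"
```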
