High Availability in Cloud Foundry

This topic explains how to configure Cloud Foundry (CF) for high availability (HA), and how Cloud Foundry is designed to ensure high availability at multiple layers.

Note: In PCF v1.11 and later, Elastic Runtime defaults to a highly available resource configuration. However, you may need to complete additional procedures described below to make your deployment highly available.

Configuring High Availability

This section describes how to configure system components to ensure high availability. You accomplish this by scaling component VMs and locating them in multiple Availability Zones (AZs), so that their redundancy and distribution minimizes downtime during ongoing operation, product updates, and platform upgrades.

Scaling component VMs means changing the number of VM instances dedicated to running a functional component of the system. Scaling usually refers to increasing this number; scaling down or scaling back means decreasing it.

Deploying or scaling applications to at least two instances per app also helps maintain high availability. For information about scaling applications and maintaining app uptime, see the Scaling an Application Using cf scale and Using Blue-Green Deployment to Reduce Downtime and Risk topics.

Availability Zones

During product updates and platform upgrades, the VMs in a deployment restart in succession, rendering them temporarily unavailable. During outages, VMs go down in a less orderly way. Spreading components across Availability Zones and scaling them to a sufficient level of redundancy maintains high availability during both upgrades and outages and can ensure zero downtime.

Deploying Cloud Foundry across three or more AZs and assigning multiple component instances to different AZ locations lets a deployment operate uninterrupted when entire AZs become unavailable. Cloud Foundry maintains its availability as long as a majority of the AZs remain accessible. For example, a three-AZ deployment stays up when one entire AZ goes down, and a five-AZ deployment can withstand an outage of up to two AZs with no impact on uptime.
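The majority rule above can be expressed as a small calculation. This helper is purely illustrative (it is not part of Cloud Foundry or BOSH):

```python
# Illustrative sketch: how many whole-AZ outages a deployment tolerates
# while a majority of AZs remains accessible. Actual tolerance also
# depends on how component instances are distributed across the AZs.

def tolerated_az_failures(az_count: int) -> int:
    """A majority of AZs must stay up, so fewer than half may fail."""
    return (az_count - 1) // 2

for azs in (3, 5):
    print(azs, "AZs tolerate", tolerated_az_failures(azs), "AZ outage(s)")
```

This matches the examples in the text: a three-AZ deployment survives one AZ outage, and a five-AZ deployment survives two.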

Vertical and Horizontal Scaling

You can scale platform capacity vertically by adding memory and disk, or horizontally by adding more VMs running instances of Cloud Foundry components.

To scale vertically, ensure that you allocate and maintain enough of the following:

  • Free space on host Diego cell VMs so that apps expected to deploy can successfully be staged and run.
  • Disk space and memory in your deployment such that if one host VM goes down, all instances of apps can still be placed on the remaining host VMs.
  • Free space to handle one AZ going down if deploying in multiple AZs.
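The headroom requirements above can be sanity-checked with a rough placement simulation. This is an illustrative sketch using first-fit placement over per-cell free memory; the real Diego placement algorithm is more sophisticated, and the numbers below are hypothetical:

```python
# Illustrative capacity check (not a Cloud Foundry tool): verify that all
# app instances still fit if any single Diego cell VM is lost.

def fits_after_one_cell_failure(cell_free_mb: list[int],
                                instance_mb: list[int]) -> bool:
    """Simulate losing each cell in turn and first-fit the instances
    onto the remaining cells, largest instances first."""
    for down in range(len(cell_free_mb)):
        free = sorted((m for i, m in enumerate(cell_free_mb) if i != down),
                      reverse=True)
        for inst in sorted(instance_mb, reverse=True):
            for i, f in enumerate(free):
                if f >= inst:
                    free[i] -= inst
                    break
            else:
                return False  # no remaining cell can hold this instance
    return True

# Three cells with 4 GB free each; six 1 GB app instances still fit
# with any single cell down (2 cells x 4 GB >= 6 GB).
print(fits_after_one_cell_failure([4096] * 3, [1024] * 6))  # True
```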

Scaling the components listed below horizontally also increases your capacity to host applications. The nature of the applications you host on Cloud Foundry should determine whether to scale vertically or horizontally.

Scalable Components

You can horizontally scale most Cloud Foundry components to multiple instances to achieve the redundancy required for high availability.

You should also distribute the instances of components scaled to multiple instances across different AZs. If you use more than three AZs, ensure that you use an odd number of AZs.

If you do not require a high-availability deployment (for example, you are deploying a proof-of-concept or sandbox), you should scale down job instances to the minimum required for a functional deployment.

For more information regarding zero downtime deployment, see the Scaling Instances in Elastic Runtime topic.

The following table provides recommended instance counts for a high-availability deployment and the minimum instances for a functional deployment:

| Elastic Runtime Job | Recommended Instances for HA | Minimum Instances | Notes |
| --- | --- | --- | --- |
| Diego Cell | ≥ 3 | 1 | The optimal balance between CPU/memory sizing and instance count depends on the performance characteristics of the apps that run on Diego cells. Scaling vertically with larger Diego cells creates larger points of failure: more apps go down when a cell fails. On the other hand, scaling horizontally decreases the speed at which the system rebalances apps. Rebalancing 100 cells takes longer and demands more processing overhead than rebalancing 20 cells. |
| Diego Brain | ≥ 2 | 1 | For high availability, use at least one per AZ, or at least two if you have only one AZ. |
| Diego BBS | ≥ 3 | 1 | Set this to an odd number equal to or one greater than the number of AZs you have, in order to maintain quorum. Distribute the instances evenly across the AZs, with at least one instance per AZ. |
| Consul | ≥ 3 | 1 | Set this to an odd number equal to or one greater than the number of AZs you have, in order to maintain quorum. Distribute the instances evenly across the AZs, with at least one instance per AZ. |
| MySQL Server | 3 | 1 | If you use an external database in your deployment, you can set the MySQL Server instance count to 0. For instructions on how to scale down an internal MySQL cluster, see Scaling Down Your MySQL Cluster. |
| MySQL Proxy | 2 | 1 | If you use an external database in your deployment, you can set the MySQL Proxy instance count to 0. |
| NATS Server | ≥ 2 | 1 | You might run a single NATS instance if your deployment lacks the resources to deploy two stable NATS servers. Components that use NATS are resilient to message failures, and the BOSH Resurrector recovers the NATS VM quickly if it becomes unresponsive. |
| Cloud Controller | ≥ 2 | 1 | Scale the Cloud Controller to accommodate the number of requests to the API and the number of apps in the system. |
| Clock Global | ≥ 2 | 1 | For a high availability deployment, scale the Clock Global job to a value greater than 1 or to the number of AZs you have. |
| Router | ≥ 2 | 1 | Scale the router to accommodate the number of incoming requests. Additional instances increase available bandwidth. In general, this load is much less than the load on Diego cells. |
| HAProxy | 0 or ≥ 2 | 0 or 1 | For environments that require high availability, you can scale HAProxy to 0 and configure a high-availability load balancer (LB) to point directly to each Gorouter instance. Alternatively, you can configure the LB to point to HAProxy scaled to two or more instances. Either way, an LB is required to host Cloud Foundry domains at a single IP address. |
| UAA | ≥ 2 | 1 | |
| Doppler Server | ≥ 2 | 1 | Deploying additional Doppler servers splits traffic across them. For a high availability deployment, Pivotal recommends at least two per AZ. |
| Loggregator Traffic Controller | ≥ 2 | 1 | Deploying additional Loggregator Traffic Controllers allows you to direct traffic to them in a round-robin manner. For a high availability deployment, Pivotal recommends at least two per AZ. |
| etcd Server | ≥ 3 | 1 | Set this to an odd number equal to or one greater than the number of AZs you have, in order to maintain quorum. Distribute the instances evenly across the AZs, with at least one instance per AZ. |
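The sizing rule the table gives for the quorum-based jobs (Diego BBS, Consul, and etcd Server) can be sketched as a small helper. The minimum of three instances for HA is taken from the table; the helper itself is illustrative, not a Cloud Foundry API:

```python
# Illustrative sketch of the quorum sizing rule: an odd instance count
# equal to or one greater than the number of AZs, with a floor of three
# for high availability.

def quorum_instance_count(az_count: int) -> int:
    """Smallest odd count >= az_count, but never fewer than 3 for HA."""
    odd = az_count if az_count % 2 == 1 else az_count + 1
    return max(3, odd)

for azs in (1, 2, 3, 4, 5):
    print(azs, "AZs ->", quorum_instance_count(azs), "instances")
```

An odd count matters because quorum systems need a strict majority of members to agree; an even count adds cost without increasing the number of failures tolerated.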

Blob Storage

For storing blobs (large binary files), the best approach for high availability is to use external storage such as Amazon S3 or an S3-compatible service.

If you store blobs internally using WebDAV or NFS, these components run as single instances and you cannot scale them. For these deployments, use the high availability features of your IaaS to immediately recover your WebDAV or NFS server VM if it fails. Contact Pivotal Support if you need assistance.

The singleton compilation components do not affect platform availability.

Supporting Component Scaling

Ops Manager Resurrector

Enable the Ops Manager Resurrector so that failed component VMs are automatically recreated.

Resource Pools

Configure your resource pools according to the requirements of your deployment.

Each IaaS has different ways of limiting resource consumption for scaling VMs. Consult with your IaaS administrator to ensure additional VMs and related resources, like IPs and storage, will be available when scaling.

For Amazon Web Services, review the documentation regarding scaling instances. If you are using OpenStack, see the topic regarding managing projects and users. For vSphere, review the Configuring Ops Manager Director for VMware vSphere topic.


Databases

For database services deployed outside Cloud Foundry, plan to leverage your infrastructure's high availability features and to configure backup and restore where possible. For more information about scaling internal database components, see the Scaling Instances in Elastic Runtime topic.

Note: Data services may have single points of failure depending on their configuration.

Contact Pivotal Support if you need assistance.

How CF Maintains High Availability

This section explains how Pivotal Cloud Foundry (PCF) deployments include several layers of HA to keep applications running in the face of system failure. These layers include availability zones (AZs), application health management, process monitoring, and VM resurrection.

Availability Zones

PCF supports deploying application instances across multiple AZs. This level of high availability requires that you define AZs in your IaaS. PCF balances the applications you deploy across the AZs you define. If an AZ goes down, you still have application instances running in the others.

You can configure your deployment so that Diego cells are created across these AZs. Follow the configuration for your specific IaaS: AWS, GCP, OpenStack, or vSphere.

Health Management for App Instances

If you lose application instances for any reason, such as a bug in the app or an AZ going down, PCF starts new instances to restore capacity. Under Diego architecture, the nsync, BBS, and Cell Rep components track the number of instances of each application that are running across all of the Diego cells. When these components detect a discrepancy between the actual state of the app instances in the cloud and the desired state as known by the Cloud Controller, they advise the Cloud Controller of the difference, and the Cloud Controller initiates the deployment of new application instances.
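The reconciliation step described above can be sketched as a simple diff between desired and actual state. The function and data below are hypothetical illustrations, not the actual Diego implementation:

```python
# Illustrative sketch of desired-vs-actual reconciliation. The real
# Diego components (nsync, BBS, Cell Rep) implement this with far more
# machinery; names and data here are hypothetical.

def instances_to_start(desired: dict[str, int],
                       actual: dict[str, int]) -> dict[str, int]:
    """Return, per app, how many new instances must be started to
    bring the actual state up to the desired state."""
    return {app: want - actual.get(app, 0)
            for app, want in desired.items()
            if actual.get(app, 0) < want}

# "billing" lost an instance; "web" is already at its desired count.
print(instances_to_start({"web": 3, "billing": 2},
                         {"web": 3, "billing": 1}))
# {'billing': 1}
```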

Process Monitoring

PCF uses monit, a process monitor, to watch the processes on the component VMs that work together to keep your applications running, such as nsync, BBS, and Cell Rep. If monit detects a failure, it restarts the process and notifies the BOSH agent on the VM. The BOSH agent notifies the BOSH Health Monitor, which triggers responders through plugins such as email notifications or paging.

Resurrection for VMs

BOSH detects if a VM is present by listening for heartbeat messages that are sent from the BOSH agent every 60 seconds. The BOSH Health Monitor listens for those heartbeats. When the Health Monitor finds that a VM is not responding, it passes an alert to the Resurrector component. If the Resurrector is enabled, it sends the IaaS a request to create a new VM instance to replace the one that failed.
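The detection logic can be sketched as follows. The threshold of three missed heartbeats is an illustrative assumption, not a documented BOSH setting; only the 60-second heartbeat interval comes from the text above:

```python
# Sketch of heartbeat-based failure detection: flag any VM whose agent
# has not sent a heartbeat within the timeout. The timeout of three
# missed beats is an illustrative assumption.

HEARTBEAT_INTERVAL = 60            # seconds between agent heartbeats (per docs)
TIMEOUT = 3 * HEARTBEAT_INTERVAL   # assumed: three missed beats marks a VM down

def unresponsive_vms(last_heartbeat: dict[str, float],
                     now: float) -> list[str]:
    """Return the VMs whose last heartbeat is older than the timeout."""
    return [vm for vm, ts in last_heartbeat.items() if now - ts > TIMEOUT]

beats = {"router/0": 950.0, "diego_cell/1": 600.0}
print(unresponsive_vms(beats, now=1000.0))  # ['diego_cell/1']
```

A VM flagged here is what the Health Monitor reports to the Resurrector, which then asks the IaaS for a replacement VM.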

To enable the Resurrector, see the following pages for your particular IaaS: AWS, Azure, GCP, OpenStack, or vSphere.
