Disaster Recovery in Pivotal Cloud Foundry

This document provides an overview of the options and considerations for disaster recovery in Pivotal Cloud Foundry (PCF).

Operators have a range of approaches for ensuring they can recover Pivotal Cloud Foundry, apps, and data in case of a disaster. The approaches fall into the following two categories:

  • Backing up and restoring with BOSH Backup and Restore (BBR)
  • Recreating the deployment through automation

Back Up and Restore Using BOSH Backup and Restore (BBR)

What is BBR?

BOSH Backup and Restore (BBR) is a command-line tool for orchestrating the backup and restore of BOSH deployments and BOSH Directors. BBR triggers the backup or restore process on the deployment or Director, and transfers the backup artifact to and from the deployment or Director.
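
For illustration, the sketch below shows what a BBR invocation against a deployment can look like when run from a jumpbox. The Director address, credentials, CA certificate path, and deployment name are placeholders, and exact flags can vary between BBR versions.

```
# Minimal sketch of a BBR deployment backup, run from a jumpbox.
# All values are placeholders; exact flags can differ between BBR versions.
bbr deployment \
  --target 10.0.0.5 \
  --username "$BOSH_CLIENT" \
  --password "$BOSH_CLIENT_SECRET" \
  --ca-cert root_ca.pem \
  --deployment cf-0123456789abcdef0123 \
  backup
```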

Use BOSH Backup and Restore to reliably create backups of core PCF components and their data. These core components include CredHub, UAA, BOSH Director, and PAS.

Each component includes its own backup scripts. This decentralized structure helps keep scripts synchronized with the components. At the same time, locking features ensure data integrity and consistent, distributed backups across your deployment.

For more information about the BBR framework, see BOSH Backup and Restore in the open source Cloud Foundry documentation.

Backing up PCF

Backing up PCF requires backing up the following components:

  • Ops Manager settings
  • BOSH Director, including CredHub and UAA
  • Pivotal Application Service
  • Data services

For more information, see Backing up Pivotal Cloud Foundry with BBR. With these backup artifacts, operators can recreate PCF exactly as it was when the backup was taken.
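
As an illustration of the overall order (Ops Manager settings first, then the BOSH Director, then PAS), a hedged sketch of the sequence is shown below. All addresses, credentials, key paths, and the deployment name are placeholders, and the data services are backed up separately with their own tooling.

```
# Hedged sketch of a full PCF backup sequence; all values are placeholders
# and data-service backups are omitted.

# 1. Export Ops Manager settings with the om CLI.
om --target https://opsman.example.com --username admin --password "$OM_PASSWORD" \
  export-installation --output-file installation.zip

# 2. Back up the BOSH Director (includes CredHub and UAA data).
bbr director --host 10.0.0.5 --username bbr --private-key-path bbr_ssh_key.pem backup

# 3. Back up the PAS deployment.
bbr deployment --target 10.0.0.5 --username "$BOSH_CLIENT" --password "$BOSH_CLIENT_SECRET" \
  --ca-cert root_ca.pem --deployment cf-0123456789abcdef0123 backup
```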

Restoring PCF

The restore process involves creating a new PCF deployment starting with the Ops Manager VM. For more information, see Restoring Pivotal Cloud Foundry from Backup with BBR.

The time required to restore the data is proportional to the size of the data because the restore process includes copying data. For example, restoring a 1 TB blobstore takes roughly one thousand times as long as restoring a 1 GB blobstore.
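
Continuing the sketch above, the deployment-level restore points BBR at a previously downloaded artifact directory. The artifact path and deployment name are placeholders; the full procedure, which starts by recreating the Ops Manager VM and BOSH Director, is in the linked documentation.

```
# Hedged sketch: restore a PAS deployment from a backup artifact directory.
# The Ops Manager VM and BOSH Director must already have been recreated.
bbr deployment --target 10.0.0.5 --username "$BOSH_CLIENT" --password "$BOSH_CLIENT_SECRET" \
  --ca-cert root_ca.pem --deployment cf-0123456789abcdef0123 \
  restore --artifact-path ./cf-0123456789abcdef0123_20190101T000000Z
```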

Benefits

Unlike other backup solutions, using BBR to back up PCF enables the following:

  • Completeness: BBR supports backing up BOSH, including releases, CredHub, UAA, and service instances created with an on-demand service broker. With PCF v1.12, Ops Manager export no longer includes releases.
  • Consistency: BBR provides referential integrity between the database and the blobstore because a lock is held while both the database and blobstore are backed up.
  • Correctness: Using the BBR restore flow addresses container-to-container (C2C) networking and routing issues that can occur during restore.

API Downtime During Backups

Apps are not affected during backups, but certain APIs are unavailable. The downtime occurs only while the backup is being taken, not while the backup is being copied to the jumpbox.

In a consistent backup, the blobs in the blobstore match the blobs in the Cloud Controller Database. To take a consistent backup, changes to the data are prevented during the backup. This means that the CF API, Routing API, Usage Service, Autoscaler, Notification Service, Network Policy Server, and CredHub are unavailable while the backup is being taken. UAA is in read-only mode during the backup.

Backup Timings

The first three phases of the backup are lock, backup, and unlock. During this time, the API is unavailable. The drain and checksum phase starts once the backup scripts finish. BBR downloads the backup artifacts from the instances to the BBR VM, and performs a checksum to ensure the artifacts are not corrupt. The size of the blobstore significantly influences backup time.

The table below gives an indication of the downtime that you can expect. Actual downtime varies based on hardware and PCF configuration. These example timings were recorded with Pivotal Application Service (PAS) deployed on Google Cloud Platform (GCP), with all components scaled to one instance and only one app pushed.

| API State | Backup Phase | Duration for External Versioned S3-Compatible Blobstore | Duration for External Unversioned S3-Compatible Blobstore | Duration for Internal Blobstore |
|---|---|---|---|---|
| API unavailable | lock | 15 seconds | 15 seconds | 15 seconds |
| API unavailable | backup | < 30 seconds | Proportional to blobstore size | 10 seconds |
| API unavailable | unlock | 3 minutes | 3 minutes | 3 minutes |
| API available | drain and checksum | < 10 seconds | < 10 seconds | Proportional to blobstore size |

Blobstore Backup and Restore

Blobstores can be very large. To minimize downtime, BBR copies only blob metadata during the backup phase. For example, in the case of internal blobstores (WebDAV/NFS), BBR takes a list of hard links that point to the blobs. Once the API becomes available again, BBR makes copies of the blobs.
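
As a conceptual analogy only, not BBR's actual scripts, hard links make this cheap because creating a link is nearly instantaneous regardless of blob size, so the locked window stays short and the real copy happens afterwards. The paths below are placeholders.

```
# Conceptual analogy only (not BBR's actual scripts), using GNU cp:
# snapshot a blob directory with hard links while writes are locked,
# then copy the linked blobs after the API is unlocked.
mkdir -p /var/vcap/store/backup-snapshot
cp -al /var/vcap/store/shared/blobs/. /var/vcap/store/backup-snapshot/
rsync -a /var/vcap/store/backup-snapshot/ /mnt/backups/blobstore/
```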

Unsupported Products

  • Data services. The Pivotal data services listed below do not support BBR. Operators of these services should use the automatic backups feature of each tile, available within Ops Manager.

    • MySQL for PCF
    • Pivotal Cloud Cache for PCF
    • RabbitMQ for PCF
    • Redis for PCF
  • External blobstores and databases. BBR support for backing up and restoring external databases and blobstores varies across PCF versions. For more information, see Supported Components and External Storage Support Across PCF Versions in Backing up Pivotal Cloud Foundry with BBR.

Best Practices

Frequency of Backups

Pivotal recommends that you take backups in proportion to the rate of change of the data in PCF to minimize the number of changes lost if a restore is required. We suggest starting with backing up every 24 hours. If app developers make frequent changes, you should increase the frequency of backups.
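
As a starting point, a 24-hour schedule can be as simple as a cron entry on the jumpbox that calls a wrapper script around bbr. The script and log paths below are hypothetical.

```
# Hypothetical cron entry: run a BBR backup wrapper every day at 02:00.
# Increase the frequency if the data in PCF changes often.
0 2 * * * /usr/local/bin/run-bbr-backup.sh >> /var/log/bbr-backup.log 2>&1
```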

Retention of Backup Artifacts

Operators should retain backup artifacts based on how far back in time they need to be able to restore. For example, if backups are taken every 24 hours and PCF must be restorable to any point within the previous three days, retain three sets of backup artifacts.

Artifacts should be stored in two data centers other than the PCF data center. When deciding the restore timeframe, you should take other factors such as compliance and auditability into account.
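
Such a retention policy can be enforced with a small pruning step after each successful backup. The sketch below assumes backups are stored in date-stamped directories under a single path and keeps the three most recent sets; the directory layout and paths are hypothetical.

```
# Hypothetical retention sketch: keep the three most recent date-stamped
# backup directories and delete the rest.
KEEP=3
cd /mnt/backups/pcf || exit 1
ls -1d */ | sort -r | tail -n +$((KEEP + 1)) | while read -r old; do
  rm -rf -- "$old"
done
```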

Security

Pivotal strongly recommends that you encrypt artifacts and store them securely.
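
One way to do this, offered as a sketch rather than a prescription, is to compress and encrypt each artifact with a tool such as GnuPG before it leaves the jumpbox. The recipient key and file names below are placeholders.

```
# Sketch: compress and encrypt a backup artifact before it leaves the jumpbox.
tar -czf cf-backup.tar.gz ./cf-0123456789abcdef0123_20190101T000000Z
gpg --encrypt --recipient backup-key@example.com --output cf-backup.tar.gz.gpg cf-backup.tar.gz
shred -u cf-backup.tar.gz   # remove the unencrypted archive
```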

Disaster Recovery by Recreating the Deployment

An alternative strategy for recovering PCF after a disaster is to have automation in place so that all the data can be recreated. This requires that every modification to PCF settings and state be automated, typically through use of a pipeline.

Recovery steps include creating a new PCF, recreating orgs, spaces, users, services, service bindings and other state, and re-pushing apps.
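
In practice this automation usually drives the cf CLI from version-controlled configuration. A heavily simplified sketch, in which all org, space, service, and app names are placeholders, looks like the following.

```
# Simplified sketch of recreating state with the cf CLI; all names are
# placeholders, and a real pipeline would run these commands idempotently.
cf create-org my-org
cf create-space my-space -o my-org
cf target -o my-org -s my-space
cf create-service p.mysql db-small my-db   # service offering and plan are examples
cf push my-app -p ./my-app                 # repush the app from source or artifact
cf bind-service my-app my-db
cf restage my-app
```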

For more information about this approach, see the following Cloud Foundry Summit presentation: Multi-DC Cloud Foundry: What, Why and How?.

Disaster Recovery for Different Topologies

Active-Active

To prevent app downtime, some Pivotal customers run active-active, where they run two or more identical PCF deployments in different data centers. If one PCF deployment becomes unavailable, traffic is seamlessly routed to the other deployment. To achieve identical deployments, all operations to PCF are automated so they can be applied to both PCF deployments in parallel.

Because all operations have been automated, the automation approach to disaster recovery is a viable option for active-active. Disaster recovery requires recreating PCF, then running all the automation to recreate state.

This option requires discipline to automate all changes to PCF. Some of the operations that need to be automated are the following (a sketch of a few of them follows the list):

  • App push, restage, scale
  • Org, space, and user create, read, update, and delete (CRUD)
  • Service instance CRUD
  • Service bindings CRUD
  • Routes CRUD
  • Security groups CRUD
  • Quota CRUD
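
For illustration, the sketch below expresses a few of the operations above as cf CLI calls. All names and values are placeholders, and exact flags vary between cf CLI versions.

```
# Illustrative placeholders only: quota, security group, and route automation
# expressed as cf CLI calls that a pipeline would run against both deployments.
cf create-quota team-quota -m 10G -r 100 -s 20
cf set-quota my-org team-quota
cf create-security-group office-egress ./office-egress.json
cf bind-security-group office-egress my-org
cf map-route my-app example.com --hostname my-app
```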

Human-initiated changes always make their way into the system. These changes can include quotas being raised, new settings being enabled, and incident responses. For this reason, Pivotal recommends taking backups even when using an automated disaster recovery strategy.

Using BBR Backup and Restore versus Recreating a Failed PCF Deployment in Active-Active

| Disaster Recovery | Restore the PCF Data | Recreate the PCF Data |
|---|---|---|
| Preconditions | IaaS prepared for PCF install | IaaS prepared for PCF install |
| Steps | 1. Recreate PCF. 2. Restore. 3. Apply changes to make the restored PCF match the other active PCF. | 1. Recreate PCF. 2. Trigger automation to recreate orgs, spaces, and so on. 3. Notify app developers to repush apps and recreate service instances and bindings. |
| RTO (Recovery Time Objective): Platform | Time to recreate PCF | Time to recreate PCF |
| RTO (Recovery Time Objective): Apps | Time to restore | Time until orgs, spaces, and so on have been recreated, plus time for apps to be repushed |
| RPO (Recovery Point Objective): Platform | Time of the last backup | Current time |
| RPO (Recovery Point Objective): Apps | Time of the last backup | Current time |

Active-Passive

Instead of having a true active-active deployment across all layers, some Pivotal customers prefer to install a PCF or PAS deployment on a backup site. The backup site resides on-premises, in a co-location facility, or in the public cloud. The backup site includes an operational deployment, with only the most critical apps ready to accept traffic should a failure occur in the primary data center. Disaster recovery in this scenario involves the following:

  1. Switching traffic to the passive PCF, making it active.
  2. Recovering the formerly active PCF. Operators can choose to do this through automation, if that option is available, or by using BBR and the restore process.

The RTO and RPO for recreating the active PCF are the same as outlined in the table above.

Reducing RTO

Both the restore and recreate-data disaster recovery options require standing up a new PCF, which can take hours. If you require a shorter RTO, several options are available that involve pre-provisioned standby infrastructure and, in some cases, a pre-installed PCF:

Active-cold

A public cloud environment is kept ready for PCF installation, but no PCF is installed. This saves both IaaS costs and PCF instance costs. For on-premises installations, this requires hardware on standby, ready to install on, which may not be a realistic option.

Active-warm

PCF is installed on standby hardware and kept up to date, with VMs scaled down to zero (spin them up for each platform update), no apps installed, and no orgs or spaces defined.

Active-inflate platform

A bare-minimum PCF install, either with no applications or with a small number of each app in a stopped state. On recovery, push a small number of apps or start the existing apps, while simultaneously triggering automation to scale the platform to the primary site's size, or to a smaller size if losing a large percentage of capacity is acceptable. This mode allows you to start sending some traffic immediately, while not paying for a full non-primary platform. This method requires data to be seeded, but it is usually acceptable to complete the data sync while the platform scales up.
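
On failover, the app-side "inflate" step can be as simple as starting and scaling the pre-seeded apps while separate automation grows the platform. In the hedged cf CLI sketch below, the app name and instance count are placeholders.

```
# Hedged sketch of the app-side "inflate" on failover; the platform itself
# is scaled up by separate automation.
cf start my-app            # stopped copies were pre-pushed to the standby PCF
cf scale my-app -i 10      # grow toward the primary site's instance count
```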

Active-inflate apps

The non-primary deployment is scaled to the primary site's size, or to a smaller size if losing a large percentage of capacity is acceptable, but with only a small number of Diego cells (VMs). On failover, scale the Diego cells up to the primary site's counts. This mode allows you to start sending most traffic immediately, while not paying for all the app instances (AIs) of a fully fledged deployment. This method requires data to be available very quickly after a failure; it does not require real-time sync, but near-real-time sync.

There is a tradeoff between cost and RTO: the less the replacement PCF needs to be deployed and scaled, the faster the restore.

Automating Backups

BBR generates the backup artifacts required for PCF, but does not handle scheduling, artifact management, or encryption. The BBR team has created a starter Concourse pipeline to automate backups with BBR.

Also, Stark & Wayne’s Shield can be used as a front-end management tool through its BBR plugin.

Validating Backups

To ensure that backup artifacts are valid, the BBR tool creates checksums of the generated backup artifacts, and ensures that the checksums match the artifacts on the jumpbox.

However, the only way to be sure that a backup artifact can be used to successfully recreate PCF is to test it in the restore process. This is a cumbersome and potentially destructive process, so it should be done with care. For instructions, see Step 11: (Optional) Validate Your Backup in Backing Up Pivotal Cloud Foundry with BBR.
