Disaster Recovery in Pivotal Platform

This topic provides an overview of the options and considerations for disaster recovery in Pivotal Platform.

Operators have a range of approaches for ensuring they can recover Pivotal Platform, apps, and data in case of a disaster. The approaches fall into two categories: backing up and restoring with BOSH Backup and Restore (BBR), and re-creating the deployment through automation.

Back Up and Restore Using BOSH Backup and Restore (BBR)

What is BBR?

BOSH Backup and Restore (BBR) is a CLI for orchestrating backing up and restoring BOSH deployments and BOSH Directors. BBR triggers the backup or restore process on the deployment or Director, and transfers the backup artifact to and from the deployment or Director.

Use BOSH Backup and Restore to reliably create backups of core Pivotal Platform components and their data. These core components include CredHub, UAA, BOSH Director, and PAS.

Each component includes its own backup scripts. This decentralized structure helps keep scripts synchronized with the components. At the same time, locking features ensure data integrity and consistent, distributed backups across your deployment.

For more information about the BBR framework, see BOSH Backup and Restore in the open source Cloud Foundry documentation.
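
To make the decentralized script model concrete, the following sketch lists which jobs on a BOSH-deployed VM contribute BBR scripts. It assumes the conventional script location described in the open source BBR framework documentation (/var/vcap/jobs/&lt;job&gt;/bin/bbr/); adjust the path if your release places scripts elsewhere.

```python
# A minimal sketch: discover which BOSH jobs on this VM ship BBR scripts.
# Assumes the script layout described in the BBR framework documentation,
# i.e. /var/vcap/jobs/<job>/bin/bbr/{backup,restore,pre-backup-lock,...}.
from pathlib import Path

JOBS_DIR = Path("/var/vcap/jobs")   # conventional BOSH job directory


def bbr_scripts_by_job(jobs_dir: Path = JOBS_DIR) -> dict:
    """Map each job name to the BBR scripts it provides, if any."""
    result = {}
    if not jobs_dir.exists():
        return result
    for job in sorted(jobs_dir.iterdir()):
        script_dir = job / "bin" / "bbr"
        if script_dir.is_dir():
            result[job.name] = sorted(p.name for p in script_dir.iterdir())
    return result


if __name__ == "__main__":
    for job, scripts in bbr_scripts_by_job().items():
        print(f"{job}: {', '.join(scripts)}")
```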

Backing Up Pivotal Platform

Backing up Pivotal Platform requires backing up the following components:

  • Pivotal Operations Manager settings
  • BOSH Director, including CredHub and UAA
  • Pivotal Application Service
  • Data services

For more information, see Backing Up Pivotal Platform with BBR. With these backup artifacts, operators can re-create Pivotal Platform exactly as it was when the backup was taken.
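
The order of these backups can be scripted. The sketch below is a minimal outline only, assuming the om and bbr CLIs are installed on a jumpbox and that credentials are supplied through environment variables; the hosts, deployment name, and key paths are illustrative placeholders, and the exact flags should be verified against the documentation for your om and BBR versions.

```python
# A minimal sketch of the overall backup order, run from a jumpbox.
# Credentials are assumed to come from the environment (OM_TARGET,
# OM_USERNAME, OM_PASSWORD for om; BOSH_CLIENT_SECRET for bbr).
import subprocess

STEPS = [
    # 1. Ops Manager settings (installation export).
    ["om", "export-installation", "--output-file", "installation.zip"],
    # 2. BOSH Director, including CredHub and UAA.
    ["bbr", "director", "--host", "10.0.0.5", "--username", "bbr",
     "--private-key-path", "bbr_key.pem", "backup"],
    # 3. Pivotal Application Service (deployment name is a placeholder).
    ["bbr", "deployment", "--target", "10.0.0.5", "--username", "bbr_client",
     "--deployment", "cf-abc123", "--ca-cert", "bosh_ca.pem", "backup"],
    # 4. Data services are backed up with each tile's automatic backups feature.
]

for cmd in STEPS:
    subprocess.run(cmd, check=True)   # stop the run if any step fails
```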

Restoring Pivotal Platform

The restore process involves creating a new Pivotal Platform deployment starting with the Ops Manager VM. For more information, see Restoring Pivotal Platform from Backup with BBR.

The time required to restore the data is proportionate to the size of the data because the restore process includes copying data. For example, restoring a 1 TB blobstore takes 1,000 times as long as restoring a 1 GB blobstore.
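
As a rough planning aid, this proportionality can be turned into an estimate. The sketch below assumes an illustrative copy throughput of 50 MB/s; the real rate depends on your IaaS, disk, and network.

```python
# A rough restore-time estimate from the proportionality described above.
# The throughput figure is an illustrative assumption, not a measured value.
def estimated_restore_hours(blobstore_gb: float, copy_mb_per_s: float = 50.0) -> float:
    """Time to copy the blobstore back, ignoring fixed per-restore overhead."""
    seconds = (blobstore_gb * 1024) / copy_mb_per_s
    return seconds / 3600


print(f"1 GB blobstore: {estimated_restore_hours(1):.2f} hours")
print(f"1 TB blobstore: {estimated_restore_hours(1024):.2f} hours")  # ~1,000x the 1 GB case
```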

Benefits

Unlike other backup solutions, using BBR to back up Pivotal Platform enables the following:

  • Completeness: BBR supports backing up BOSH, including releases, CredHub, UAA, and service instances created with an on-demand service broker. As of Pivotal Platform v1.12, Ops Manager export no longer includes releases.

  • Consistency: BBR provides referential integrity between the database and the blobstore because a lock is held while both the database and blobstore are backed up.

  • Correctness: Using the BBR restore flow addresses C2C and routing issues that can occur during restore.

API Downtime During Backups

Apps are not affected during backups, but certain APIs are unavailable. The downtime occurs only while the backup is being taken, not while the backup is being copied to the jumpbox.

In a consistent backup, the blobs in the blobstore match the blobs in the Cloud Controller database. To take a consistent backup, changes to the data are prevented during the backup. This means that the CF API, Routing API, Usage service, Autoscaler, Notification Service, Network Policy Server, and CredHub are unavailable while the backup is being taken. UAA is in read-only mode during the backup.

Backup Timings

The first three phases of the backup are lock, backup, and unlock. During this time, the API is unavailable. The drain and checksum phase starts after the backup scripts finish. BBR downloads the backup artifacts from the instances to the BBR VM, and performs a checksum to ensure the artifacts are not corrupt. The size of the blobstore significantly influences backup time.

The table below gives an indication of the downtime that you can expect. Actual downtime varies based on hardware and Pivotal Platform configuration. These example timings were recorded with Pivotal Application Service (PAS) deployed on Google Cloud Platform (GCP), with all components scaled to one instance and only one app pushed.

| API State | Backup Phase | Duration (External Versioned S3-Compatible Blobstore) | Duration (External Unversioned S3-Compatible Blobstore) | Duration (Internal Blobstore) |
| --- | --- | --- | --- | --- |
| API unavailable | lock | 15 seconds | 15 seconds | 15 seconds |
| API unavailable | backup | Less than 30 seconds | Proportional to blobstore size | 10 seconds |
| API unavailable | unlock | 3 minutes | 3 minutes | 3 minutes |
| API available | drain and checksum | Less than 10 seconds | Less than 10 seconds | Proportional to blobstore size |

Blobstore Backup and Restore

Blobstores can be very large. To minimize downtime, BBR captures only blob metadata while the API is locked. For example, for internal blobstores such as WebDAV and NFS, BBR takes a list of hard links that point to the blobs. After the API becomes available again, BBR makes copies of the blobs themselves.
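
The following toy sketch illustrates why hard links keep the locked window short for an internal blobstore: creating the links touches only metadata, and the expensive byte copy happens afterwards. It illustrates the technique only and is not BBR's actual implementation.

```python
# A toy illustration of the hard-link technique used for internal blobstores:
# the "snapshot" is cheap (metadata only), so the locked window stays short;
# the real byte copy happens afterwards, once the API is available again.
import os
import shutil
from pathlib import Path


def snapshot_with_hardlinks(blob_dir: Path, snapshot_dir: Path) -> None:
    """Fast step, done while the lock is held: create hard links only.

    snapshot_dir must be on the same filesystem as blob_dir.
    """
    for blob in blob_dir.rglob("*"):
        if blob.is_file():
            target = snapshot_dir / blob.relative_to(blob_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            os.link(blob, target)        # metadata only; no bytes are copied


def copy_snapshot(snapshot_dir: Path, backup_dir: Path) -> None:
    """Slow step, done after the lock is released: copy the actual bytes."""
    shutil.copytree(snapshot_dir, backup_dir)
```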

Unsupported Products

  • Data services. The Pivotal data services listed below do not support BBR. Operators of these services should use the automatic backups feature of each tile, available within Ops Manager.

    • MySQL for Pivotal Platform
    • Pivotal Cloud Cache for Pivotal Platform
    • RabbitMQ for Pivotal Platform
    • Redis for Pivotal Platform
  • External blobstores and databases. BBR support for backing up and restoring external databases and blobstores varies across Pivotal Platform versions. For more information, see Supported Components and External Storage Support Across Pivotal Platform Versions in Backing Up Pivotal Platform with BBR.

Best Practices

Frequency of Backups

Pivotal recommends that you take backups in proportion to the rate of change of the data in Pivotal Platform to minimize the number of changes lost if a restore is required. Pivotal recommends starting with backing up every 24 hours. If app developers make frequent changes, you should increase the frequency of backups.
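
As a trivial illustration of the every-24-hours starting point, the sketch below decides whether a new backup is due by looking at timestamped artifact directories. The directory naming convention is an assumption of this script, not a BBR convention; most deployments schedule backups with Concourse or cron instead.

```python
# A trivial sketch of the "start with every 24 hours" guidance: trigger a new
# backup when the newest artifact directory is older than the chosen interval.
# Assumes directories named like 20240101T000000Z (a local convention).
from datetime import datetime, timedelta, timezone
from pathlib import Path

BACKUP_ROOT = Path("/backups/pas")   # illustrative location
INTERVAL = timedelta(hours=24)       # shorten if developers change data often


def backup_due(root: Path = BACKUP_ROOT, interval: timedelta = INTERVAL) -> bool:
    """True when the newest artifact directory is older than the interval."""
    if not root.exists():
        return True
    stamps = sorted(d.name for d in root.iterdir() if d.is_dir())
    if not stamps:
        return True
    newest = datetime.strptime(stamps[-1], "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - newest > interval
```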

Retention of Backup Artifacts

Operators should retain backup artifacts based on the timeframe they need to be able to restore to. For example, if backups are taken every 24 hours and Pivotal Platform must be able to be restored to three days prior, three sets of backup artifacts should be retained.

Artifacts should be stored in two data centers other than the Pivotal Platform data center. When deciding the restore timeframe, you should take other factors, such as compliance and auditability, into account.
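
The three-sets example above translates directly into a pruning rule. The sketch below assumes the same timestamped-directory convention as the scheduling sketch and keeps only the newest sets; the copies held in the other two data centers would be pruned by whatever tooling replicates this directory.

```python
# A sketch of the retention example: keep the newest `keep` backup sets
# (timestamped directories) and delete the rest.
import shutil
from pathlib import Path


def prune_backups(root: Path, keep: int = 3) -> None:
    """Delete all but the `keep` newest timestamped backup directories."""
    sets = sorted((d for d in root.iterdir() if d.is_dir()), key=lambda d: d.name)
    for old in sets[:-keep]:
        shutil.rmtree(old)
```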

Security

Pivotal recommends that you encrypt artifacts and store them securely.
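
One way to follow this recommendation is to encrypt each artifact archive on the jumpbox before it is shipped anywhere. The sketch below uses the Fernet recipe from the Python cryptography package, which is an arbitrary choice for illustration; the key must be stored in a separate secret store, never alongside the artifacts.

```python
# A minimal sketch of encrypting a backup archive before it leaves the
# jumpbox, using the `cryptography` package (pip install cryptography).
# This reads the whole file for brevity; streaming encryption is preferable
# for very large archives.
from pathlib import Path

from cryptography.fernet import Fernet


def encrypt_artifact(archive: Path, key: bytes) -> Path:
    """Write an encrypted copy of the archive next to the original."""
    encrypted = archive.with_suffix(archive.suffix + ".enc")
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    return encrypted


# key = Fernet.generate_key()   # generate once, store in a separate secret store
# encrypt_artifact(Path("cf-abc123_20240101T000000Z.tar"), key)
```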

Disaster Recovery by Re-Creating the Deployment

An alternative strategy for recovering Pivotal Platform after a disaster is to have automation in place so that all the data can be re-created. This requires that every modification to Pivotal Platform settings and state is automated, typically through the use of a pipeline.

Recovery steps include creating a new Pivotal Platform, re-creating orgs, spaces, users, services, service bindings, and other state, and re-pushing apps.

For more information about this approach, see the Cloud Foundry Summit presentation Multi-DC Cloud Foundry: What, Why and How? on YouTube.
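
To give a sense of what the automation has to cover, the sketch below drives a small part of the re-creation flow through standard cf CLI commands. The org, space, app, and service names are illustrative placeholders; a real pipeline would read them from a declarative source of truth rather than hard-coding them.

```python
# A minimal sketch of re-creating state through the cf CLI
# (create-org, create-space, create-service, push, bind-service, restage).
import subprocess


def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)


def recreate_space(org: str, space: str) -> None:
    run("cf", "create-org", org)
    run("cf", "create-space", space, "-o", org)
    run("cf", "target", "-o", org, "-s", space)


def redeploy_app(app: str, path: str, service: str, plan: str, instance: str) -> None:
    run("cf", "create-service", service, plan, instance)
    run("cf", "push", app, "-p", path)
    run("cf", "bind-service", app, instance)
    run("cf", "restage", app)


# Illustrative values only; a real pipeline would read these from config.
recreate_space("acme", "production")
redeploy_app("storefront", "./storefront", "p.mysql", "db-small", "storefront-db")
```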

Disaster Recovery for Different Topologies

Active-Active

To prevent app downtime, some Pivotal customers run active-active, where they run two or more identical Pivotal Platform deployments in different data centers. If one Pivotal Platform deployment becomes unavailable, traffic is seamlessly routed to the other deployment. To achieve identical deployments, all operations to Pivotal Platform are automated so they can be applied to both Pivotal Platform deployments in parallel.

Because all operations have been automated, the automation approach to disaster recovery is a viable option for active-active. Disaster recovery requires re-creating Pivotal Platform, then running all the automation to re-create state.

This option requires discipline to automate all changes to Pivotal Platform. Some of the operations that need to be automated are the following:

  • App push, restage, scale
  • Org, space, and user create, read, update, and delete (CRUD)
  • Service instance CRUD
  • Service bindings CRUD
  • Routes CRUD
  • Security groups CRUD
  • Quota CRUD

Human-initiated changes always make their way into the system. These changes can include quotas being raised, new settings being enabled, and incident responses. For this reason, Pivotal recommends taking backups even when using an automated disaster recovery strategy.

Using BBR Backup and Restore Versus Recreating a Failed Pivotal Platform Deployment in Active-Active

| | Restore the Pivotal Platform Data | Re-create the Pivotal Platform Data |
| --- | --- | --- |
| Preconditions | IaaS prepared for Pivotal Platform install | IaaS prepared for Pivotal Platform install |
| Steps | 1. Re-create Pivotal Platform. 2. Restore. 3. Apply changes to make the restored Pivotal Platform match the other active Pivotal Platform. | 1. Re-create Pivotal Platform. 2. Trigger automation to re-create orgs, spaces, and other state. 3. Notify app developers to re-push apps and re-create service instances and bindings. |
| RTO (Recovery Time Objective): Platform | Time to re-create Pivotal Platform | Time to re-create Pivotal Platform |
| RTO (Recovery Time Objective): Apps | Time to restore | Time until orgs, spaces, and other state have been re-created and apps have been re-pushed |
| RPO (Recovery Point Objective): Platform | Time of the last backup | Current time |
| RPO (Recovery Point Objective): Apps | Time of the last backup | Current time |

Active-Passive

Instead of having a true active-active deployment across all layers, some Pivotal customers prefer to install a Pivotal Platform or PAS deployment on a backup site. The backup site resides on-premises, in a co-location facility, or in the public cloud. The backup site includes an operational deployment, with only the most critical apps ready to accept traffic should a failure occur in the primary data center. Disaster recovery in this scenario involves the following:

  1. Switching traffic to the passive Pivotal Platform, making it active.

  2. Recovering the formerly active Pivotal Platform. Operators can choose to do this through automation, if that option is available, or by using BBR and the restore process.

The RTO and RPO for re-creating the active Pivotal Platform are the same as outlined in the table above.

Reducing RTO

Both the restore and re-create data disaster recovery options require standing up a new Pivotal Platform, which can take hours. If you require a shorter RTO, several options are available that involve pre-created standby infrastructure and, in some cases, a pre-installed Pivotal Platform:

Active-cold

Public cloud environment ready for Pivotal Platform installation, with no Pivotal Platform installed. This saves both IaaS costs and Pivotal Platform instance costs. For on-premises installations, this requires standby hardware ready to install on, which might not be a realistic option.

Active-warm

Pivotal Platform installed on standby hardware and kept up to date, VMs scaled down to zero (spin them up each time there is a platform update), no apps installed, no orgs or spaces defined.

Active-inflate platform

Bare minimum Pivotal Platform installation, either with no apps or with a small number of each app in a stopped state. On recovery, push a small number of apps or start the current apps, while simultaneously triggering automation to scale the platform to the primary node size, or to a smaller size if large percentages of loss are acceptable. This mode allows you to start sending some traffic immediately, while not paying for a full non-primary platform. This method requires data to be seeded in advance, but it is usually acceptable to complete the data sync while the platform scales up.
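
On fail-over in this mode, the app-side step can be scripted with standard cf CLI commands, as in the sketch below. The app names and instance counts are illustrative placeholders, and scaling the platform VMs themselves is handled separately through Ops Manager and BOSH.

```python
# A minimal fail-over sketch for the app side of an "inflate" recovery:
# start apps that were seeded in a stopped state, then scale them to their
# normal instance counts with `cf start` and `cf scale -i`.
import subprocess

# App names and target instance counts are illustrative placeholders.
CRITICAL_APPS = {"storefront": 6, "checkout": 4, "payments": 2}

for app, instances in CRITICAL_APPS.items():
    subprocess.run(["cf", "start", app], check=True)
    subprocess.run(["cf", "scale", app, "-i", str(instances)], check=True)
```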

Active-inflate apps

Non-primary deployment scaled to the primary node size, or to a smaller size if large percentages of loss are acceptable, but with a small number of Diego Cells (VMs). On fail-over, scale the Diego Cells up to the primary node counts. This mode allows you to start sending most traffic immediately, while not paying for all the app instances (AIs) of a fully fledged deployment. This method requires data to be available very quickly after a failure; it does not require real-time sync, but it does require near-real-time sync.

There is a trade-off between cost and RTO: the less the replacement Pivotal Platform needs to be deployed and scaled, the faster the restore.

Automating Backups

BBR generates the backup artifacts required for Pivotal Platform, but does not handle scheduling, artifact management, or encryption. The BBR team has created a starter Concourse pipeline to automate backups with BBR.

Alternatively, Stark & Wayne’s SHIELD can be used as a front-end management tool through its BBR plugin.

Validating Backups

To ensure that backup artifacts are valid, the BBR tool creates checksums of the generated backup artifacts, and ensures that the checksums match the artifacts on the jumpbox.

However, the only way to be sure that the backup artifact can be used to successfully re-create Pivotal Platform is to test it in the restore process. This is a cumbersome, dangerous process that should be done with care. For instructions, see Step 11: (Optional) Validate Your Backup in the Backing Up Pivotal Platform with BBR topic.
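
Beyond the checksums BBR performs on the jumpbox, a simple independent manifest can confirm that artifacts were not corrupted after they were copied or encrypted and shipped off-site. The sketch below records and verifies SHA-256 digests using a manifest format that is purely a local convention of this script, not BBR's own metadata format.

```python
# A sketch of an independent integrity check for backup artifacts: record
# sha256 digests at backup time, verify them again before attempting a restore.
import hashlib
import json
from pathlib import Path


def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(artifact_dir: Path) -> None:
    """Record a digest for every artifact file at backup time."""
    manifest = {p.name: sha256(p) for p in artifact_dir.iterdir() if p.is_file()}
    (artifact_dir / "checksums.json").write_text(json.dumps(manifest, indent=2))


def verify_manifest(artifact_dir: Path) -> bool:
    """Re-hash the artifacts before a restore and compare against the manifest."""
    manifest = json.loads((artifact_dir / "checksums.json").read_text())
    return all(sha256(artifact_dir / name) == digest
               for name, digest in manifest.items())
```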