Restoring PCF from Backup with BBR

This topic describes the procedure for restoring your critical backend Pivotal Cloud Foundry (PCF) components with BOSH Backup and Restore (BBR), a command-line tool for backing up and restoring BOSH deployments. To perform the procedures in this topic, you must have backed up PCF by following the steps in the Backing Up Pivotal Cloud Foundry with BBR topic.

To view the BBR release notes, see BOSH Backup and Restore Release Notes.

The procedures described in this topic prepare your environment for PCF, deploy Ops Manager, import your installation settings, and use BBR to restore your PCF components.

WARNING: Restoring PCF with BBR is a destructive operation. If the restore fails, the new environment may be left in an unusable state and require reprovisioning. Only perform the procedures in this topic for the purpose of disaster recovery, such as recreating PCF after a storage-area network (SAN) corruption.

WARNING: When validating your backup, the VMs and disks from the backed-up BOSH Director should not be visible to the new BOSH Director. For this reason, Pivotal recommends that you deploy the new BOSH Director to a different IaaS network and account than the VMs and disks of the backed-up BOSH Director.

WARNING: For PCF v2.0, BBR only supports backup and restore of environments with zero or one CredHub instance.

Note: BBR was introduced in PCF v1.11. You can only use BBR to back up PCF v1.11 and later. To restore earlier versions of PCF, perform the manual procedure documented for your specific PCF version.

Note: If the BOSH Director you are restoring had any deployments that were deployed manually rather than through an Ops Manager Tile, you must restore them manually at the end of the process. For more information, see (Optional) Step 14: Restore Non-Tile Deployments.

Note: If you are restoring in order to validate a backup, look for notes marked Validation throughout the topic.

Compatibility of Restore

This section describes the restrictions for a backup artifact to be restorable to another environment. This section is for guidance only, and Pivotal highly recommends that operators validate their backups by using the backup artifacts in a restore.

Consult the following restrictions for a backup artifact to be restorable:

  • CIDR ranges: BBR requires the IP address ranges to be the same in the restore environment as in the backup environment.
  • Topology: BBR requires the BOSH topology of a deployment to be the same in the restore environment as it was in the backup environment.
  • Naming of instance groups and jobs: For any deployment that implements the backup and restore scripts, the instance groups and jobs must have the same names.
  • Number of instance groups and jobs: For instance groups and jobs that have backup and restore scripts, there must be the same number of instances.
  • Limited validation: BBR puts the backed up data into the corresponding instance groups and jobs in the restored environment, but can’t validate the restore beyond that. For example, if the MySQL encryption key is different in the restore environment, the BBR restore might succeed although the restored MySQL database is unusable.
  • PCF version: BBR can restore to the same version of PCF that was backed up. BBR does not support restoring to other major, minor, or patch releases.

Note: A change in VM size or underlying hardware should not affect BBR’s ability to restore data, as long as there is adequate storage space to restore the data.
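
For example, one lightweight way to spot-check the topology restriction is to compare an instance listing from the restore environment against a listing saved from the backup environment. This is only a sketch; it assumes you saved the output of bosh instances from the backup environment as backup-instances.txt before that environment was lost:

$ bosh -e DIRECTOR-IP \
  --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
  -d DEPLOYMENT-NAME instances > restore-instances.txt
$ diff backup-instances.txt restore-instances.txt

Any difference in instance group names or instance counts indicates that the restore environment does not match the backup environment.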

(Optional) Step 1: Prepare Your Environment

In the event of a disaster, you may lose not only your VMs and disks, but also IaaS resources such as networks and load balancers.

If you need to recreate your IaaS resources, prepare your environment for PCF by following the instructions specific to your IaaS in Installing Pivotal Cloud Foundry.

Note: The instructions for installing PCF on Amazon Web Services (AWS) and OpenStack combine the procedures for preparing your environment and deploying Ops Manager into a single topic. The instructions for the other supported IaaSes split these procedures into two separate topics.

If you recreate your IaaS resources, you must also add those resources to Ops Manager by performing the procedures in the (Optional) Step 3: Configure Ops Manager for New Resources section.

Step 2: Deploy Ops Manager and Import Installation Settings

  1. Perform the procedures for your IaaS in Installing Pivotal Cloud Foundry to deploy Ops Manager.

  2. Import your installation settings. This can be done in two ways:

    1. Using the Ops Manager UI:

      1. Access your new Ops Manager by navigating to YOUR-OPS-MAN-FQDN in a browser.
      2. On the Welcome to Ops Manager page, click Import Existing Installation.

      3. In the import panel, perform the following tasks:

        • Enter your Decryption Passphrase.
        • Click Choose File and browse to the installation zip file that you exported in the Step 7: Export Installation Settings section of the Backing Up Pivotal Cloud Foundry with BBR topic.

      4. Click Import.

        Note: Some browsers do not provide feedback on the status of the import process and may appear to hang. The import process takes at least 10 minutes, and takes longer if the backed-up Ops Manager had many tiles installed.

      5. A Successfully imported installation message appears upon completion.

    2. Using the Ops Manager API:

      $ curl OPSMAN-URL/api/v1/installation_asset_collection \
          -X POST \
          -H "Authorization: Bearer UAA-ACCESS-TOKEN" \
          -F 'installation[file]=@installation.zip' \
          -F 'passphrase=DECRYPTION-PASSPHRASE'

      Where:

      • UAA-ACCESS-TOKEN is the UAA access token. For more information about how to retrieve this token, see Using the Ops Manager API.
      • DECRYPTION-PASSPHRASE is the decryption passphrase in use when you exported the installation settings from Ops Manager.
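
      For reference, one common way to obtain the UAA-ACCESS-TOKEN is with the UAA CLI (uaac). This is a sketch that assumes uaac is installed on your workstation and that you authenticate as the Ops Manager admin user; see the Using the Ops Manager API topic for the authoritative procedure:

      $ uaac target https://OPS-MAN-FQDN/uaa --skip-ssl-validation
      $ uaac token owner get opsman admin -s ''
      $ uaac context

      When prompted, enter the Ops Manager admin password. The access_token value shown by uaac context is the UAA-ACCESS-TOKEN.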

(Optional) Step 3: Configure Ops Manager for New Resources

If you recreated IaaS resources such as networks and load balancers by following the steps in the (Optional) Step 1: Prepare Your Environment section above, do the following to update Ops Manager with your new resources:

  1. Enable Ops Manager advanced mode by following the instructions in this Knowledge Base article.

  2. Navigate to the Ops Manager Installation Dashboard and click the Ops Manager Director tile.

  3. Click Create Networks and update the network names to reflect the network names for the new environment.

  4. If running on GCP, click Google Config and update the Project ID to reflect the new GCP project ID.

  5. Return to the Ops Manager Installation Dashboard and click the Elastic Runtime tile.

  6. Click Resource Config. If necessary for your IaaS, enter the name of your new load balancers in the Load Balancers column.

  7. If necessary, click Networking and update the load balancer SSL certificate and private key under Router SSL Termination Certificate and Private Key.

  8. If your environment has a new DNS address, update the old environment DNS entries to point to the new load balancer addresses. For more information, see the Step 4: Configure Networking section of the Using Your Own Load Balancer topic and follow the link to the instructions for your IaaS.

  9. If you are using Google Cloud Platform (GCP), navigate to the Google Config section of the Ops Manager Director tile and update the Default Deployment Tag to reflect the new environment.

  10. Disable Ops Manager advanced mode as recommended in the Knowledge Base article.

Step 4: Remove BOSH State File

  1. SSH into your Ops Manager VM. For more information, see the SSH into Ops Manager section of the Advanced Troubleshooting with the BOSH CLI topic.

  2. On the Ops Manager VM, delete the /var/tempest/workspaces/default/deployments/bosh-state.json file:

    $ sudo rm /var/tempest/workspaces/default/deployments/bosh-state.json
    

  3. Navigate to YOUR-OPS-MAN-FQDN in a browser and log into Ops Manager.

WARNING: Do not click Apply Changes at this point.

Step 5: Deploy Ops Manager Director

Perform the steps in the Applying Changes to Ops Manager Director topic to use the Ops Manager API to deploy only the Ops Manager Director.

Note: If your BOSH Director has an external hostname, change it in Ops Manager Director > Director Config > Director Hostname to ensure it does not conflict with the hostname of the backed-up Director.
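
For reference, the Applying Changes to Ops Manager Director topic drives this through the Ops Manager API installations endpoint. The following is a minimal sketch, assuming a valid UAA access token; consult that topic for the authoritative request format:

$ curl "https://OPS-MAN-FQDN/api/v0/installations" \
    -X POST \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN" \
    -d 'deploy_products=none'

Setting deploy_products to none applies changes to the Ops Manager Director only, without deploying any product tiles.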

Step 6: Transfer Artifacts to Jumpbox

In the After Taking the Backups instructions of the Step 9: Back Up Your Elastic Runtime Deployment section of the Backing Up Pivotal Cloud Foundry with BBR topic, you moved the TAR and metadata files of the backup artifacts off your jumpbox to your preferred storage space. Now you must transfer those files back to your jumpbox.

For instance, you could SCP the backup artifact to your jumpbox:

$ scp LOCAL-PATH-TO-BACKUP-ARTIFACT JUMPBOX-USER@JUMPBOX-ADDRESS:

Note: Pivotal recommends that you regularly update the BBR binary on your jumpbox to the latest version. See Transfer BBR Binary to Your Jumpbox in Setting Up Your Jumpbox for BBR for more information.
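
If you recorded checksums when you stored the backup artifacts, you can verify them after the transfer. This is an optional sketch; it assumes shasum is available on both machines, and REMOTE-PATH-TO-BACKUP-ARTIFACT is a placeholder for wherever you copied the artifact on the jumpbox:

$ shasum -a 256 LOCAL-PATH-TO-BACKUP-ARTIFACT
$ ssh JUMPBOX-USER@JUMPBOX-ADDRESS shasum -a 256 REMOTE-PATH-TO-BACKUP-ARTIFACT

The two checksums should match before you proceed with the restore.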

Step 7: Retrieve BOSH Director Address and Credentials

Perform the following steps to retrieve the IP address of your BOSH Director and the credentials for logging in from the Ops Manager Director tile:

  1. Install the BOSH v2+ CLI on a machine outside of your PCF deployment. You can use the jumpbox for this task.
  2. From the Installation Dashboard in Ops Manager, select Ops Manager Director > Status and record the IP address listed for the Director. You access the BOSH Director using this IP address.

  3. Click Credentials and record the Director credentials.
  4. From the command line, log into the BOSH Director using the IP address and credentials that you recorded:
    $ bosh -e DIRECTOR_IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE log-in
    Email (): director
    Password (): *******************
    Successfully authenticated with UAA
    Succeeded
    

Step 8: Restore the BOSH Director

  1. Navigate to the Ops Manager Installation Dashboard.

  2. Click the Ops Manager Director tile.

  3. Click the Credentials tab.

  4. Locate Bbr Ssh Credentials and click Link to Credential next to it.

    You can also retrieve the credentials using the Ops Manager API with a GET request to the following endpoint: /api/v0/deployed/director/credentials/bbr_ssh_credentials. For more information, see the Using the Ops Manager API topic.

  5. Copy the value for private_key_pem.

  6. SSH into your jumpbox.

  7. Run the following command to reformat the key and save it to a file named PRIVATE-KEY in the current directory, copying in the contents of your private key for YOUR-PRIVATE-KEY:

    $ printf -- "YOUR-PRIVATE-KEY" > PRIVATE-KEY
    

  8. Ensure the BOSH Director backup artifact is in the folder from which you will run BBR.

  9. Run the BBR restore command from your jumpbox to restore the BOSH Director:

    $ nohup bbr director \
      --private-key-path PRIVATE-KEY \
      --username bbr \
      --host HOST \
      restore \
        --artifact-path PATH-TO-DIRECTOR-BACKUP
    
    Use the optional --debug flag to enable debug logs. See the Logging section of the Backing Up Pivotal Cloud Foundry with BBR topic for more information.

    Replace the placeholder values as follows:

    • PATH-TO-DIRECTOR-BACKUP: This is the path to the Director backup you want to restore.
    • PRIVATE-KEY: This is the path to the private key file you created above.
    • HOST: This is the address of the BOSH Director. If the BOSH Director is public, this is a URL, such as https://my-bosh.xxx.cf-app.com. Otherwise, it is the BOSH-DIRECTOR-IP, which you retrieved in the Step 7: Retrieve BOSH Director Address and Credentials section.

Note: The BBR BOSH Director restore command can take at least 15 minutes to complete. Pivotal recommends that you run it independently of the SSH session, so that the process can continue running even if your connection to the jumpbox fails. The command above uses nohup but you could also run the command in a screen or tmux session.
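
For example, a minimal screen workflow looks like the following; the session name bbr-restore is arbitrary:

$ screen -S bbr-restore    # start a named session and run the bbr restore command inside it
$ screen -r bbr-restore    # reattach later, for example after a dropped SSH connection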

If the command completes successfully, continue to Step 9: Identify Your Deployment.

If the command fails, do the following:

  1. Run the BBR cleanup command:

    $ bbr director \
    --private-key-path PRIVATE-KEY \
    --username bbr \
    --host HOST \
    backup-cleanup
    

  2. Ensure all the parameters in the command are set.

  3. Ensure the BOSH Director credentials are valid.

  4. Ensure the specified deployment exists.

  5. Ensure the source deployment is compatible with the target deployment.

  6. Ensure that the jumpbox can reach the BOSH Director.
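
As a quick connectivity check for the last item, assuming nc and curl are installed on the jumpbox, you can confirm that the Director's SSH and API ports are reachable (the BOSH Director API conventionally listens on port 25555):

$ nc -vz HOST 22
$ curl -k https://HOST:25555/info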

Step 9: Identify Your Deployment

After logging in to your BOSH Director, run bosh deployments to identify the name of the BOSH deployment that contains PCF:

$ bosh -e DIRECTOR-IP --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE deployments

Name                     Release(s)
cf-example               push-apps-manager-release/661.1.24
                         cf-backup-and-restore/0.0.1
                         binary-buildpack/1.0.11
                         capi/1.28.0
                         cf-autoscaling/91
                         cf-mysql/35
                         ...

In the above example, the name of the BOSH deployment that contains PCF is cf-example. PATH-TO-BOSH-SERVER-CERTIFICATE is the path to the Certificate Authority (CA) certificate for the BOSH Director. For more information, see Ensure BOSH Director Certificate Availability.
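
If the Director's CA certificate is not already on the machine running the BOSH CLI, one way to retrieve it is through the Ops Manager API. This is a sketch; it assumes a valid UAA access token and that jq is installed:

$ curl "https://OPS-MAN-FQDN/api/v0/security/root_ca_certificate" \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN" \
    | jq -r '.root_ca_certificate_pem' > root_ca.pem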

Step 10: Remove Stale Cloud IDs for All Deployments

For every deployment in the BOSH Director, run the following command:

$ bosh -e BOSH-DIRECTOR-IP \
  --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
  -d DEPLOYMENT-NAME -n cck \
  --resolution delete_disk_reference \
  --resolution delete_vm_reference

This reconciles the BOSH Director’s internal state with the state in the IaaS. You can use the list of deployments returned in Step 9: Identify Your Deployment.
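
If you have many deployments, a simple loop saves repetition. This is only a sketch; it assumes the bash shell, the v2 BOSH CLI, and a CLI version that supports the --column flag and omits table headers when its output is captured:

for dep in $(bosh -e BOSH-DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      deployments --column=name); do
  bosh -e BOSH-DIRECTOR-IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
    -d "$dep" -n cck \
    --resolution delete_disk_reference \
    --resolution delete_vm_reference
done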

If the bosh cloud-check command does not successfully delete disk references and you see a message similar to the following, perform the additional procedures in the Remove Unused Disks section below.

Scanning 19 persistent disks: 19 OK, 0 missing ...

Step 11: Redeploy Elastic Runtime

  1. Perform the following steps to determine which stemcell is used by Elastic Runtime:

    1. Navigate to the Ops Manager Installation Dashboard.
    2. Click the Elastic Runtime tile.
    3. Click Stemcell and record the release number included in the displayed filename. For example, if the filename includes 3421.9, the stemcell release number is 3421.9.

      You can also retrieve the stemcell release using the BOSH CLI:

      $ bosh -e DIRECTOR-IP deployments
      Using environment '10.0.0.5' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

      Name                      Release(s)                           Stemcell(s)                                      Team(s)  Cloud Config
      cf-9cb6995b7d746cd77438   push-apps-manager-release/661.1.24   bosh-google-kvm-ubuntu-trusty-go_agent/3421.9    -        latest
      ...

  2. Download the stemcell from Pivotal Network.

  3. Run the following command to upload the stemcell used by Elastic Runtime:

    $ bosh -e BOSH-DIRECTOR-IP \
      -d DEPLOYMENT-NAME \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      upload-stemcell \
      --fix PATH-TO-STEMCELL
    

  4. If you have any other tiles installed, ensure you upload their stemcells if they are different from the Elastic Runtime stemcell. Upload stemcells to the BOSH Director with bosh upload-stemcell --fix PATH-TO-STEMCELL, as in the command above.

  5. From the Ops Manager Installation Dashboard, navigate to Elastic Runtime Resource Config.

  6. Ensure the number of instances for MySQL Server is set to 1.

    WARNING: Restore will fail if there is not exactly one MySQL Server instance deployed.

  7. Ensure that all errands needed by your system are set to run.

  8. Return to the Ops Manager Installation Dashboard and click Apply Changes to redeploy.

    Note: If your Elastic Runtime uses an external blobstore, ensure that the Elastic Runtime tile is configured to use a different blobstore before clicking Apply Changes. Otherwise it will attempt to connect to the blobstore that the existing Elastic Runtime uses.

    Note: Ensure your System Domain and Apps Domain under Elastic Runtime Domains are updated to refer to the validation environment.

Step 12: Restore Elastic Runtime

Note: If your apps must not run after a restore, run bosh stop on each diego_cell VM in the deployment.
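
A sketch of this with the v2 BOSH CLI, using the deployment identified earlier; running bosh stop against the instance group name stops every diego_cell instance:

$ bosh -e DIRECTOR-IP \
  --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
  -d DEPLOYMENT-NAME \
  stop diego_cell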

  1. If you use an external blobstore and copied it during the backup, restore the external blobstore with your IaaS-specific tools before running the Elastic Runtime restore.

  2. Run the BBR restore command from your jumpbox to restore Elastic Runtime:

    $ BOSH_CLIENT_SECRET=BOSH-PASSWORD \
      bbr deployment \
        --target BOSH-DIRECTOR-IP \
        --username BOSH-CLIENT \
        --deployment DEPLOYMENT-NAME \
        --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
        restore \
          --artifact-path PATH-TO-ELASTIC-RUNTIME-BACKUP
    

    Replace the placeholder values as follows:

    • BOSH-CLIENT, BOSH-PASSWORD: Use the BOSH UAA user provided in Pivotal Ops Manager > Credentials > Uaa Bbr Client Credentials.

      You can also retrieve the credentials using the Ops Manager API with a GET request to the following endpoint: /api/v0/deployed/director/credentials/uaa_bbr_client_credentials. For more information, see the Using the Ops Manager API topic.

    • BOSH-DIRECTOR-IP: You retrieved this value in the Step 7: Retrieve BOSH Director Address and Credentials section.
    • DEPLOYMENT-NAME: You retrieved this value in the Step 9: Identify Your Deployment section.
    • PATH-TO-BOSH-SERVER-CERTIFICATE: This is the path to the BOSH Director’s Certificate Authority (CA) certificate, if the certificate is not verifiable by the local machine’s certificate chain.
    • PATH-TO-ELASTIC-RUNTIME-BACKUP: This is the path to the Elastic Runtime backup you want to restore.

    Note: If you ran bosh stop on each diego_cell before running bbr restore, you can now run cf stop on all apps and then run bosh start on each diego_cell. After this, all apps will be deployed in a stopped state.

  3. Perform the following steps after restoring Elastic Runtime:

    1. Retrieve the MySQL admin credentials from CredHub using the Ops Manager API:
      1. Perform the procedures in the Using the Ops Manager API topic to authenticate and access the Ops Manager API.
      2. Use the GET /api/v0/deployed/products endpoint to retrieve a list of deployed products, replacing UAA-ACCESS-TOKEN with the access token recorded in the previous step:
        $ curl "https://OPS-MAN-FQDN/api/v0/deployed/products" \
            -X GET \
            -H "Authorization: Bearer UAA-ACCESS-TOKEN"
      3. In the response to the above request, locate the product with an installation_name starting with cf- and copy its guid.
      4. Run the following curl command, replacing PRODUCT-GUID with the value of guid from the previous step:
        $ curl "https://OPS-MAN-FQDN/api/v0/deployed/products/PRODUCT-GUID/variables?name=mysql-admin-credentials" \
            -X GET \
            -H "Authorization: Bearer UAA-ACCESS-TOKEN"
      5. Record the MySQL admin credentials from the response to the above request.
    2. List the MySQL Server instances in your deployment:
      $ bosh -e DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME \
      instances | grep mysql
    3. SSH into a MySQL Server instance:
      $ bosh -e DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME \
      ssh mysql/INSTANCE-GUID
    4. From the MySQL Server instance, run the following command:
      $ sudo /var/vcap/packages/mariadb/bin/mysql -u root -p
      When prompted, enter the MySQL admin password.

    5. At the mysql prompt, run the following command:
      mysql> use silk; drop table if exists subnets; drop table if exists gorp_migrations;
    6. Exit mysql:
      mysql> exit
    7. Exit the MySQL Server SSH session:
      $ exit
    8. List the Diego Database instances in your deployment:
      $ bosh -e DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME \
      instances | grep diego_database
    9. SSH into each Diego Database instance and run the following command:
      $ sudo monit restart silk-controller

    Restored apps will begin to start. The amount of time it takes for all apps to start depends on the number of app instances, the resources available to the underlying infrastructure, and the value of the Max Inflight Container Starts field in the Elastic Runtime tile.

  4. If desired, scale the MySQL Server job back up to its previous number of instances by navigating to the Resource Config section of the Elastic Runtime tile. After scaling the job, return to the Ops Manager Installation Dashboard and click Apply Changes to deploy.
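
To watch the restored apps come back up, you can poll the platform with the cf CLI, assuming you are logged in and targeting the restored environment:

$ cf apps
$ cf app APP-NAME    # shows per-instance state for a single app; replace APP-NAME with one of your apps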

(Optional) Step 13: Restore On-Demand Service Instances

Note: These procedures restore the on-demand service instances but do not restore service instance data.

If you have on-demand service instances provisioned by an on-demand service broker, perform the following steps to restore them after successfully restoring PCF:

  1. Use the Cloud Foundry Command Line Interface (cf CLI) to target your PCF deployment:

    $ cf api api.YOUR-SYSTEM-DOMAIN
    

  2. Log in:

    $ cf login
    

  3. Perform the following steps to make a list of all the service instances provisioned by your on-demand service broker. A jq helper sketch appears at the end of this step:

    1. List your service offerings:
      $ cf curl /v2/services
      
    2. Record the GUID of the on-demand service offering you want to restore by examining the value for guid under metadata:
      "metadata": {
      "guid": "ab2b01cc-2a22-525a-a333-e6e666a6aa66",
      "url": "/v2/services/ab2b01cc-2a22-525a-a333-e6e666a6aa66",
      "created_at": "2017-02-10T18:19:35Z",
      "updated_at": "2017-02-10T18:19:35Z"
      
    3. List all service plans for the service offering, replacing SERVICE-OFFERING-GUID with the GUID obtained in the previous step:
      $ cf curl /v2/services/SERVICE-OFFERING-GUID/service_plans
      
    4. Record the GUID of each service plan by examining the value for guid under metadata.
    5. For each service plan, list all service instances:
      $ cf curl /v2/service_plans/SERVICE-PLAN-GUID/service_instances
      
    6. Record the GUID of each service instance by examining the value for guid under metadata.
  4. Perform the following steps to obtain the BOSH credentials used by your on-demand service broker:

    1. Navigate to https://YOUR-OPS-MAN-FQDN/api/v0/staged/products in a browser to obtain the product GUID of your tile.
    2. Navigate to https://YOUR-OPS-MAN-FQDN/api/v0/staged/products/PRODUCT-GUID/manifest to obtain your product’s staged manifest.
    3. Copy the manifest into a file on your local machine called manifest.json.
    4. Run the following command to find the name of the deployment’s on-demand broker instance group:
      $ cat manifest.json | jq '(.instance_groups[].name )' | grep on-demand-broker | grep -v -E "register|smoke"
      > redis-on-demand-broker
      
    5. Run the following command to extract the BOSH credentials:
      $ cat manifest.json | jq '(.instance_groups[] |
      select(.name == "redis-on-demand-broker").jobs[] |
      select(.name == "broker").properties.bosh.authentication.uaa )'
      
  5. SSH into your Ops Manager VM. For more information, see the SSH into Ops Manager section of the Advanced Troubleshooting with the BOSH CLI topic.

  6. Using the BOSH credentials retrieved above, authenticate with your BOSH Director by running the following commands with the BOSH CLI v2+:

    $ export BOSH_CLIENT=YOUR-CLIENT-ID
    $ export BOSH_CLIENT_SECRET=YOUR-CLIENT-SECRET
    $ bosh alias-env director -e DIRECTOR-IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE
    

  7. Using the list of service instance GUIDs gathered above, deploy each instance with the following commands:

    $ bosh -e director manifest \
    -d service-instance_SERVICE-INSTANCE-GUID > /tmp/manifest.yml
    $ bosh -e director \
    -d service-instance_SERVICE-INSTANCE-GUID deploy /tmp/manifest.yml
    

  8. After deploying all service instances, remove the manifest from /tmp:

    $ rm /tmp/manifest.yml
    

  9. Restart any Elastic Runtime apps bound to these services so that they pick up the recreated service instances.
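
The GUID gathering in step 3 above can be scripted with jq. This is a sketch; it assumes jq is installed on the machine running the cf CLI and uses the same Cloud Controller v2 endpoints shown above:

$ cf curl /v2/services | jq -r '.resources[] | .metadata.guid + "  " + .entity.label'
$ cf curl /v2/services/SERVICE-OFFERING-GUID/service_plans | jq -r '.resources[].metadata.guid'
$ cf curl /v2/service_plans/SERVICE-PLAN-GUID/service_instances | jq -r '.resources[].metadata.guid'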

(Optional) Step 14: Restore Non-Tile Deployments

If you have any deployments that were deployed manually with the BOSH Director rather than through an Ops Manager Tile, perform the following steps to restore the VMs.

  1. Identify the names of the deployments that you need to restore. Do not include the deployments from Ops Manager Tiles. Run the following command to obtain a list of all deployments on your BOSH Director:

    $ bosh -e BOSH-DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      deployments
    

  2. Run the following command for each deployment you need to restore:

    $ bosh -n -e BOSH-DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME \
      cck --resolution=recreate_vm
    

  3. Run the following command to verify the status of the VMs in each deployment:

    $ bosh -e BOSH-DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME \
      vms
    
    The process state for all VMs should show as running.

Remove Unused Disks

If bosh cloud-check does not clean up all disk references, you must manually delete any disks left over from the previous deployment; otherwise, they can prevent the recreated deployments from working.

WARNING: This is a very destructive operation.

To delete the disks, perform one of the following procedures:

  • Use BOSH CLI to delete the disks by performing the following steps:

    1. Target the redeployed BOSH Director using the BOSH CLI by performing the procedures in Step 7: Retrieve BOSH Director Address and Credentials.
    2. List the deployments by running the following command:
      $ bosh -e DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE deployments
      
    3. Delete each deployment with the following command:
      $ bosh -e DIRECTOR-IP \
      --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE \
      -d DEPLOYMENT-NAME delete-deployment
      
  • Log in to your IaaS account and delete the disks manually. Run the following command to retrieve a list of disk IDs:

    $ bosh -e BOSH-DIRECTOR-IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE instances -i
    

Once the disks are deleted, continue with Step 10: Remove Stale Cloud IDs for All Deployments.
