Backing Up Pivotal Cloud Foundry Manually

Note: You can use BOSH Backup and Restore (BBR) to back up Pivotal Cloud Foundry (PCF). See the backup instructions here.

This topic describes the procedure for manually backing up each critical backend PCF component. Pivotal recommends backing up your installation settings frequently, and before making any changes to your PCF deployment, such as configuring tiles in Ops Manager.

To back up a deployment, you must do the following:

  • Step 1: Record the Cloud Controller database encryption credentials
  • Step 2: Export installation settings
  • Step 3: Download the BOSH manifest
  • Step 4: Temporarily stop the Cloud Controller
  • Step 5: Create and export backup files for each critical backend component
  • Step 6: Restart the Cloud Controller

To restore your backup, see the Restoring Pivotal Cloud Foundry from Backup topic.

Step 1: Record the Cloud Controller Database Encryption Credentials

From the Installation Dashboard, select Pivotal Elastic Runtime > Credentials and locate the Cloud Controller section. Record the Cloud Controller DB Encryption Credentials. You must provide these credentials if you contact Pivotal Support for help restoring your installation.


Step 2: Export Installation Settings

Pivotal recommends that you export your installation settings frequently as a backup. This option is only available after you have deployed at least once. Always export an installation before following the steps in the Import Installation Settings section of the Restoring Pivotal Cloud Foundry from Backup topic.

Note: Exporting your installation only backs up your installation settings. It does not back up your virtual machines (VMs) or any external MySQL databases.

From the Installation Dashboard in the Ops Manager interface, click your user name in the top right navigation and select Settings.

Export installation settings exports the current PCF installation settings and assets. When you export an installation, the exported file contains the base VM images, all necessary packages, and references to the installation IP addresses. As a result, an exported installation file can exceed 5 GB in size.


Note: For versions of Ops Manager 1.3 or older, the process of archiving files for export may exceed the timeout limit of 600 seconds and result in a 500 error. To resolve this issue, you can manually increase the timeout value, or assign additional resources to the Ops Manager VM to improve performance. For more information, see the Pivotal Support Knowledge Base.
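
If you prefer to script this step, the Ops Manager API exposes the same export. The following sketch assumes you have obtained a UAA access token as described in the Using the Ops Manager API topic and that your Ops Manager version provides the installation_asset_collection endpoint; the output file name is arbitrary.

    # Sketch: download the installation settings archive through the Ops Manager API.
    $ curl "https://OPS-MAN-FQDN/api/v0/installation_asset_collection" \
    -X GET \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN" \
    -o installation.zip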

Step 3: Download the BOSH Manifest

To download the BOSH manifest for your deployment, use the BOSH CLI (either v1 or v2) or the Ops Manager API.

Using BOSH CLI v1

First, identify and target the BOSH Director by performing the following steps:

  1. Install Ruby and the BOSH CLI Ruby gem on a machine outside of your PCF deployment.

  2. From the Installation Dashboard in Ops Manager, select Ops Manager Director > Status and record the IP address listed for the Director. You access the BOSH Director using this IP address.

  3. Click Credentials and record the Director credentials.

  4. From the command line, run bosh target to log into the BOSH Director using the IP address and credentials that you recorded:

    $ bosh target DIRECTOR_IP
    Target set to 'microbosh-1234abcd1234abcd1234'
    Email(): director
    Enter password: ********************
    Logged in as 'director'
    

    Note: If bosh target does not prompt you for your username and password, run bosh login.

Next, download the BOSH manifest for the product by performing the following steps:

  1. Run bosh deployments to identify the name of your current BOSH deployment:

    $ bosh deployments
    +-------------+--------------+-------------------------------------------------+
    | Name        | Release(s)   | Stemcell(s)                                     |
    +-------------+--------------+-------------------------------------------------+
    | cf-example  | cf-mysql/10  | bosh-vsphere-esxi-ubuntu-trusty-go_agent/2690.3 |
    |             | cf/183.2     |                                                 |
    +-------------+--------------+-------------------------------------------------+
    

  2. Run bosh download manifest DEPLOYMENT-NAME LOCAL-SAVE-NAME to download and save the BOSH deployment manifest. You need this manifest to locate information about your databases. Repeat this step for each deployment manifest. Replace DEPLOYMENT-NAME with the name of the current BOSH deployment. For this procedure, use cf.yml as the LOCAL-SAVE-NAME.

    $ bosh download manifest cf-example cf.yml
    Deployment manifest saved to `cf.yml'
    
  3. Place the .yml file in a secure location.

Using BOSH CLI v2

First, target the BOSH Director by performing the following steps:

  1. Install the BOSH v2 CLI on a machine outside of your PCF deployment. You can use the jumpbox for this task.
  2. From the Installation Dashboard in Ops Manager, select Ops Manager Director > Status and record the IP address listed for the Director. You access the BOSH Director using this IP address.

  3. Click Credentials and record the Director credentials.
  4. From the command line, log into the BOSH Director using the IP address and credentials that you recorded:
    $ bosh -e DIRECTOR_IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERT log-in
    Email (): director
    Password (): *******************
    Successfully authenticated with UAA
    Succeeded
    

Next, identify the deployment and download the BOSH manifest for the product by performing the following steps:

  1. After logging in to your BOSH Director, run the following command to identify the name of the BOSH deployment that contains PCF:

    $ bosh -e DIRECTOR_IP \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate deployments

    Name         Release(s)
    cf-example   push-apps-manager-release/661.1.24
                 cf-backup-and-restore/0.0.1
                 binary-buildpack/1.0.11
                 capi/1.28.0
                 cf-autoscaling/91
                 cf-mysql/35
                 ...

    In the above example, the name of the BOSH deployment that contains PCF is cf-example.

  2. Run the following command to download the BOSH manifest, replacing DEPLOYMENT-NAME with the deployment name you retrieved in the previous step:

    $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME manifest > /tmp/cf.yml
    
  3. Place the .yml file in a secure location.
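
The command above saves the manifest for a single deployment. If the Director reports several deployments, the sketch below loops over all of them; the --column flag of the BOSH v2 CLI and the certificate path are carried over from the commands above as assumptions, so adjust them for your environment.

    # Sketch: save the manifest of every deployment reported by the Director.
    $ for dep in $(bosh -e DIRECTOR_IP \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate \
    deployments --column=name); do \
    bosh -e DIRECTOR_IP -d "$dep" manifest > "/tmp/${dep}.yml"; \
    done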

Using the Ops Manager API

First, identify and target the BOSH Director by performing the following steps:

  1. Install the BOSH v2 CLI on a machine outside of your PCF deployment.
  2. Perform the procedures in the Using the Ops Manager API topic to authenticate and access the Ops Manager API.
  3. Use the GET /api/v0/deployed/products endpoint to retrieve a list of deployed products, replacing UAA-ACCESS-TOKEN with the access token recorded in the Using the Ops Manager API topic:
    $ curl "https://OPS-MAN-FQDN/api/v0/deployed/products" \ 
    -X GET \ 
    -H "Authorization: Bearer UAA-ACCESS-TOKEN"
  4. In the response to the above request, locate the product with an installation_name starting with cf- and copy its guid (a scripted sketch for extracting it follows these steps).
  5. Run the following curl command, replacing PRODUCT-GUID with the value of guid from the previous step:

    $ curl "https://OPS-MAN-FQDN/api/v0/deployed/products/PRODUCT-GUID/manifest" \ 
    -X GET \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN" > /tmp/cf.yml

  6. Place the .yml file in a secure location.
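
As an aid to step 4 above, you can filter the deployed products response with jq instead of reading the JSON by hand. The sketch below assumes jq is installed on your workstation; it is not part of the Ops Manager tooling.

    # Sketch: print the guid of the product whose installation_name starts with "cf-".
    $ curl -s "https://OPS-MAN-FQDN/api/v0/deployed/products" \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN" | \
    jq -r '.[] | select(.installation_name | startswith("cf-")) | .guid'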

Step 4: Stop Cloud Controller

To stop the Cloud Controller, use the BOSH CLI (either v1 or v2).

Using BOSH CLI v1

  1. Run bosh vms DEPLOYMENT-NAME to view a list of VMs in your PCF deployment.

    $ bosh vms cf-example
    +-------------------------------------------+---------+----------------------------------+--------------+
    | Job/index                                 | State   | Resource Pool                    | IPs          |
    +-------------------------------------------+---------+----------------------------------+--------------+
    | ccdb-partition-bd784/0                    | running | ccdb-partition-bd784             | 10.85.xx.xx  | 
    | cloud_controller-partition-bd784/0        | running | cloud_controller-partition-bd784 | 10.85.xx.xx  |
    | cloud_controller_worker-partition-bd784/0 | running | cloud_controller-partition-bd784 | 10.85.xx.xx  |
    | clock_global-partition-bd784/0            | running | clock_global-partition-bd784     | 10.85.xx.xx  |
    | nats-partition-bd784/0                    | running | nats-partition-bd784             | 10.85.xx.xx  |
    | router-partition-bd784/0                  | running | router-partition-bd784           | 10.85.xx.xx  |
    | uaa-partition-bd784/0                     | running | uaa-partition-bd784              | 10.85.xx.xx  |
    +-------------------------------------------+---------+----------------------------------+--------------+
    

  2. Perform the following steps for each Cloud Controller VM, excluding the Cloud Controller Database VM:

    1. SSH onto the VM:
      $ bosh ssh JOB-NAME
    2. From the VM, list the running processes:
      $ monit summary
    3. Stop all processes that start with cloud_controller_:
      $ monit stop PROCESS-NAME

Using BOSH CLI v2

  1. Run bosh instances to view a list of VM instances in your selected deployment.

    $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME instances
    

    The command returns results similar to the following:

    Instance                                                            Process State  AZ       IPs
    autoscaling-register-broker/4305bc6d-b391-4d12-af1e-97c42dc746bb    -              default  10.85.101.41
    autoscaling/4a96fc03-ad48-4452-a3a1-21666b56c166                    -              default  10.85.101.40
    bootstrap/952de267-6498-4437-a4eb-d352d9412d85                      -              default  -
    clock_global/a41be911-0b64-477b-be95-04823fe4588e                   running        default  10.85.101.15
    cloud_controller/d8190587-9bd5-436c-9b98-2b307025ef37               running        default  10.85.101.14
    cloud_controller_worker/5059b2a7-5691-47e3-ac45-4874024beb56        running        default  10.85.101.24
    consul_server/06383f02-3837-4ba0-b30a-c49a4aaae832                  running        default  10.85.101.16
    diego_brain/4690bb25-0ef3-43fc-b5f9-902e536340f5                    running        default  10.85.101.31
    diego_cell/c0d1845c-a84e-48b6-9051-f0454e201226                     running        default  10.85.101.25
    ...
    
    The names of the Cloud Controller VMs begin with cloud_controller.

  2. Perform the following steps for each Cloud Controller VM, excluding the Cloud Controller Database VM:

    1. SSH onto the VM:
      $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME ssh JOB-NAME
      For example:
      $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME ssh cloud_controller
    2. From the VM, list the running processes:
      $ monit summary
    3. Stop all processes that start with cloud_controller_ (see the scripted sketch after these steps):
      $ monit stop PROCESS-NAME
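
The monit commands are the same on the VM whether you reach it with BOSH CLI v1 or v2. The loop below is an optional sketch for stopping all matching processes in one pass; it assumes the names reported by monit summary begin with cloud_controller_ and that you can control monit on the VM (you may need root privileges).

    # Sketch: stop every monit process whose name begins with cloud_controller_.
    $ monit summary | grep -o "cloud_controller_[a-z0-9_]*" | \
    while read -r process; do monit stop "$process"; done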

Step 5: Back Up Critical Backend Components

Your Elastic Runtime deployment contains several critical data stores that must be present for a complete restore. This section describes the procedure for backing up the databases and the servers associated with your PCF installation.

You must back up each of the following:

  • Cloud Controller Database
  • UAA Database
  • WebDAV Server
  • Pivotal MySQL Server

Note: If you are running your databases or filestores externally, ensure that you back up your external databases and filestores.

Note: To follow the backup instructions below, your network must be configured to allow access to the BOSH Director VM from your local machine. If you do not have local administrator access, use the scp command to copy the TAR file to the BOSH Director VM. For example: scp vcap@192.0.2.10:webdav.tar.gz vcap@192.0.2.3:/webdav.tar.gz

Back Up Pivotal MySQL Server

Note: The Elastic Runtime deployment contains an embedded MySQL Server that serves as the data store for the Application Usage Events, Notifications, and Autoscaler services. If you are using the internal MySQL database, it also serves as the data store for the Cloud Controller and UAA.

There are two ways to back up the MySQL Server:

Backing up MySQL Server Manually

  1. From the Installation Dashboard in Ops Manager, select Pivotal Elastic Runtime.

  2. Click Credentials and record the Mysql Admin Credentials for the MySQL Server.


  3. From your local machine, use bosh ssh to SSH into the MySQL database VM.
    Using BOSH CLI v2:

    $ bosh -e ENVIRONMENT -d DEPLOYMENT-NAME ssh mysql
    
    For example:
    $ bosh -e myenv -d cf-1234567 ssh mysql
    

  4. On the MySQL database VM, run the following command to export data from all the internal MySQL databases used by Elastic Runtime:

    $ /var/vcap/packages/mariadb/bin/mysqldump -u root -p \
     --all-databases > /tmp/cf_databases.sql
    
    When prompted, enter the password you obtained for the Mysql Admin Credentials.

  5. From your local machine, run bosh scp to download the exported databases.
    Using BOSH CLI v2:

    $ bosh -e ENVIRONMENT -d DEPLOYMENT-NAME \
    scp mysql:/tmp/cf_databases.sql FILEPATH/cf_databases.sql
    
    For example:
    $ bosh -e myenv -d cf-1234567 \
    scp mysql:/tmp/cf_databases.sql ~/cf_databases.sql
    
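As a quick sanity check before you store the file, you can confirm that the dump captured the expected databases and compress it. This optional sketch uses the ~/cf_databases.sql path from the example above.

    # Optional: list the databases contained in the dump, then compress it.
    $ grep 'CREATE DATABASE' ~/cf_databases.sql
    $ gzip ~/cf_databases.sql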

Back Up WebDAV Server

  1. In the BOSH deployment manifest, locate the nfs_server component and record the address:

    nfs_server:
      address: 192.0.2.10
      network: 192.0.2.0/24
    syslog_aggregator:
      address:
      port:
    

    Note: The job name associated with the WebDAV server is nfs_server for historical reasons. The server is not based on NFS.

  2. From the Installation Dashboard in Ops Manager, select Elastic Runtime and click Credentials > Link to Credential. Record the File Storage server VM credentials.


  3. SSH into the WebDAV server VM and create a TAR file:

    $ ssh vcap@192.0.2.10 'cd /var/vcap/store && tar cz shared' > webdav.tar.gz
    

    Note: The TAR file that you create to back up WebDAV server might be large. To estimate the size of the TAR file before you create it, run the following command: ssh vcap@192.0.2.10 tar -cf - /dir/to/archive/ | wc -c
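
After you create webdav.tar.gz, you can optionally confirm that the archive is readable before moving it to secure storage. This sketch runs wherever the file now resides.

    # Optional: list the first few entries to verify the archive is intact.
    $ tar -tzf webdav.tar.gz | head -n 5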

Step 6: Start Cloud Controller

To start the Cloud Controller, use the BOSH CLI (either v1 or v2).

Using BOSH CLI v1

  1. Run bosh vms to view a list of VMs in your selected deployment. The names of the Cloud Controller VMs begin with cloud_controller.

    $ bosh vms
    +-------------------------------------------+---------+----------------------------------+--------------+
    | Job/index                                 | State   | Resource Pool                    | IPs          |
    +-------------------------------------------+---------+----------------------------------+--------------+
    | cloud_controller-partition-bd784/0        | failing | cloud_controller-partition-bd784 | 10.85.xx.xx  |
    | cloud_controller_worker-partition-bd784/0 | running | cloud_controller-partition-bd784 | 10.85.xx.xx  |
    | clock_global-partition-bd784/0            | running | clock_global-partition-bd784     | 10.85.xx.xx  |
    | nats-partition-bd784/0                    | running | nats-partition-bd784             | 10.85.xx.xx  |
    | router-partition-bd784/0                  | running | router-partition-bd784           | 10.85.xx.xx  |
    | uaa-partition-bd784/0                     | running | uaa-partition-bd784              | 10.85.xx.xx  |
    +-------------------------------------------+---------+----------------------------------+--------------+
    

  2. Perform the following steps for each Cloud Controller VM, excluding the Cloud Controller Database VM:

    1. SSH onto the VM:
      $ bosh ssh JOB-NAME
    2. From the VM, list the running processes:
      $ monit summary
    3. Start all processes that start with cloud_controller_:
      $ monit start PROCESS-NAME

Using BOSH CLI v2

  1. Run bosh instances to view a list of VM instances in your selected deployment.

    $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME instances
    

    The command returns results similar to the following:

    Instance                                                            Process State  AZ       IPs
    autoscaling-register-broker/4305bc6d-b391-4d12-af1e-97c42dc746bb    -              default  10.85.101.41
    autoscaling/4a96fc03-ad48-4452-a3a1-21666b56c166                    -              default  10.85.101.40
    bootstrap/952de267-6498-4437-a4eb-d352d9412d85                      -              default  -
    clock_global/a41be911-0b64-477b-be95-04823fe4588e                   running        default  10.85.101.15
    cloud_controller/d8190587-9bd5-436c-9b98-2b307025ef37               running        default  10.85.101.14
    cloud_controller_worker/5059b2a7-5691-47e3-ac45-4874024beb56        running        default  10.85.101.24
    consul_server/06383f02-3837-4ba0-b30a-c49a4aaae832                  running        default  10.85.101.16
    diego_brain/4690bb25-0ef3-43fc-b5f9-902e536340f5                    running        default  10.85.101.31
    diego_cell/c0d1845c-a84e-48b6-9051-f0454e201226                     running        default  10.85.101.25
    ...
    
    The names of the Cloud Controller VMs begin with cloud_controller.

  2. Perform the following steps for each Cloud Controller VM, excluding the Cloud Controller Database VM:

    1. SSH onto the VM:
      $ bosh -e DIRECTOR_IP -d DEPLOYMENT-NAME ssh JOB-NAME
    2. From the VM, list the running processes:
      $ monit summary
    3. Start all processes that start with cloud_controller_:
      $ monit start PROCESS-NAME
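
Once the cloud_controller_ processes report running again, you can optionally confirm that the Cloud Controller API is answering requests. In this sketch, SYSTEM-DOMAIN is a placeholder for your Elastic Runtime system domain.

    # Optional check: the info endpoint returns JSON once the API is available.
    $ curl -s https://api.SYSTEM-DOMAIN/v2/info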

Follow the steps in the Restoring Pivotal Cloud Foundry from Backup topic to restore a backup, to import an installation and restore your settings, or to share your settings with another user.
