Backing Up and Restoring On-Demand MySQL for PCF

This topic describes how to configure automated backups for MySQL for Pivotal Cloud Foundry (PCF), and how to manually restore a MySQL service instance from a backup.

Note: As of v2.2.0, the MySQL for PCF tile no longer includes the option to disable automated backups. You must configure automated backups.

Note: Pivotal recommends that you always configure a single-node plan, because this streamlines the restore process for leader-follower plans.

About Automated Backups

Automated backups do the following:

  • Periodically create and upload backups suitable for restoring all of the databases used by the service instance.
  • Operate without locking apps out of the database; no downtime.
  • Include a metadata file that contains the critical details of the backup, including the calendar time of the backup.
  • Encrypt backups within the MySQL for PCF VM; unencrypted data is never transported outside the MySQL for PCF deployment.

Backup Files and Metadata

When MySQL for PCF runs a backup, it uploads two files with Unix epoch-timestamped filenames of the form mysql-backup-TIMESTAMP:

  • The encrypted data backup file mysql-backup-TIMESTAMP.tar.gpg
  • A metadata file mysql-backup-TIMESTAMP.txt
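The TIMESTAMP in these filenames is a Unix epoch time. As a quick sketch, the epoch can be extracted and converted to calendar time with standard tools (GNU date syntax; the filename below reuses the example from the download procedure later in this topic):

```shell
# Extract the epoch timestamp from a backup filename and convert it
# to UTC calendar time (requires GNU date for the -d flag).
FILENAME="mysql-backup-1489168980-0.tar.gpg"
EPOCH="$(echo "$FILENAME" | sed 's/^mysql-backup-\([0-9]*\).*$/\1/')"
echo "$EPOCH"        # 1489168980
date -u -d "@$EPOCH" # the backup's start time in UTC
```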

The metadata file contains information about the backup and looks like this:

ibbackup_version = 2.4.5
end_time = 2017-04-24 21:00:03
lock_time = 0
binlog_pos =
incremental = N
encrypted = N
server_version = 5.7.16-10-log
start_time = 2017-04-24 21:00:00
tool_command = --user=admin --password=... --stream=tar tmp/
innodb_from_lsn = 0
innodb_to_lsn = 2491844
format = tar
compact = N
name =
tool_name = innobackupex
partial = N
compressed = N
uuid = fd13cf26-2930-11e7-871e-42010a000807
tool_version = 2.4.5

Within this file, the most important items are the start_time and the server_version entries.

The backup process does not interrupt MySQL service, but backups only reflect transactions that completed before their start_time.

Note: Although both compressed and encrypted show as N in this file, the backup uploaded by MySQL for PCF is both compressed and encrypted. This is a known issue.
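For scripting, the key fields can be pulled out of the metadata file with awk. A minimal sketch, using a temporary file that reproduces part of the example above:

```shell
# Reproduce part of the example metadata file (stand-in for a real one).
META="$(mktemp)"
cat > "$META" <<'EOF'
start_time = 2017-04-24 21:00:00
end_time = 2017-04-24 21:00:03
server_version = 5.7.16-10-log
EOF

# Split each line on " = " and print the value for the wanted keys.
START_TIME="$(awk -F' = ' '$1 == "start_time" {print $2}' "$META")"
SERVER_VERSION="$(awk -F' = ' '$1 == "server_version" {print $2}' "$META")"
echo "$START_TIME"      # 2017-04-24 21:00:00
echo "$SERVER_VERSION"  # 5.7.16-10-log
```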

About Configuring Automated Backups

MySQL for PCF automatically backs up databases to external storage.

  • How and where—There are four options for how automated backups transfer backup data and where they store it:

    • Option 1: Back up with SCP—MySQL for PCF runs an SCP command that secure-copies backups to a VM or physical machine operating outside of PCF. SCP stands for secure copy protocol, and offers a way to securely transfer files between two hosts. The operator provisions the backup machine separately from their PCF installation. This is the fastest option.
    • Option 2: Back up to Ceph or S3—MySQL for PCF runs an Amazon S3 client that saves backups to an S3 bucket, a Ceph storage cluster, or another S3-compatible endpoint certified by Pivotal.
    • Option 3: Back up to GCS—MySQL for PCF runs a GCS SDK that saves backups to a Google Cloud Storage bucket.
    • Option 4: Back up to Azure Storage—MySQL for PCF runs an Azure SDK that saves backups to an Azure storage account.
  • When—Backups follow a schedule that you specify with a cron expression.

  • What is backed up—Each MySQL instance backs up its entire MySQL data directory, consistent to a specific point in time.

To configure automated backups, follow the procedures below according to the option you choose for external storage.
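The Cron Schedule fields referenced below use standard five-field cron syntax (minute, hour, day of month, month, day of week). A few illustrative schedules (examples, not recommendations):

```
# minute  hour  day-of-month  month  day-of-week
0 2 * * *     # every day at 02:00
0 */8 * * *   # every eight hours
30 3 * * 6    # every Saturday at 03:30
```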

Option 1: Back Up with SCP

SCP enables the operator to use any desired storage solution on the destination VM.

To back up your database using SCP, complete the following procedures:

Create a Public and Private Key Pair

MySQL for PCF accesses a remote host as a user with a private key for authentication. Pivotal recommends that this user and key pair be solely for MySQL for PCF.

  1. Determine the remote host that you will be using to store backups for MySQL for PCF. Ensure that the MySQL service instances can access the remote host.

    Note: Pivotal recommends using a VM outside the PCF deployment as the destination for SCP backups. As a result, you might need to enable public IPs for the MySQL VMs.

  2. (Recommended) Create a new user for MySQL for PCF on the destination VM.

  3. (Recommended) Create a new public and private key pair for authenticating as the above user on the destination VM.
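Steps 2 and 3 can be sketched as follows on the destination VM. The key type, comment, and file names here are illustrative assumptions, not required values:

```shell
# Generate a dedicated key pair for MySQL for PCF backups
# (run as, or on behalf of, the backup user on the destination VM).
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -N "" -C "mysql-for-pcf-backups" \
  -f "$KEYDIR/mysql_backup_key"

# The private key is what you later paste into the tile's Private Key
# field; the public key is appended to the backup user's
# ~/.ssh/authorized_keys on the destination VM.
cat "$KEYDIR/mysql_backup_key.pub"
```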

Configure Backups in Ops Manager

Use Ops Manager to configure MySQL for PCF to back up using SCP.

  1. In Ops Manager, open the MySQL for PCF tile Backups pane.

  2. Select SCP.
    SCP Backup Configuration Form

  3. Fill in the fields as follows:

    • Username—Enter the user that you created above.
    • Private Key—Enter the private key that you created above. Store the corresponding public key on the destination VM for SSH and SCP access.
    • Hostname—Enter the IP or DNS entry that should be used to access the destination VM.
    • Destination Directory—Enter the directory in which MySQL for PCF should upload backups.
    • Cron Schedule—Enter a cron schedule, using standard cron syntax, to take backups of each service instance.
    • Fingerprint—Enter the fingerprint of the destination VM’s public key. This helps to detect any changes to the destination VM.
    • Enable Email Alerts—Select to receive email notifications when a backup failure occurs. Also verify that you have done the following:
      • Added all users who need to be notified about failed backups to System org and System space
      • Configured email notifications in Pivotal Application Service (PAS) or Elastic Runtime. For instructions, see Configure Email Notifications for your IaaS: AWS, Azure, GCP, OpenStack, or vSphere.

Option 2: Back Up to Ceph or S3

To back up your database on Ceph or Amazon S3, complete the following procedures:

Create a Policy and Access Key

MySQL for PCF accesses your S3 store through a user account. Pivotal recommends that this account be solely for MySQL for PCF. You must apply a minimal policy that lets the user account upload backups to your S3 store.

  1. Create a policy for your MySQL for PCF user account.


    Policy creation varies depending on the S3 provider. Follow the relevant procedure to give MySQL for PCF the following permissions:

    • List and upload to buckets
    • (Optional) Create buckets in order to use buckets that do not already exist

    For example, in AWS, to create a new custom policy, go to IAM > Policies > Create Policy > Create Your Own Policy and paste in the following permissions:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ServiceBackupPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:ListBucketMultipartUploads",
            "s3:ListMultipartUploadParts",
            "s3:CreateBucket",
            "s3:PutObject"
          ],
          "Resource": [
            "arn:aws:s3:::MY_BUCKET_NAME/*",
            "arn:aws:s3:::MY_BUCKET_NAME"
          ]
        }
      ]
    }
    

  2. (Recommended) Create a new user for MySQL for PCF and record its Access Key ID and Secret Access Key, the user credentials.

  3. Attach the policy you created to the AWS user account that MySQL for PCF will use to access S3 (IAM > Policies > Policy Actions > Attach).
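Before pasting the policy into the console, you can sanity-check that the document is valid JSON. A minimal sketch (the temporary file name is arbitrary, and python3 is assumed to be available):

```shell
# Write the policy to a temporary file and validate it as JSON.
POLICY="$(mktemp)"
cat > "$POLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ServiceBackupPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:CreateBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::MY_BUCKET_NAME/*",
        "arn:aws:s3:::MY_BUCKET_NAME"
      ]
    }
  ]
}
EOF
python3 -m json.tool "$POLICY" > /dev/null && echo "policy is valid JSON"
```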

Configure Backups in Ops Manager

Use Ops Manager to connect MySQL for PCF to your S3 account.

  1. In Ops Manager, open the MySQL for PCF tile Backups pane.

  2. Select Ceph or Amazon S3. S3 Backup Configuration Form

  3. Fill in the fields as follows:

    • Access Key ID and Secret Access Key—Enter the S3 Access Key ID and Secret Access Key from above.
    • Endpoint URL—Enter the S3 compatible endpoint URL for uploading backups. URL must start with http:// or https://. The default is https://s3.amazonaws.com
    • Region—Enter the region where your bucket is located or the region where you want a bucket to be created. If the bucket does not already exist, it is created automatically.
    • Bucket Name—Enter the name of your bucket. Do not include an s3:// prefix, a trailing /, or underscores. Pivotal recommends using the naming convention DEPLOYMENT-backups, such as sandbox-backups.
    • Bucket Path—Enter the path in the bucket to store backups. Do not include a trailing /. Pivotal recommends using mysql-v2.
    • Cron Schedule—Enter a cron schedule, using standard cron syntax, to take backups of each service instance.
    • Enable Email Alerts—Select to receive email notifications when a backup failure occurs. Also verify that you have done the following:
      • Added all users who need to be notified about failed backups to System org and System space
      • Configured email notifications in PAS (or Elastic Runtime). For instructions, see Configure Email Notifications for your IaaS: AWS, Azure, GCP, OpenStack, or vSphere.

Option 3: Back Up to GCS

To back up your database on Google Cloud Storage (GCS), complete the following procedures:

Create a Policy and Access Key

MySQL for PCF accesses your GCS store through a service account. Pivotal recommends that this account be solely for MySQL for PCF. You must apply a minimal policy that lets the user account upload backups to your GCS store.

  1. In the GCS console, create a new service account for MySQL for PCF: IAM and Admin > Service Accounts > Create Service Account.

    MySQL for PCF needs the following permissions for this account:

    • List and upload to buckets
    • (Optional) Create buckets in order to use buckets that do not already exist
  2. Enter a unique name in the Service account name field, such as MySQL-for-PCF.

  3. In the Roles dropdown, grant the MySQL-for-PCF service account the Storage Admin role.

  4. Check the Furnish a new private key box so that a new key is created and downloaded.

  5. Click Create and take note of the name and location of the service account JSON file that is downloaded.

Configure Backups in Ops Manager

Use Ops Manager to connect MySQL for PCF to your GCS account.

  1. In Ops Manager, open the MySQL for PCF tile Backups pane.

  2. Select GCS.

    GCS Backup Configuration Form

  3. Fill in the fields as follows:

    • Project ID—Enter the Project ID for the Google Cloud project that you are using.
    • Bucket Name—Enter the bucket name in which to upload.
    • Service Account JSON—Enter the contents of the service account JSON file that you downloaded when creating a service account above.
    • Cron Schedule—Enter a cron schedule, using standard cron syntax, to take backups of each service instance.
    • Enable Email Alerts—Select to receive email notifications when a backup failure occurs. Also verify that you have done the following:
      • Added all users who need to be notified about failed backups to System org and System space
      • Configured email notifications in PAS (or Elastic Runtime). For instructions, see Configure Email Notifications for your IaaS: AWS, Azure, GCP, OpenStack, or vSphere.

Option 4: Back Up to Azure Storage

Complete the following steps to back up your database to your Azure Storage account.

  1. In Ops Manager, open the MySQL for PCF tile Backups pane.

  2. Select Azure. Azure Backup Configuration Form

  3. Fill in the fields as follows:

    • Account—Enter the Account name for the Microsoft Azure account that you are using.
    • Azure Storage Access Key—Enter one of the storage access keys that can be used to access the Azure Storage account.
    • Container Name—Enter the container name that backups should be uploaded to.
    • Destination Directory—Enter the directory inside the container to which backups should be uploaded.
    • Blob Store Base URL—By default, backups are sent to the public Azure blob store. To use an on-premise blob store, enter the hostname of the blob store.
    • Cron Schedule—Enter a cron schedule, using standard cron syntax, to take backups of each service instance.
    • Enable Email Alerts—Select to receive email notifications when a backup failure occurs. Also verify that you have done the following:
      • Added all users who need to be notified about failed backups to System org and System space
      • Configured email notifications in PAS (or Elastic Runtime). For instructions, see Configure Email Notifications for your IaaS: AWS, Azure, GCP, OpenStack, or vSphere.

Manual Backup

MySQL for PCF v2.4 disables remote admin access to MySQL databases. To access your MySQL database to perform a manual backup, you must create a service key for each service instance you want to back up.

This backup acquires a global read lock on all tables, but does not hold it for the entire duration of the dump.

Perform the following steps to back up your MySQL for PCF data manually:

  1. Use the Cloud Foundry Command Line Interface (cf CLI) to target the Cloud Controller of your PCF deployment with cf api api.YOUR-SYSTEM-DOMAIN. For example:

    $ cf api api.sys.cf-example.com
    For more information about installing and using the cf CLI, see the cf CLI documentation.

  2. Log in:

    $ cf login

  3. Create a service key for the MySQL service instance. Run the following command:

    cf create-service-key SERVICE-INSTANCE-NAME SERVICE-KEY-NAME \
    -c '{"read-only":true}'

    Where:

    • SERVICE-INSTANCE-NAME is the name of the existing MySQL service instance that contains the data you want to back up.
    • SERVICE-KEY-NAME is a name you choose for the new service key.

    For example:

    $ cf create-service-key mysql-spring spring-key \
    -c '{"read-only":true}'
    Creating service key spring-key for service instance mysql-spring as admin...
    OK
    

  4. After creating the service key, retrieve its information. Run the following command:

    cf service-key SERVICE-INSTANCE-NAME SERVICE-KEY-NAME

    Where:

    • SERVICE-INSTANCE-NAME is the name of the MySQL service instance you created a service key for.
    • SERVICE-KEY-NAME is the name of the newly created service key.

    For example:

    $ cf service-key mysql-spring spring-key
    Getting key spring-key for service instance mysql-spring as admin...

    {
      "hostname": "q-n3s3y1.q-g696.bosh",
      "jdbcUrl": "jdbc:mysql://q-n3s3y1.q-g696.bosh:3306/cf_e2d148a8_1baa_4961_b314_2431f57037e5?user=abcdefghijklm\u0026password=123456789",
      "name": "cf_e2d148a8_1baa_4961_b314_2431f57037e5",
      "password": "123456789",
      "port": 3306,
      "uri": "mysql://abcdefghijklm:123456789@q-n3s3y1.q-g696.bosh:3306/cf_e2d148a8_1baa_4961_b314_2431f57037e5?reconnect=true",
      "username": "abcdefghijklm"
    }

  5. Examine the output and record the following values:

    • hostname: The MySQL BOSH DNS hostname
    • password: The password for the user that can be used to perform backups of the service instance database
    • username: The username for the user that can be used to perform backups of the service instance database
  6. Connect to the database either by using an SSH tunnel or by connecting directly to its IP address. For more information, see Establish a Connection to a Service Instance from Outside Your PCF Deployment.

  7. To view a list of your databases, run the following command:

    mysql --user=USERNAME --password=PASSWORD \
    --host=MYSQL-IP \
    --silent --silent --execute='show databases'

    Where:

    • USERNAME is the username retrieved from the output of cf service-key.
    • PASSWORD is the password retrieved from the output of cf service-key.
    • MYSQL-IP is the MySQL IP address. This value is 0 if you are connecting via SSH tunnel.

    For example:

    $ mysql --user=abcdefghijklm --password=123456789 \
    --host=10.10.10.5 \
    --silent --silent --execute='show databases'
    

  8. Remove the following databases from the list. Do not back up these databases:

    • cf_metadata
    • information_schema
    • mysql
    • performance_schema
    • sys
  9. To back up the databases remaining in the list, run the following command for each database:

    mysqldump --single-transaction --user=USERNAME --password=PASSWORD \
    --host=MYSQL-IP \
    --databases DB-NAME > BACKUP.sql
    

    Where:

    • USERNAME is the username retrieved from the output of cf service-key.
    • PASSWORD is the password retrieved from the output of cf service-key.
    • MYSQL-IP is the MySQL IP address.
    • DB-NAME is the name of the database.
    • BACKUP is a name you create for the backup file. Use a different filename for each backup.

    For example:

    $ mysqldump --single-transaction --user=abcdefghijklm --password=123456789 \
    --host=10.10.10.5 \
    --databases canary_db > canary_db.sql
    

    For more information about the mysqldump utility, see mysqldump in the MySQL Documentation.
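Steps 7 through 9 can be scripted as a filter-then-dump loop. In this sketch a hard-coded list stands in for live show databases output, and the mysqldump loop is left as comments because it needs a reachable server:

```shell
# Stand-in for the output of `show databases`.
ALL_DBS='cf_metadata
information_schema
mysql
performance_schema
sys
canary_db'

# Drop the system databases; what remains is what you back up.
USER_DBS="$(printf '%s\n' "$ALL_DBS" \
  | grep -vx -e cf_metadata -e information_schema -e mysql -e performance_schema -e sys)"
echo "$USER_DBS"   # canary_db

# Each remaining database would then be dumped separately:
# for DB in $USER_DBS; do
#   mysqldump --single-transaction --user="$USERNAME" --password="$PASSWORD" \
#     --host="$MYSQL_IP" --databases "$DB" > "$DB.sql"
# done
```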

Restore a Service Instance from Backup

Restoring MySQL for PCF from backup is a manual process primarily intended for disaster recovery. Restoring a MySQL for PCF service instance replaces all of its data and running state.

To restore a MySQL for PCF instance from an offsite backup, download the backup and restore to a new instance by following these procedures:

Retrieve Backup Encryption Key

Perform the following steps to retrieve the backup encryption key:

  1. Run cf service to determine the GUID associated with the service instance that you want to restore:
    $ cf service MY-INSTANCE-NAME --guid
    12345678-90ab-cdef-1234-567890abcdef
  2. Perform the steps in Gather Credential and IP Address Information and SSH into Ops Manager in Advanced Troubleshooting with the BOSH CLI to SSH into the Ops Manager VM.
  3. From the Ops Manager VM, log in to your BOSH Director with the BOSH CLI. See Log in to the BOSH Director in Advanced Troubleshooting with the BOSH CLI.
  4. Perform the steps in Find the CredHub Credentials, and record the values for identity and password.
  5. Set the API target of the CredHub CLI to your BOSH CredHub server.

    Run the following command:
    credhub api https://BOSH-DIRECTOR:8844 \
        --ca-cert=/var/tempest/workspaces/default/root_ca_certificate

    Where BOSH-DIRECTOR is the IP address of the BOSH Director VM.

    For example:
    $ credhub api https://10.0.0.5:8844 \
        --ca-cert=/var/tempest/workspaces/default/root_ca_certificate
  6. Log in to CredHub.

    Run the following command:
    credhub login \
        --client-name=CREDHUB-CLIENT-NAME \
        --client-secret=CREDHUB-CLIENT-SECRET
    Where CREDHUB-CLIENT-NAME and CREDHUB-CLIENT-SECRET are the identity and password values that you recorded in step 4.

    For example:
    $ credhub login \
        --client-name=credhub \
        --client-secret=abcdefghijklm123456789
  7. Use the CredHub CLI to retrieve the backup encryption key. Run the following command:
    credhub get \
      -n /p-bosh/service-instance_GUID/backup_encryption_key
    For example:
    $ credhub get \
      -n /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/backup_encryption_key
    Examine the output and copy the backup encryption key under value. For example:
    id: d6e5bd10-3b60-4a1a-9e01-c76da688b847
    name: /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/backup_encryption_key
    type: password
    value: UMF2DXsqNPPlCNWMdVMcNv7RC3Wi10
    version_created_at: 2018-04-02T23:16:09Z
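If you are scripting this step, the key can be extracted from the default credhub get output with awk. The sample variable below reproduces the example output above:

```shell
# Stand-in for the output of `credhub get`.
CREDHUB_OUTPUT='id: d6e5bd10-3b60-4a1a-9e01-c76da688b847
name: /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/backup_encryption_key
type: password
value: UMF2DXsqNPPlCNWMdVMcNv7RC3Wi10
version_created_at: 2018-04-02T23:16:09Z'

# Print only the second field of the "value:" line.
KEY="$(printf '%s\n' "$CREDHUB_OUTPUT" | awk '$1 == "value:" {print $2}')"
echo "$KEY"   # UMF2DXsqNPPlCNWMdVMcNv7RC3Wi10
```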

Download the Backup Artifact

Download the backup artifact from your blob storage.

These instructions assume that you are using AWS S3 as your backup destination. If you are using a different backup destination, see the documentation for your backup provider to download the backup.

Perform the following steps to download the backup artifact from an AWS S3 bucket:

  1. From the Ops Manager VM, download the manifest for the service instance deployment by specifying the deployment name as service-instance_GUID and a filename for the manifest. For example:

    $ bosh2 -e my-env \
    -d service-instance_12345678-90ab-cdef-1234-567890abcdef \
    manifest > ./manifest.yml
    

  2. Inspect the downloaded manifest and record the following properties:

    • properties.service-backup.destinations[0].config.bucket_name: This is the bucket where the backups are uploaded.
    • properties.service-backup.destinations[0].config.bucket_path: This is the path within the bucket above.
  3. Log in to the AWS CLI. For information about how to download and use the AWS CLI, see AWS Command Line Interface.

  4. List the available backups for the instance, replacing BUCKET-NAME and BUCKET-PATH with values from above. The artifacts are sorted by time.

    $ aws s3 ls \
    --recursive s3://BUCKET-NAME/BUCKET-PATH/service-instance_12345678-90ab-cdef-1234-567890abcdef/
    

  5. Choose the most recent backup file, or an older backup you want to restore from. The backups are timestamped in the filename and have a .gpg extension.

  6. Download the selected backup:

    $ aws s3 cp \
    s3://BUCKET-NAME/BUCKET-PATH/service-instance_12345678-90ab-cdef-1234-567890abcdef/YEAR/MONTH/DATE/mysql-backup-1489168980-0.tar.gpg \
    ./a-local-path/mysql-backup-1489168980-0.tar.gpg
    

    Note: You can also log in to AWS and download S3 backups from a browser.
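Because the epoch timestamp is part of the filename, the most recent artifact in a listing sorts last. A sketch of picking it out, with sample lines standing in for real aws s3 ls output:

```shell
# Stand-in for `aws s3 ls --recursive` output (date, time, size, key).
LISTING='2017-03-08 18:03:00 1024 backups/mysql-backup-1488996180-0.tar.gpg
2017-03-10 18:03:00 2048 backups/mysql-backup-1489168980-0.tar.gpg'

# The key is the last field; epoch timestamps of equal width sort
# lexically in time order, so the newest backup is the last line.
LATEST="$(printf '%s\n' "$LISTING" | awk '{print $NF}' | sort | tail -n 1)"
echo "$LATEST"   # backups/mysql-backup-1489168980-0.tar.gpg
```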

Restore the Service Instance

Because restoring a service instance is destructive, Pivotal recommends that you restore to a new and unused service instance.

These instructions assume you have a backup downloaded.

To restore the backup to a new service instance for a single node, leader-follower, or highly available (HA) cluster plan, do the following:

Create and Prepare a New Service Instance for Restore

To prepare a new service instance for restore, do the following:

  1. To create a new MySQL service instance, run the following command:

    cf create-service p.mysql SERVICE-PLAN NEW-INSTANCE-NAME
    

    Where:

    • SERVICE-PLAN is the name of the service plan for your new service instance.
    • NEW-INSTANCE-NAME is the name of the new service instance.

    • If you are using an HA cluster plan, you can only restore backup artifacts to an HA topology.

    • If you are using a single node or leader-follower plan, you can only restore backup artifacts to a single-node topology.

    • If you want to restore to a leader-follower plan, you can do one of the following:
      • Create a single-node instance to restore to, then update the plan on this instance to leader-follower after you run the restore utility. For how to update the plan, see step 1 of Restage the Service Instance.
      • Create a leader-follower instance to restore to, then scale the instances to one before running the restore utility. For how to scale instances, see step 6 of this procedure.

      Note: Pivotal recommends that you restore to a single-node plan.

    For more information, see Create a Service Instance.

  2. To monitor the status of the service instance creation, run the following command:

    cf service NEW-INSTANCE-NAME
    

    Where NEW-INSTANCE-NAME is the name of the new service instance.

  3. To locate and record the GUID associated with your new service instance, run the following command:

    cf service NEW-INSTANCE-NAME --guid
    

    Where NEW-INSTANCE-NAME is the name of the new service instance.

  4. To retrieve the admin password for your new service instance, do the procedure Retrieve Admin and Read-Only Admin Credentials for a Service Instance.

  5. From the Ops Manager VM, to find and record the new instance name and GUID from BOSH, run the following command:

    bosh2 -e ENVIRONMENT -d DEPLOYMENT instances

    Where DEPLOYMENT is service-instance_ followed by the GUID that you recorded above.
  6. If you created a leader-follower or HA cluster service instance to restore to, do the following:

    1. To scale the instances to one, modify the deployment manifest as follows:

      instance_groups:
      - name: mysql
        ...
        instances: 1    # Scale instances to 1
        ...
      
    2. To redeploy the deployment manifest, run the following command:

      bosh2 -e ENVIRONMENT -d DEPLOYMENT deploy PATH-TO-MANIFEST.yml
      

      For example:

      bosh2 -e my-env -d service-instance_12345678-90ab-cdef-1234-567890abcdef deploy ./manifest.yml
      
  7. To copy the downloaded backup to the new service instance, run the following command:

    bosh2 -e my-env \
    -d my-dep scp mysql-backup-TIMESTAMP.tar.gpg BOSH-INSTANCE:DESTINATION-PATH
    

    Where:

    • BOSH-INSTANCE is mysql/INSTANCE-GUID. For example, mysql/d7ff8d46-c3e8-449f-aea7-5a05b0a1929c.
    • DESTINATION-PATH is where the backup file saves on the BOSH VM. For example, /tmp/.
  8. Use the BOSH CLI to SSH in to the newly created MySQL service instance. For more information, see BOSH SSH.

  9. After logging in to the MySQL VM, become root by running the following command:

    sudo su
    

Restore a Single Node or Leader-Follower Instance

WARNING: This is a destructive action and should only be run on a new and unused service instance.

Be sure you have followed the procedure in Create and Prepare a New Service Instance for Restore above.

To restore a single node or leader-follower service instance, do the following:

  1. Run the restore utility to restore the backup artifact into the data directory. This process:

    • Deletes any existing data
    • Decrypts and untars the backup artifact
    • Restores the backup artifact into the MySQL data directory

    mysql-restore --encryption-key ENCRYPTION-KEY \
    --mysql-username admin --mysql-password ADMIN-PASSWORD --restore-file RESTORE-FILE-PATH

    Where:

    • ENCRYPTION-KEY is the backup encryption key that you retrieved from CredHub.
    • ADMIN-PASSWORD is the admin password that you retrieved for the new service instance.
    • RESTORE-FILE-PATH is the path on the MySQL VM to which you copied the backup artifact, such as /tmp/mysql-backup-TIMESTAMP.tar.gpg.

  2. Exit the MySQL VM.

Restore an HA Cluster Instance

WARNING: This is a destructive action and should only be run on a new and unused service instance.

Be sure you have followed the procedure in Create and Prepare a New Service Instance for Restore above.

To restore an HA cluster, do the following:

  1. To pause the local database server, run the following command:

    monit stop all
    
  2. To confirm that all jobs are listed as not monitored, run the following command:

    watch monit summary 
    
  3. To delete the existing MySQL data that is stored on disk, run the following command:

    rm -rf /var/vcap/store/mysql/*
    
  4. Move the compressed backup file to the node using scp.

  5. To decrypt and expand the file, run the following command, which pipes the gpg output to tar:

    gpg --decrypt mysql-backup.tar.gpg | tar -C /var/vcap/store/mysql -xvf -
    
  6. To change the owner of the data directory, run the following:

     chown -R vcap:vcap /var/vcap/store/mysql
    

    MySQL expects the data directory to be owned by a particular user.

  7. To start all services with monit, run the following command:

    monit start all
    
  8. To watch the summary until all jobs are listed as running, run the following command:

    watch monit summary
    
  9. Exit out of the MySQL node.
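The pipe in step 5 streams the decrypted archive straight into the data directory. The tar half can be exercised locally as below; gpg is omitted because decryption needs the real backup key, and all paths are temporary stand-ins:

```shell
# Build a small tar stream and extract it into a destination directory
# with -C, mirroring: gpg --decrypt ... | tar -C /var/vcap/store/mysql -xvf -
WORK="$(mktemp -d)"
mkdir -p "$WORK/src/data" "$WORK/dest"
echo "sample" > "$WORK/src/data/ibdata1"

tar -C "$WORK/src" -cf - . | tar -C "$WORK/dest" -xf -
ls "$WORK/dest/data"   # ibdata1
```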

Restage the Service Instance

After you restore your single node, leader-follower, or HA cluster service instance, you must restage your new service instance.

To restage your service instance, do the following:

  1. If you restored to a service instance with a single-node plan but want a leader-follower plan, update the plan now:

    cf update-service NEW-INSTANCE-NAME -p LEADER-FOLLOWER-PLAN
    
  2. If you scaled instances to one in step 6 of Create and Prepare a New Service Instance for Restore above, do the following:

    1. To scale your instance, do one of the following:

      • If you scaled a leader-follower instance, scale the instances to two by modifying the deployment manifest. Update the instance group for your new service instance as follows:

        instance_groups:
        - name: mysql
          ...
          instances: 2    # Scale instances to 2
          ...
        
      • If you scaled an HA cluster instance, scale the instances to three by modifying the deployment manifest. Update the instance group for your new service instance as follows:

        instance_groups:
        - name: mysql
          ...
          instances: 3    # Scale instances to 3
          ...
        
    2. Redeploy the deployment manifest by running:

      bosh2 -e ENVIRONMENT -d DEPLOYMENT deploy PATH-TO-MANIFEST.yml
      

      For example:

      bosh2 -e my-env -d service-instance_12345678-90ab-cdef-1234-567890abcdef deploy ./manifest.yml
      
  3. Determine if the app is currently bound to a MySQL service instance:

    cf services
    
  4. If the previous step shows that the app is currently bound to a MySQL instance, unbind it:

    cf unbind-service MY-APP OLD-INSTANCE-NAME
    
  5. Update your CF app to bind to the new service instance:

    cf bind-service MY-APP  NEW-INSTANCE-NAME
    
  6. Restage your CF app to make the changes take effect:

    cf restage MY-APP
    

Your app should be running and able to access the restored data.
