Backing up MySQL for Pivotal Platform

This topic describes how to configure automated backups for MySQL for Pivotal Platform.

You can restore an automated backup by following the procedures in Restoring MySQL for Pivotal Platform.

Additionally, developers can create logical backups using mysqldump. For information, see Backing Up and Restoring with mysqldump.

About Automated Backups

Automated backups do the following:

  • Periodically create and upload backups for restoring all of the databases used by a service instance.
  • Operate without locking apps out of the database. There is no downtime.
  • Include a metadata file that contains the critical details for the backup, including the calendar time of the backup.
  • Encrypt backups within the MySQL for Pivotal Platform VM. Unencrypted data is never transferred outside the MySQL for Pivotal Platform deployment.

Note: You must configure automated backups. Automated backups cannot be disabled.

Warning: If MySQL for Pivotal Platform fails to upload a backup, the backup artifact remains on the persistent disk. This can cause the persistent disk to fill up faster. If the persistent disk is full, apps become inoperable. For instructions on how to troubleshoot persistent disk errors, see Persistent Disk is Full.

Backup Files and Metadata

When MySQL for Pivotal Platform runs a backup, it uploads two files with Unix epoch-timestamped filenames:

  • The encrypted data backup file, mysql-backup-TIMESTAMP.tar.gpg
  • A metadata file, mysql-backup-TIMESTAMP.txt
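
To see when a backup was taken, you can convert the Unix epoch TIMESTAMP in the filename to a calendar date. For example, with GNU date (the timestamp below is illustrative):

date -u -d @1493067600
# Mon Apr 24 21:00:00 UTC 2017
# On BSD or macOS, use: date -u -r 1493067600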

The metadata file contains information about the backup that looks similar to the following:

ibbackup_version = 2.4.5
end_time = 2017-04-24 21:00:03
lock_time = 0
binlog_pos =
incremental = N
encrypted = N
server_version = 5.7.16-10-log
start_time = 2017-04-24 21:00:00
tool_command = --user=admin --password=... --stream=tar tmp/
innodb_from_lsn = 0
innodb_to_lsn = 2491844
format = tar
compact = N
name =
tool_name = innobackupex
partial = N
compressed = N
uuid = fd13cf26-2930-11e7-871e-42010a000807
tool_version = 2.4.5

The most important entries in the metadata file are the following:

  • start_time: Use this to determine which transactions are captured in the backup.
  • server_version: Use this to determine potential incompatibilities when restoring an instance with the backup artifact.

The backup process does not interrupt the MySQL service. However, backups only reflect transactions completed before the start_time.

Note: The metadata file sets compressed and encrypted to N, which indicates that the backup is not compressed or encrypted. However, the backup that MySQL for Pivotal Platform uploads is both compressed and encrypted. This is a known issue.

About Configuring Automated Backups

You can configure MySQL for Pivotal Platform to automatically back up databases to external storage. MySQL for Pivotal Platform backs up the entire data directory for each service instance.

MySQL for Pivotal Platform backs up your database on a schedule. You configure this schedule with a cron expression. For information about cron expressions, see CRON expression on Wikipedia.
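
For example, the following standard five-field cron expressions are valid schedules (the values are illustrative only):

# minute  hour  day-of-month  month  day-of-week
0 2 * * *      # every day at 02:00
0 */8 * * *    # every eight hours, on the hour
30 1 * * 6     # every Saturday at 01:30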

To configure automated backups, follow the procedure for your external storage solution:

Back up Using SCP

Secure copy protocol (SCP) enables operators to use any storage solution on the destination VM. This is the fastest method for backing up your database.

When you configure automatic backups with SCP, MySQL for Pivotal Platform runs an SCP command that securely copies backups to a VM or physical machine operating outside of your deployment. The operator provisions the backup machine separately from their installation.
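
Conceptually, each upload resembles an SCP command of the following form. This is only an illustrative sketch; MySQL for Pivotal Platform constructs the actual command itself, and the key path, user, host, and directory names below are placeholders:

scp -i /path/to/backup-private-key -P 22 \
    mysql-backup-1493067600.tar.gpg backup-user@backup-host:/backups/mysql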

To back up your database using SCP:

Create a Public and Private Key‑Pair

MySQL for Pivotal Platform accesses a remote host as a user with a private key for authentication. VMware recommends that this user and key-pair are only used for MySQL for Pivotal Platform.

  1. Determine the remote host that you use to store backups for MySQL for Pivotal Platform. Ensure that the MySQL service instances can access the remote host.

    Note: VMware recommends using a VM outside your deployment for the destination of SCP backups. As a result, you might need to enable public IPs for the MySQL VMs.

  2. (Recommended) Create a new user for MySQL for Pivotal Platform on the destination VM.

  3. (Recommended) Create a new public and private key-pair for authenticating as the above user on the destination VM.
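
The following shell sketch covers steps 2 and 3 above on a Debian-based destination VM. The user name, key file name, and paths are placeholders, not values required by MySQL for Pivotal Platform:

# Create a dedicated user for receiving backups (Debian/Ubuntu syntax).
sudo adduser --disabled-password --gecos "" mysql-backups

# Generate a key pair used only for MySQL for Pivotal Platform backups.
ssh-keygen -t rsa -b 4096 -f mysql-backups-key -N "" -C "mysql-for-pivotal-platform-backups"

# Authorize the public key for the new user.
sudo mkdir -p /home/mysql-backups/.ssh
sudo cp mysql-backups-key.pub /home/mysql-backups/.ssh/authorized_keys
sudo chown -R mysql-backups:mysql-backups /home/mysql-backups/.ssh
sudo chmod 700 /home/mysql-backups/.ssh
sudo chmod 600 /home/mysql-backups/.ssh/authorized_keys

Keep the private key file (mysql-backups-key in this sketch) for the next section, where you paste it into Ops Manager.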

Configure Backups in Ops Manager

Use Ops Manager to configure MySQL for Pivotal Platform to back up using SCP.

  1. In Ops Manager, open the MySQL for Pivotal Platform tile Backups pane.

  2. Select SCP.
    SCP Backup Configuration Form

  3. Configure the fields as follows:

    • Username: Enter the user that you created in Create a Public and Private Key‑Pair above.
    • Private Key: Enter the private key that you created in Create a Public and Private Key‑Pair above. Store the corresponding public key, which is used for SSH and SCP access, on the destination VM.
    • Hostname: Enter the IP address or DNS entry used to access the destination VM.
    • Destination Directory: Enter the directory that MySQL for Pivotal Platform uploads backups to.
    • SCP Port: Enter the SCP port number for SSH. This port is usually 22.
    • Cron Schedule: Enter a cron expression using standard syntax. The cron expression sets the schedule for taking backups for each service instance. For information about cron expressions, see CRON expression on Wikipedia.
    • Fingerprint: (Optional) Enter the fingerprint of the destination VM public key. The fingerprint is used to detect any changes to the destination VM. See the example after this list for one way to obtain it.
    • Enable Email Alerts: Select to receive email notifications when a backup fails. Also verify that:
      • Email notifications are configured in Pivotal Application Service (PAS). See (Optional) Configure Email Notifications.
      • All users who need to be notified about failed backups have the Space Developer role in the system org and system space. The username must be an email address.
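
    If you want to fill in the Fingerprint field, one way to obtain the destination VM public key fingerprint is with OpenSSH (7.2 or later); the hostname below is a placeholder:

    ssh-keyscan -t rsa backup-host.example.com | ssh-keygen -lf -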

Back up to Amazon S3 or Ceph

Warning: For MySQL for Pivotal Platform v2.7.5 and v2.7.6, backups do not upload to S3 external storage. Configure a different external storage option.

When you configure automatic backups for Amazon S3 or Ceph, MySQL for Pivotal Platform runs an Amazon S3 client that saves the backups to one of the following:

  • An Amazon S3 bucket
  • A Ceph storage cluster
  • Another S3-compatible endpoint certified by VMware

For information about Amazon S3 buckets, see the Amazon documentation. For information about Ceph storage clusters, see the Ceph documentation.

To back up your database to Amazon S3 or Ceph:

Create a Custom Policy and Access Key

MySQL for Pivotal Platform accesses your S3 bucket through a user account. VMware recommends that this account is only used by MySQL for Pivotal Platform. You must apply a minimal policy that enables the user account to upload backups to your S3 bucket.

Give the policy the following permissions:

  • List and upload to buckets
  • (Optional) Create buckets if they do not already exist

The procedure in this section assumes that you are using an Amazon S3 bucket. If you are using Ceph or another S3-compatible bucket, follow the documentation for your storage solution to create the policy and access key. For more information about Ceph S3 bucket policies, see the Ceph documentation.

To create a policy and access key in AWS:

  1. Create a policy for your MySQL for Pivotal Platform user account.

    In AWS, create a new custom policy by following this procedure in the AWS documentation.

    Paste in the following permissions:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ServiceBackupPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:ListBucketMultipartUploads",
            "s3:ListMultipartUploadParts",
            "s3:CreateBucket",
            "s3:PutObject"
          ],
          "Resource": [
            "arn:aws:s3:::MY_BUCKET_NAME/*",
            "arn:aws:s3:::MY_BUCKET_NAME"
          ]
        }
      ]
    }
    
  2. Record the Access Key ID and Secret Access Key user credentials for a new user account by following this procedure in the AWS documentation. Ensure you select Programmatic access and Attach existing policies to user directly. You must attach the policy you created in the previous step.
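
Alternatively, you can perform both of the steps above with the AWS CLI instead of the console. The following is a minimal sketch; the policy, user, and file names are placeholders, and the policy document is assumed to be saved as service-backup-policy.json:

# Create the custom policy from the JSON shown above.
aws iam create-policy --policy-name mysql-service-backup-policy \
    --policy-document file://service-backup-policy.json

# Create a dedicated user, attach the policy, and generate programmatic credentials.
aws iam create-user --user-name mysql-backup-user
aws iam attach-user-policy --user-name mysql-backup-user \
    --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/mysql-service-backup-policy
aws iam create-access-key --user-name mysql-backup-user
# The last command returns the Access Key ID and Secret Access Key to record.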

Configure Backups in Ops Manager

Use Ops Manager to connect MySQL for Pivotal Platform to your S3 account.

  1. In Ops Manager, open the MySQL for Pivotal Platform tile Backups pane.

  2. Select Ceph or Amazon S3.

    S3 Backup Configuration Form

  3. Configure the fields as follows:

    • Access Key ID and Secret Access Key: Enter the S3 Access Key ID and Secret Access Key that you created in Create a Custom Policy and Access Key above.
    • Endpoint URL: Enter the S3-compatible endpoint URL for uploading backups. The URL must start with http:// or https://. The default is https://s3.amazonaws.com.
    • Region: Enter the region where your bucket is located or the region where you want a bucket to be created. If the bucket does not already exist, it is created automatically.
    • Bucket Name: Enter the name of your bucket. Do not include an s3:// prefix, a trailing /, or underscores. VMware recommends using the naming convention DEPLOYMENT-backups. For example, sandbox-backups.
    • Bucket Path: Enter the path in the bucket to store backups. Do not include a trailing /. VMware recommends using mysql-v2.
    • Cron Schedule: Enter a cron expression using standard syntax. The cron expression sets the schedule for taking backups for each service instance. For information about cron expressions, see CRON expression on Wikipedia.
    • Enable Email Alerts: Select to receive email notifications when a backup fails. Also verify that:
      • Email notifications are configured in Pivotal Application Service (PAS). See (Optional) Configure Email Notifications.
      • All users who need to be notified about failed backups have the Space Developer role in the system org and system space. The username must be an email address.
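
    Before saving, you can optionally confirm that the credentials can list the bucket, for example with the AWS CLI (the values shown are placeholders):

    AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY \
        aws s3 ls s3://MY_BUCKET_NAME --endpoint-url https://s3.amazonaws.com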

Back up to GCS

When you configure automatic backups for a Google Cloud Storage (GCS) bucket, MySQL for Pivotal Platform runs a GCS SDK that saves backups to a GCS bucket.

For information about GCS buckets, see the GCS documentation.

To back up your database to Google Cloud Storage (GCS):

Create a Service Account and Private Key

MySQL for Pivotal Platform accesses your GCS bucket through a service account. VMware recommends that this account is only used by MySQL for Pivotal Platform. You must apply a minimal policy that enables the service account to upload backups to your GCS bucket.

The service account needs the following permissions:

  • List and upload to buckets
  • (Optional) Create buckets if they do not already exist

To create a service account and private key in GCS:

  1. Create a new service account by following this procedure in the GCS documentation.
    When you create the service account:
    1. Enter a unique name for the service account.
    2. Add the Storage Admin role.
    3. Create and download a private key JSON file.
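
Alternatively, you can create the service account and key with the gcloud CLI instead of the console. The following is a minimal sketch; the service account name, project ID, and file name are placeholders:

# Create the service account.
gcloud iam service-accounts create mysql-backups \
    --display-name "MySQL for Pivotal Platform backups"

# Grant it the Storage Admin role on the project.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
    --member "serviceAccount:mysql-backups@MY_PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/storage.admin"

# Create and download a private key JSON file for the service account.
gcloud iam service-accounts keys create service-account.json \
    --iam-account mysql-backups@MY_PROJECT_ID.iam.gserviceaccount.com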

Configure Backups in Ops Manager

Use Ops Manager to connect MySQL for Pivotal Platform to your GCS account.

  1. In Ops Manager, open the MySQL for Pivotal Platform tile Backups pane.

  2. Select GCS.

    GCS Backup Configuration Form

  3. Configure the fields as follows:

    • Project ID: Enter the Project ID for the Google Cloud project that you are using.
    • Bucket name: Enter the bucket name that MySQL for Pivotal Platform uploads backups to.
    • Service Account JSON: Enter the contents of the service account JSON file that you downloaded when creating a service account in Create a Service Account and Private Key above.
    • Cron Schedule: Enter a cron expression using standard syntax. The cron expression sets the schedule for taking backups for each service instance. For information about cron expressions, see CRON expression on Wikipedia.
    • Enable Email Alerts: Select to receive email notifications when a backup fails. Also verify that:
      • Email notifications are configured in Pivotal Application Service (PAS). See (Optional) Configure Email Notifications.
      • All users who need to be notified about failed backups have the Space Developer role in the system org and system space. The username must be an email address.
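
    Before saving, you can optionally confirm that the service account can list the bucket, for example with the Google Cloud SDK (the names shown are placeholders):

    gcloud auth activate-service-account --key-file service-account.json
    gsutil ls gs://MY_BUCKET_NAME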

Back up to Azure Storage

When you configure automatic backups for Azure Storage, MySQL for Pivotal Platform runs an Azure SDK that saves backups to an Azure storage account.

For information about Azure Storage, see the Azure documentation.

To back up your database to Azure Storage:

Create a Storage Account and Access Key

MySQL for Pivotal Platform accesses your Azure Storage account through a storage access key. VMware recommends that this account is only used by MySQL for Pivotal Platform. You must apply a minimal policy that enables the storage account to upload backups to your Azure Storage account.

The storage account needs the following permissions:

  • List and upload to containers
  • (Optional) Create containers if they do not already exist

To create a storage account and access key:

  1. Create a new storage account by following this procedure in the Azure documentation.

  2. View your access key by following this procedure in the Azure documentation.
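
Alternatively, you can create the storage account and view its access keys with the Azure CLI instead of the portal. The following is a minimal sketch; the account, resource group, location, and container names are placeholders:

# Create a storage account.
az storage account create --name mysqlbackups --resource-group MY_RESOURCE_GROUP \
    --location eastus --sku Standard_LRS

# List the account's access keys; use one of them in Ops Manager.
az storage account keys list --account-name mysqlbackups --resource-group MY_RESOURCE_GROUP

# Optionally create the container that backups are uploaded to.
az storage container create --name mysql-backups \
    --account-name mysqlbackups --account-key "YOUR_STORAGE_ACCESS_KEY"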

Configure Backups in Ops Manager

To back up your database to your Azure Storage account:

  1. In Ops Manager, open the MySQL for Pivotal Platform tile Backups pane.

  2. Select Azure.

    Azure Backup Configuration Form

  3. Configure the fields as follows:

    • Account: Enter the Azure Storage account name that you created in Create a Storage Account and Access Key above.
    • Azure Storage Access Key: Enter one of the storage access keys that you viewed in Create a Storage Account and Access Key above.
    • Container Name: Enter the container name that MySQL for Pivotal Platform uploads backups to.
    • Destination Directory: Enter the directory inside the container that backups are uploaded to.
    • Blob Store Base URL: To use on-premises blob storage, enter the hostname of the blob storage. By default, backups are sent to the public Azure blob storage.
    • Cron Schedule: Enter a cron expression using standard syntax. The cron expression sets the schedule for taking backups for each service instance. For information about cron expressions, see CRON expression on Wikipedia.
    • Enable Email Alerts: Select to receive email notifications when a backup fails. Also verify that:
      • Email notifications are configured in Pivotal Application Service (PAS). See (Optional) Configure Email Notifications.
      • All users who need to be notified about failed backups have the Space Developer role in the system org and system space. The username must be an email address.
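
    Before saving, you can optionally confirm that the access key works, for example with the Azure CLI (the names shown are placeholders):

    az storage blob list --container-name mysql-backups \
        --account-name mysqlbackups --account-key "YOUR_STORAGE_ACCESS_KEY" --output table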