Troubleshooting MySQL for PCF

This topic provides operators with basic instructions for troubleshooting on-demand MySQL for PCF.

For information on temporary MySQL for PCF service interruptions, see Service Interruptions.

Troubleshooting Errors

This section provides information on how to troubleshoot specific errors or error messages.

Failed Install

  1. Certificate issues: The on-demand broker (ODB) requires valid certificates. Ensure that your certificates are valid and generate new ones if necessary.
  2. Deploy fails: Deploys can fail for a variety of reasons. View the logs using Ops Manager to determine why the deploy is failing.
  3. Networking problems:
    • Cloud Foundry cannot reach the MySQL for PCF service broker
    • Cloud Foundry cannot reach the service instances
    • The service network cannot access the BOSH director
  4. Register broker errand fails.
  5. The smoke test errand fails.
  6. Resource sizing issues: These occur when the resource sizes selected for a given plan are smaller than the MySQL for PCF service requires to function. Check your resource configuration in Ops Manager and ensure that it matches the service's recommendations.
  7. Other service-specific issues.

Cannot Create or Delete Service Instances

If developers report errors such as the following:

Instance provisioning failed: There was a problem completing your request. Please contact your operations team providing the following information: service: redis-acceptance, service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089, broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac, task-id: 442, operation: create
  1. If the BOSH error shows a problem with the deployment manifest:

    1. Download the manifest for the on-demand service instance by running:
      bosh download manifest service-instance_SERVICE-INSTANCE-GUID MY-SERVICE.yml.

    2. Check the manifest for configuration errors.

    Note: This error does not apply if you are using BOSH CLI v2. In that case, to troubleshoot possible problems with the manifest, open it in a text editor and inspect the manifest there.

  2. To continue troubleshooting, log in to BOSH and target the MySQL for PCF service instance using the instructions on parsing a Cloud Foundry error message.

  3. Retrieve the BOSH task ID from the error message and run one of the following commands depending on your Ops Manager version:

    Ops Manager Version BOSH Command
    1.10 and earlier bosh task TASK-ID
    1.11 bosh2 task TASK-ID
    1.12 and later bosh task TASK-ID
  4. If you need more information, access the broker logs and use the broker-request-id from the error message above to search the logs.
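As a sketch of steps 3 and 4, the fields in a broker error message can be pulled out with standard shell tools before being fed to the BOSH CLI; the message below is the example from above, and the commented commands assume Ops Manager 1.12 or later.

```shell
# Extract the troubleshooting fields from a broker error message.
msg='service: redis-acceptance, service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089, broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac, task-id: 442, operation: create'

task_id=$(echo "$msg" | sed -n 's/.*task-id: \([0-9]*\).*/\1/p')
request_id=$(echo "$msg" | sed -n 's/.*broker-request-id: \([0-9a-f-]*\).*/\1/p')
guid=$(echo "$msg" | sed -n 's/.*service-instance-guid: \([0-9a-f-]*\).*/\1/p')

echo "task-id=$task_id"
echo "broker-request-id=$request_id"

# With those values in hand (Ops Manager 1.12 and later):
#   bosh task "$task_id"
#   grep "$request_id" broker.log
```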

Broker Request Timeouts

If developers report errors such as:

Server error, status code: 504, error code: 10001, message: The request to the service broker timed out: https://BROKER-URL/v2/service_instances/e34046d3-2379-40d0-a318-d54fc7a5b13f/service_bindings/aa635a3b-ef6d-41c3-a23f-55752f3f651b
  1. Confirm that Cloud Foundry (CF) is connected to the service broker.
  2. Check the BOSH queue size:
    1. Log into BOSH as an admin.
    2. Run one of these commands depending on your Ops Manager version:
      • 1.10 and earlier: bosh tasks
      • 1.11: bosh2 tasks
      • 1.12 and later: bosh tasks
  3. If there are a large number of queued tasks, the system may be under too much load. BOSH is configured with two workers and one status worker, which may not be sufficient for the level of load. Advise app developers to try again once the system is under less load.
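A minimal sketch of the queue check in step 3, using canned output in place of a live director (the task lines below are invented stand-ins for real `bosh tasks` output):

```shell
# Count queued tasks to judge whether the director is backed up.
tasks_output='442  queued      admin  create deployment
443  queued      admin  create deployment
444  processing  admin  run errand'

queued=$(echo "$tasks_output" | grep -c 'queued')
echo "queued tasks: $queued"
# A large count relative to the two configured workers suggests the
# system is under heavy load; advise developers to retry later.
```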

Cannot Bind to or Unbind from Service Instances

Instance Does Not Exist

If developers report errors such as:

Server error, status code: 502, error code: 10001, message: Service broker error: instance does not exist

Follow these steps:

  1. Type cf service MY-INSTANCE --guid. This confirms that the MySQL for PCF service instance exists in BOSH and CF, and returns a GUID.

  2. Using the GUID obtained above, run one of the following BOSH CLI commands depending on your Ops Manager version:

    Ops Manager Version BOSH Command
    1.10 and earlier bosh vms service-instance_GUID
    1.11 bosh2 -d service-instance_GUID vms
    1.12 and later bosh -d service-instance_GUID vms

If the BOSH deployment is not found, it has been deleted from BOSH. Contact Pivotal support for further assistance.
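The deployment name used in step 2 is always the service instance GUID prefixed with service-instance_; a sketch, using the GUID from the example message above:

```shell
# Build the BOSH deployment name from a service instance GUID.
guid='ae9e232c-0bd5-4684-af27-1b08b0c70089'   # from: cf service MY-INSTANCE --guid
deployment="service-instance_${guid}"
echo "$deployment"
# Then, on Ops Manager 1.12 and later:
#   bosh -d "$deployment" vms
```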

Other Errors

If developers report errors such as:

Server error, status code: 502, error code: 10001, message: Service broker error: There was a problem completing your request. Please contact your operations team providing the following information: service: example-service, service-instance-guid: 8d69de6c-88c6-4283-b8bc-1c46103714e2, broker-request-id: 15f4f87e-200a-4b1a-b76c-1c4b6597c2e1, operation: bind

To find out the exact issue with the binding process:

  1. Access the service broker logs.

  2. Search the logs for the broker-request-id string listed in the error message above.

  3. Contact Pivotal support for further assistance if you are unable to resolve the problem.

Cannot Connect to a Service Instance

If developers report that their app cannot use service instances that they have successfully created and bound:

Ask the user to send application logs that show the connection error. If the error is originating from the service, then follow MySQL for PCF-specific instructions. If the issue appears to be network-related, then:

  1. Check that application security groups are configured correctly. Access should be configured for the service network that the tile is deployed to.

  2. Ensure that the network the PCF Elastic Runtime tile is deployed to has network access to the service network. You can find the network definition for this service network in the Ops Manager Director tile.

  3. In Ops Manager, open the service tile and note the service network configured in the Networks tab.

  4. In Ops Manager, open the ERT tile and note the network it is assigned to. Make sure that these two networks can access each other.

Service instances can also become temporarily inaccessible during upgrades and VM or network failures. See Service Interruptions for more information.

Upgrade All Instances Fails

If the upgrade-all-service-instances errand fails, look at the errand output in the Ops Manager log.

If an instance fails to upgrade, debug and fix it before running the errand again to prevent any failure issues from spreading to other on-demand instances.

Once the Ops Manager log no longer lists the deployment as failing, re-run the errand to upgrade the rest of the instances.

Missing Logs and Metrics

If no logs are being emitted by the on-demand broker, check that your syslog forwarding address is correct in Ops Manager.

  1. Ensure you have configured syslog for the tile.
  2. Ensure that you have network connectivity between the networks that the tile is using and the syslog destination. If the destination is external, you need to use the public IP VM extension feature available in your Ops Manager tile configuration settings.
  3. Verify that the Firehose is emitting metrics:
    1. Install the cf nozzle plugin.
    2. Run cf nozzle -f ValueMetric | grep --line-buffered "on-demand-broker/MY-SERVICE" to find logs from your service in the cf nozzle output.
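The grep filter in step 2 can be exercised against canned nozzle output; the metric lines below are invented stand-ins for real Firehose output:

```shell
# Filter simulated Firehose output for on-demand-broker metrics.
sample='origin:"on-demand-broker/my-service" eventType:ValueMetric name:"total_instances" value:3
origin:"rep" eventType:ValueMetric name:"CapacityTotalMemory" value:1024'

matches=$(echo "$sample" | grep -c 'on-demand-broker/my-service')
echo "matched $matches metric line(s)"
```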

If no metrics appear within five minutes, verify that the broker network has access to the Loggregator system on all required ports.

Contact Pivotal support if you are unable to resolve the issue.

Unable to Determine Leader and Follower (Errand Error)

This problem happens when the configure-leader-follower errand fails because it cannot determine the VM roles.

Symptom

The configure-leader-follower errand exits with 1 and the errand logs contain the following:
Unable to determine leader and follower based on transaction history.

Explanation

Something has happened to the instances, such as a failure or manual intervention. As a result, there is not enough information available to determine the correct state and topology without operator intervention to resolve the issue.

Solution

Use the inspect errand to determine which instance should be the leader. Then, using the orchestration errands and backup/restore, put the service instance into a safe topology, and rerun the configure-leader-follower errand. The example below shows one outcome that the inspect errand can return:
  1. Use the inspect errand to retrieve relevant information about the two VMs:
    $ bosh2 -e my-env -d my-dep run-errand inspect
    [...]
    Instance   mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
    Exit Code  0
    Stdout     -
    Stderr     2017/12/11 18:25:54 Started executing command: inspect
             2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
             2017/12/11 18:25:54
             Has Data: false
             Read Only: true
             GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
             Replication Configured: false

    Instance   mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
    Exit Code  0
    Stdout     -
    Stderr     2017/12/11 18:25:54 Started executing command: inspect
             2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
             2017/12/11 18:25:54
             Has Data: true
             Read Only: true
             GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
             Replication Configured: true

    2 errand(s)
    Succeeded
    In the above scenario, the first instance is missing data and does not have replication configured. The second instance has data and has replication configured. The instructions below resolve this by copying data to the first instance and resuming replication.
  2. Take a backup of the instance that has data (the second instance in this example) using the Manual Backup steps.
  3. Restore the backup artifact to the instance that is missing data (the first instance) using the Manual Restore steps.

    At this point, the instances have equivalent data.
  4. Run the configure-leader-follower errand to reconfigure replication: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand configure-leader-follower --instance=mysql/INDEX-OF-LEADER
    $ bosh2 -e my-env -d my-dep \
      run-errand configure-leader-follower \
      --instance=mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
    
Both Leader and Follower Instances Are Writable (Errand Error)

    This problem happens when the configure-leader-follower errand fails because both VMs are writable and the VMs might hold differing data.

    Symptom

    The configure-leader-follower errand exits with 1 and the errand logs contain the following:

    Both mysql instances are writable. Please ensure no divergent data and set one instance to read-only mode.

    Explanation

    MySQL for PCF tries to ensure that there is only one writable instance of the leader-follower pair at any given time. However, in certain situations, such as network partitions or manual intervention outside of the provided BOSH errands, it is possible for both instances to be writable. The service instances remain in this state until an operator resolves the issue, ensuring that the correct instance is promoted and reducing the potential for data divergence.

    Solution

    1. Use the inspect errand to retrieve the GTID Executed set for each VM:
      $ bosh2 -e my-env -d my-dep run-errand inspect
      [...]
      Instance   mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
      Exit Code  0
      Stdout     -
      Stderr     2017/12/11 18:25:54 Started executing command: inspect
                2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
               2017/12/11 18:25:54
               Has Data: true
               Read Only: false
               GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-23
               Replication Configured: false

      Instance   mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
      Exit Code  0
      Stdout     -
      Stderr     2017/12/11 18:25:54 Started executing command: inspect
               2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
               2017/12/11 18:25:54
               Has Data: true
               Read Only: false
               GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
               Replication Configured: false

      2 errand(s)
      Succeeded
      If the GTID Executed sets are the same, or one is a subset of the other, continue to Step 2. If they have diverged (neither equivalent nor a subset), continue to Step 4.
    2. Look at the value of GTID Executed for both instances.
      • If the range after the GUID is equivalent, either instance can be made read-only, as described in Step 3.
      • If one instance has a range that is a subset of the other, the instance with the subset should be made read-only, as described in Step 3.
    3. Based on the information you gathered in the step above, run the make-read-only errand to make the appropriate instance read-only: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand make-read-only --instance=mysql/MYSQL-SUBSET-INSTANCE
      $ bosh2 -e my-env -d my-dep \
        run-errand make-read-only \
        --instance=mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
      [...]
      Succeeded
      
    4. If the GTID Executed sets are neither equivalent nor subsets, data has diverged and you must determine what data has diverged as part of the procedure below:
      1. Use the make-read-only errand to set both instances to read-only, to prevent further data divergence. Run the errand once for each instance: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand make-read-only --instance=mysql/MYSQL-INSTANCE
        $ bosh2 -e my-env -d my-dep \
          run-errand make-read-only \
          --instance=mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
        [...]
        Succeeded
      2. Take a backup of both instances using the Manual Backup steps.
      3. Manually inspect the data on each instance to determine the discrepancies, and consolidate the data on the instance that is further ahead. This instance has the higher GTID Executed set and will be the new leader.
      4. Migrate all appropriate data to the new leader instance.
      5. After putting all data on the leader, ssh onto the follower: bosh2 -e ENVIRONMENT -d DEPLOYMENT ssh mysql/INDEX-OF-FOLLOWER
        $ bosh2 -e my-env -d my-dep ssh mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
      6. Become root with the command sudo su.
      7. Stop the mysql process with the command monit stop mysql.
      8. Delete the data directory of the follower with the command rm -rf /var/vcap/store/mysql.
      9. Use the configure-leader-follower errand to copy the leader’s data to the follower and resume replication: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand configure-leader-follower --instance=mysql/INDEX-OF-LEADER
        $ bosh2 -e my-env -d my-dep \
          run-errand configure-leader-follower \
          --instance=mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
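The GTID comparison in step 2 of the solution above can be sketched as follows, assuming both GTID Executed values share the same source GUID and both ranges start at 1 (the values are taken from the example output); truly divergent sets still need the manual inspection described in step 4:

```shell
# Compare the transaction ranges of two GTID Executed sets.
gtid_a='1d774323-de9e-11e7-be01-42010a001014:1-23'
gtid_b='1d774323-de9e-11e7-be01-42010a001014:1-25'

end_a=${gtid_a##*-}   # upper bound of the first instance's range (23)
end_b=${gtid_b##*-}   # upper bound of the second instance's range (25)

if [ "$end_a" -lt "$end_b" ]; then
  decision='first instance holds the subset: make it read-only'
elif [ "$end_a" -gt "$end_b" ]; then
  decision='second instance holds the subset: make it read-only'
else
  decision='ranges are equivalent: either instance can be made read-only'
fi
echo "$decision"
```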

    Both Leader and Follower Instances Are Read-Only

    In a leader-follower topology, the leader VM is writable and the follower VM is read-only. However if both VMs are read only, apps cannot write to the database.

    Symptom

    Developers report that apps cannot write to the database.

    Explanation

    This problem happens if the leader VM fails and the BOSH Resurrector is enabled. When the leader is resurrected, it is set as read-only.

    Solution

    1. Use the inspect errand to confirm that both VMs are in a read-only state: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand inspect
    2. Examine the output and locate the information about the leader-follower MySQL VMs:
      Instance   mysql/4eexample54b-0704-47eb-8eef-eb2example724
      Exit Code  0
      Stdout     -
      Stderr     2017/12/11 18:25:54 Started executing command: inspect
               2017/12/11 18:25:54 Started GET https://999.0.0.1:8443/status
               2017/12/11 18:25:54
               Has Data: true
               Read Only: true
               GTID Executed: 1d779999-de9e-11e7-be01-42010a009999:1-23
               Replication Configured: true

      Instance   mysql/e0exampleade-0114-4d49-a929-cexample8beda
      Exit Code  0
      Stdout     -
      Stderr     2017/12/11 18:25:54 Started executing command: inspect
               2017/12/11 18:25:54 Started GET https://999.0.0.1:8443/status
               2017/12/11 18:25:54
               Has Data: true
               Read Only: true
               GTID Executed: 1d779999-de9e-11e7-be01-42010a009999:1-25
               Replication Configured: false

      2 errand(s)
      Succeeded
    3. If Read Only is set to true for both VMs, make the leader writable using the following command: bosh2 -e ENVIRONMENT -d DEPLOYMENT run-errand configure-leader-follower --instance=mysql/INDEX-OF-LEADER

      For example, if the second instance above is the leader:
      $ bosh2 -e my-env -d my-dep \
        run-errand configure-leader-follower \
        --instance=mysql/e0exampleade-0114-4d49-a929-cexample8beda
      

    Troubleshooting Components

    This section provides guidance on checking for and fixing issues in on-demand service components.

    BOSH Problems

    Missing BOSH Director UUID

    Note: This error does not occur if you are using BOSH CLI v2.

    If using the BOSH CLI v1, re-add the director_uuid to the manifest:

    1. Run bosh status --uuid and record the director_uuid value from the output.

    2. Edit the manifest and add the director_uuid: DIRECTOR-UUID from the last step at the top of the manifest.

    For more, see Deployment Identification in the BOSH docs.
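A sketch of the two steps above; the UUID stands in for real `bosh status --uuid` output, and the one-line manifest is a placeholder:

```shell
# Prepend director_uuid to a BOSH v1 manifest.
uuid='1d774323-de9e-11e7-be01-42010a001014'   # stand-in for: bosh status --uuid
printf 'name: my-dep\n' > manifest.yml        # placeholder manifest

printf 'director_uuid: %s\n' "$uuid" | cat - manifest.yml > manifest.tmp
mv manifest.tmp manifest.yml

head -1 manifest.yml
```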

    Large BOSH Queue

    On-demand service brokers add tasks to the BOSH request queue, which can back up and cause delay under heavy loads. An app developer who requests a new MySQL for PCF instance sees create in progress in the Cloud Foundry Command Line Interface (cf CLI) until BOSH processes the queued request.

    Ops Manager currently deploys two BOSH workers to process its queue. Future versions of Ops Manager will let users configure the number of BOSH workers.

    Configuration

    Service Instances in Failing State

    You may have configured a VM or disk type on the tile's plan page in Ops Manager that is too small for the MySQL for PCF service instance to start. See the tile-specific guidance on resource requirements.

    Authentication

    UAA Changes

    If you have rotated any UAA user credentials, you may see authentication issues in the service broker logs.

    To resolve this, redeploy the MySQL for PCF tile in Ops Manager. This provides the broker with the latest configuration.

    Note: You must ensure that any changes to UAA credentials are reflected in the Ops Manager credentials tab of the Elastic Runtime tile.

    Networking

    Common issues include:

    1. Network latency when connecting to the MySQL for PCF service instance to create or delete a binding.
      • Solution: Try again or improve network performance
    2. Network firewall rules are blocking connections from the MySQL for PCF service broker to the service instance.
      • Solution: Open the MySQL for PCF tile in Ops Manager and check the two networks configured in the Networks pane. Ensure that these networks allow access to each other.
    3. Network firewall rules are blocking connections from the service network to the BOSH director network.
      • Solution: Ensure that service instances can access the Director so that the BOSH agents can report in.
    4. Apps cannot access the service network.
      • Solution: Configure Cloud Foundry application security groups to allow runtime access to the service network.
    5. Problems accessing BOSH’s UAA or the BOSH director.
      • Solution: Follow network troubleshooting and check that the BOSH director is online.

    Validate Service Broker Connectivity to Service Instances

    1. To validate connectivity, use bosh2 ssh to reach the MySQL for PCF service broker:

      • With BOSH CLI v2: Target the deployment, and reach the service instance.
      • With BOSH CLI v1: Download the broker manifest and target the deployment, then try to reach the service instance.
    2. If no BOSH task-id appears in the error message, search the broker log for the broker-request-id from the error message.

    Validate App Access to Service Instance

    Use cf ssh to access to the app container, then try connecting to the MySQL for PCF service instance using the binding included in the VCAP_SERVICES environment variable.
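A sketch of that check: inside the app container, the binding's hostname and port can be pulled from VCAP_SERVICES and probed. The JSON below is an invented stand-in for the real environment variable, and the p.mysql key is an assumption about the service label:

```shell
# Parse connection details out of a (simulated) VCAP_SERVICES value.
VCAP_SERVICES='{"p.mysql":[{"credentials":{"hostname":"10.0.8.5","port":3306}}]}'

host=$(echo "$VCAP_SERVICES" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')
port=$(echo "$VCAP_SERVICES" | sed -n 's/.*"port":\([0-9]*\).*/\1/p')
echo "$host:$port"
# From inside the container, test the connection, for example:
#   nc -zv "$host" "$port"
```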

    Quotas

    Plan Quota Issues

    If developers report errors such as:

    Message: Service broker error: The quota for this service plan has been exceeded. 
    Please contact your Operator for help.
    
    1. Check your current plan quota.
    2. To increase the plan quota:
      1. Log into Ops Manager.
      2. Reconfigure the quota on the plan page.
      3. Deploy the tile.
    3. Alternatively, find who is using the plan quota and take the appropriate action.

    Global Quota Issues

    If developers report errors such as:

    Message: Service broker error: The quota for this service has been exceeded. 
    Please contact your Operator for help.
    
    1. Check your current global quota.
    2. To increase the global quota:
      1. Log into Ops Manager.
      2. Reconfigure the quota on the on-demand settings page.
      3. Deploy the tile.
    3. Alternatively, find out who is using the quota and take the appropriate action.

    Failing Jobs and Unhealthy Instances

    To determine whether there is an issue with the MySQL for PCF service deployment, inspect the VMs. To do so, run one of the following commands:

    Ops Manager Version BOSH Command
    1.10 and earlier bosh vms --vitals service-instance_GUID
    1.11 bosh2 -d service-instance_GUID vms --vitals
    1.12 and later bosh -d service-instance_GUID vms --vitals

    For additional information, run one of the following commands:

    Ops Manager Version BOSH Command
    1.10 and earlier bosh instances --ps --vitals
    1.11 bosh2 instances --ps --vitals
    1.12 and later bosh instances --ps --vitals

    If the VM is failing, follow the service-specific information. Any unadvised corrective actions (such as running BOSH restart on a VM) can cause issues in the service instance.

    A failing process or failing VM might come back automatically after a temporary service outage. See VM Process Failure and VM Failure.

    AZ or Region Failure

    Failures at the IaaS level, such as Availability Zone (AZ) or region failures, can interrupt service and require manual restoration. See AZ Failure and Region Failure.

    Techniques for Troubleshooting

    This section provides instructions on interacting with the on-demand service broker and on-demand service instance BOSH deployments, and on performing general maintenance and housekeeping tasks.

    Parse a Cloud Foundry (CF) Error Message

    Failed operations (create, update, bind, unbind, delete) result in an error message. You can retrieve the error message later by running the cf CLI command cf service INSTANCE-NAME.

    $ cf service myservice
    
    Service instance: myservice
    Service: super-db
    Bound apps:
    Tags:
    Plan: dedicated-vm
    Description: Dedicated Instance
    Documentation url:
    Dashboard: 
    
    Last Operation
    Status: create failed
    Message: Instance provisioning failed: There was a problem completing your request. 
         Please contact your operations team providing the following information: 
         service: redis-acceptance, 
         service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
         broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac, 
         task-id: 442, 
         operation: create
    Started: 2017-03-13T10:16:55Z
    Updated: 2017-03-13T10:17:58Z
    

    Use the information in the Message field to debug further. Provide this information to Pivotal Support when filing a ticket.

    The task-id field maps to the BOSH task id. For further information on a failed BOSH task, use the bosh task TASK-ID command in v1 of the BOSH CLI. For v2, use bosh2 task TASK-ID.

    The broker-request-id maps to the portion of the On-Demand Broker log containing the failed step. Access the broker log through your syslog aggregator, or access BOSH logs for the broker by typing bosh logs broker 0. If you have more than one broker instance, repeat this process for each instance.
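A sketch of that log search: the log lines below are invented stand-ins for real broker log content, and the request ID is the one from the earlier example message:

```shell
# Search a broker log for the failed request by broker-request-id.
request_id='63da3a35-24aa-4183-aec6-db8294506bac'
printf '%s\n' \
  '[on-demand-service-broker] [63da3a35-24aa-4183-aec6-db8294506bac] generating manifest' \
  '[on-demand-service-broker] [11111111-2222-3333-4444-555555555555] unrelated request' \
  > broker.log

grep "$request_id" broker.log
```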

    Access Broker and Instance Logs and VMs

    Before following the procedures below, log into the cf CLI and the BOSH CLI.

    Access Broker Logs and VM(s)

    You can access logs using Ops Manager by clicking on the Logs tab in the tile and downloading the broker logs.

    To access logs using the BOSH CLI, do the following:

    1. Identify the on-demand broker (ODB) deployment by running one of the following commands, depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh deployments
      1.11 bosh2 deployments
      1.12 and later bosh deployments
    2. For BOSH CLI v1 only:

      1. Run bosh download manifest ODB-DEPLOYMENT-NAME odb.yml to download the ODB manifest.
      2. Select the ODB deployment using bosh deployment odb.yml.
    3. View VMs in the deployment using one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh instances
      1.11 bosh2 -d DEPLOYMENT-NAME instances
      1.12 and later bosh -d DEPLOYMENT-NAME instances
    4. SSH onto the VM by running one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh ssh service-instance_GUID
      1.11 bosh2 -d service-instance_GUID ssh
      1.12 and later bosh -d service-instance_GUID ssh
    5. Download the broker logs by running one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh logs service-instance_GUID
      1.11 bosh2 -d service-instance_GUID logs
      1.12 and later bosh -d service-instance_GUID logs

    The archive generated by BOSH or Ops Manager includes the following logs:

    Log Name Description
    broker.log Requests to the on-demand broker and the actions the broker performs while orchestrating the request (e.g. generating a manifest and calling BOSH). Start here when troubleshooting.
    broker_ctl.log Control script logs for starting and stopping the on-demand broker.
    post-start.stderr.log Errors that occur during post-start verification.
    post-start.stdout.log Post-start verification.
    drain.stderr.log Errors that occur while running the drain script.

    Access Service Instance Logs and VMs

    1. To target an individual service instance deployment, retrieve the GUID of your service instance with the cf CLI command cf service MY-SERVICE --guid.

    2. For BOSH CLI v1 only:

      1. Run bosh status --uuid to retrieve the BOSH Director GUID.

        Note: “GUID” and “UUID” mean the same thing.

      2. To download your BOSH manifest for the service, run bosh download manifest service-instance_SERVICE-INSTANCE-GUID MANIFEST.yml, using the service instance GUID obtained in step 1 and a filename you want to save the manifest as.

      3. Edit the following line in the service instance manifest that you just saved, to include the current BOSH Director GUID:
            director_uuid: BOSH-DIRECTOR-GUID
        
      4. Run bosh deployment MANIFEST.yml to select the deployment using the Director UUID.
    3. View VMs in the deployment using one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh instances
      1.11 bosh2 -d DEPLOYMENT-NAME instances
      1.12 and later bosh -d DEPLOYMENT-NAME instances
    4. SSH onto a VM by running one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh ssh service-instance_GUID
      1.11 bosh2 -d service-instance_GUID ssh
      1.12 and later bosh -d service-instance_GUID ssh
    5. Download the instance logs by running one of the following commands:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh logs service-instance_GUID
      1.11 bosh2 -d service-instance_GUID logs
      1.12 and later bosh -d service-instance_GUID logs

    Run Service Broker Errands to Manage Brokers and Instances

    From the BOSH CLI, you can run service broker errands that manage the service brokers and perform mass operations on the service instances that the brokers created. The available errands are described in the sections below.

    To run errands:

    1. For BOSH CLI v1 only: Select the broker deployment by running this command:
      bosh deployment BOSH_MANIFEST.yml

    2. Run one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand ERRAND_NAME
      1.11 bosh2 -d DEPLOYMENT_NAME run-errand ERRAND_NAME
      1.12 and later bosh -d DEPLOYMENT_NAME run-errand ERRAND_NAME


      Examples:
      bosh run errand deregister-broker
      bosh2 -d DEPLOYMENT-NAME run-errand deregister-broker

    Register Broker

    The register-broker errand registers the broker with Cloud Foundry and enables access to plans in the service catalog. Run this errand whenever the broker is re-deployed with new catalog metadata to update the Cloud Foundry catalog.

    Plans with disabled service access are not visible to non-admin Cloud Foundry users (including Org Managers and Space Managers). Admin Cloud Foundry users can see all plans including those with disabled service access.

    The errand does the following:

    • Registers the service broker with Cloud Controller.
    • Enables service access for any plans that have the radio button set to enabled in the tile plan page.
    • Disables service access for any plans that have the radio button set to disabled in the tile plan page.
    • Does nothing for any plans that have the radio button set to manual.

    To run the errand, do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running this command:
      bosh deployment BOSH_MANIFEST.yml

    2. Run one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand register-broker
      1.11 bosh2 -d DEPLOYMENT-NAME run-errand register-broker
      1.12 and later bosh -d DEPLOYMENT-NAME run-errand register-broker

    Deregister Broker

    This errand deregisters a broker from Cloud Foundry.

    The errand does the following:

    • Deletes the service broker from Cloud Controller
    • Fails if there are any service instances, with or without bindings

    Use the Delete All Service Instances errand to delete any existing service instances.

    To run the errand, do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running the command:
      bosh deployment BROKER_MANIFEST.yml.

    2. Run one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand deregister-broker
      1.11 bosh2 -d DEPLOYMENT-NAME run-errand deregister-broker
      1.12 and later bosh -d DEPLOYMENT-NAME run-errand deregister-broker

    Upgrade All Service Instances

    If you have made changes to the plan definition or uploaded a new tile into Ops Manager, you may want to upgrade all the MySQL for PCF service instances to the latest software/plan definition.

    The upgrade-all-service-instances errand does the following:

    • Collects all of the service instances the on-demand broker has registered.
    • For each instance the errand serially:
      • Issues an upgrade command to the on-demand broker.
      • Re-generates the service instance manifest based on its latest configuration from the tile.
      • Deploys the new manifest for the service instance.
      • Waits for this operation to complete, then proceeds to the next instance.
    • Adds to a retry list any instances that have ongoing BOSH tasks at the time of upgrade.
    • Retries any instances in the retry list until all are upgraded.

    If any instance fails to upgrade, the errand fails immediately. This prevents systemic problems from spreading to the rest of your service instances. Run the errand by following either of the procedures below.

    To run the errand, you can either select the errand through the Ops Manager UI and have it run when you click Apply Changes, or do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running this command:
      bosh deployment BOSH_MANIFEST.yml

    2. Run one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand upgrade-all-service-instances
      1.11 bosh2 -d DEPLOYMENT-NAME run-errand upgrade-all-service-instances
      1.12 and later bosh -d DEPLOYMENT-NAME run-errand upgrade-all-service-instances

    Delete All Service Instances

    This errand deletes all service instances of your broker’s service offering in every org and space of Cloud Foundry. It uses the Cloud Controller API to do this, and therefore only deletes instances the Cloud Controller knows about. It will not delete orphan BOSH deployments.

    Orphan BOSH deployments don’t correspond to a known service instance. While rare, orphan deployments can occur. Use the orphan-deployments errand to identify them.

    The errand does the following:

    • Unbinds all applications from the service instances.
    • Deletes all service instances sequentially.
    • Checks if any instances have been created while the errand was running.
    • If newly-created instances are detected, the errand fails.

    WARNING: Use extreme caution when running this errand. You should only use it when you want to totally destroy all of the on-demand service instances in an environment.

    To run the errand, do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running the command:
      bosh deployment BROKER_MANIFEST.yml.

    2. Run one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand delete-all-service-instances
      1.11 bosh2 -d DEPLOYMENT-NAME run-errand delete-all-service-instances
      1.12 and later bosh -d DEPLOYMENT-NAME run-errand delete-all-service-instances

    Detect Orphaned Service Instances

    A service instance is defined as ‘orphaned’ when the BOSH deployment for the instance is still running, but the service is no longer registered in Cloud Foundry.

    The orphan-deployments errand collates a list of service deployments that have no matching service instances in Cloud Foundry and returns the list to the operator. It is then up to the operator to remove the orphaned BOSH deployments.

    To run the errand, do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running the command:

      bosh deployment BROKER_MANIFEST.yml

    2. Run the errand using one of the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Command
      1.10 and earlier bosh run errand orphan-deployments
      1.11 bosh2 -d DEPLOYMENT-NAME run-errand orphan-deployments
      1.12 and later bosh -d DEPLOYMENT-NAME run-errand orphan-deployments

    If orphan deployments exist, the errand script will:

    • Exit with exit code 10
    • Output a list of deployment names under a [stdout] header
    • Provide a detailed error message under a [stderr] header

    For example:

    [stdout]
    [{"deployment_name":"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}]

    [stderr] Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.

    Errand 'orphan-deployments' completed with error (exit code 10)

    These details will also be available through the BOSH /tasks/ API endpoint for use in scripting:

    $ curl 'https://bosh-user:bosh-password@bosh-url:25555/tasks/task-id/output?type=result' | jq .
    {
      "exit_code": 10,
      "stdout": "[{\"deployment_name\":\"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b\"}]\n",
      "stderr": "Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.\n",
      "logs": {
        "blobstore_id": "d830c4bf-8086-4bc2-8c1d-54d3a3c6d88d"
      }
    }
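    For scripting, the orphan deployment names can be pulled out of that task result. The following is a hedged sketch that inlines the task result above as sample data; in practice, pipe the output of the `curl` command in instead.

```shell
# Sketch: extract orphan deployment names from the /tasks/ API result.
# The JSON here is sample data standing in for the curl output above.
result='{"exit_code":10,"stdout":"[{\"deployment_name\":\"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b\"}]\n","stderr":"see errand output"}'

# The orphan list is itself JSON, embedded in the "stdout" field.
names=$(printf '%s' "$result" | python3 -c '
import json, sys
task = json.load(sys.stdin)            # outer task result object
orphans = json.loads(task["stdout"])   # inner list of orphan deployments
for o in orphans:
    print(o["deployment_name"])
')
echo "$names"
```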
    

    If no orphan deployments exist, the errand script will:

    • Exit with exit code 0
    • Output an empty list of deployments under a [stdout] header
    • Output None under a [stderr] header

    [stdout]
    []

    [stderr]
    None

    Errand 'orphan-deployments' completed successfully (exit code 0)
    

    If the errand encounters an error while running, it will:

    • Exit with exit code 1
    • Output nothing under the [stdout] header
    • Output any error messages under the [stderr] header

    To clean up orphaned instances, run the following command for each orphaned deployment:

    WARNING: Running this command may leave IaaS resources in an unusable state.

    Ops Manager Version BOSH Command
    1.10 and earlier bosh delete deployment service-instance_SERVICE-INSTANCE-GUID
    1.11 bosh2 -d service-instance_SERVICE-INSTANCE-GUID delete-deployment
    1.12 and later bosh -d service-instance_SERVICE-INSTANCE-GUID delete-deployment
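    The cleanup can be scripted as a loop over the names printed by the orphan-deployments errand. The following dry-run sketch stubs `bosh` with a function that only prints the BOSH CLI v2 command it would run; remove the stub once you have verified that each deployment is truly orphaned and its data is safe to delete.

```shell
# Dry-run sketch: build the delete command for each orphan deployment.
bosh() { echo "bosh $*"; }   # stub for the dry run; remove to run for real

# Sample orphan list, as printed by the orphan-deployments errand.
orphans="service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"

out=$(for dep in $orphans; do bosh -d "$dep" delete-deployment; done)
echo "$out"
```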

    Retrieve Admin and Read-Only Admin Credentials for a Service Instance

    To retrieve the admin and read-only admin credentials for a service instance, perform the following steps:

    1. Identify the service deployment by GUID.
    2. Log into BOSH.
    3. For BOSH CLI v1 only: Download the manifest for the service instance, specifying its GUID.

      Skip this step if you are using the BOSH CLI v2. The BOSH CLI v2 cannot download the manifest; open the manifest in a text editor instead.

    4. Look in the manifest for the admin and read-only admin (roadmin) credentials.
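    A quick way to locate those credentials is to grep the manifest. The snippet below is a hedged sketch: the manifest fragment and the property names admin_password and roadmin_password are illustrative stand-ins, so check the actual keys in the manifest you downloaded.

```shell
# Hypothetical sketch: find admin/roadmin credentials in a manifest.
# The manifest fragment and key names below are illustrative only.
cat > /tmp/sample-manifest.yml <<'EOF'
properties:
  admin_password: example-admin-secret
  roadmin_password: example-roadmin-secret
EOF

creds=$(grep -E 'admin_password|roadmin_password' /tmp/sample-manifest.yml)
echo "$creds"
```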

    Reinstall a Tile

    To reinstall the MySQL for PCF v2.x tile, see the Reinstalling MySQL for Pivotal Cloud Foundry version 2 and above Knowledge Base article.

    View Resource Saturation and Scaling

    Viewing Statistics

    To view usage statistics for any service do the following:

    1. For BOSH CLI v1 only: Select the broker deployment by running this command:
      bosh deployment BOSH_MANIFEST.yml

    2. Run the following commands depending on your Ops Manager version:

      Ops Manager Version BOSH Commands
      1.10 and earlier Run the BOSH CLI v1 command bosh vms --vitals. To view process-level information, run bosh instances --ps.
      1.11 Run the BOSH CLI v2 command bosh2 -d DEPLOYMENT-NAME vms --vitals. To view process-level information, run bosh2 -d DEPLOYMENT-NAME instances --ps.
      1.12 and later Run the BOSH CLI v2 command bosh -d DEPLOYMENT-NAME vms --vitals. To view process-level information, run bosh -d DEPLOYMENT-NAME instances --ps.

    Identify Service Instance Owner

    If you want to identify which apps are using a specific service instance, starting from the BOSH deployment name, perform the following steps:

    1. Take the deployment name and strip the service-instance_ prefix, leaving the GUID.
    2. Log in to CF as an admin.
    3. Obtain a list of all service bindings by running the following: cf curl /v2/service_instances/GUID/service_bindings
    4. The output of the above command lists resources, each referencing a service binding that contains an APP-URL. To find the name, org, and space for each app, run the following:
      1. cf curl APP-URL and record the app name under entity.name.
      2. cf curl SPACE-URL, using the entity.space_url from the previous curl. Record the space name under entity.name.
      3. cf curl ORGANIZATION-URL, using the entity.organization_url from the previous curl. Record the organization name under entity.name.

    Note: When running cf curl, ensure that you query all pages, because responses are limited to a certain number of bindings per page (the default is 50). To fetch the next page, curl the value under next_url.
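    The paging note above can be sketched as a loop that follows next_url until it is null. Here cf_curl is a stub returning two canned pages so the sketch runs without a real Cloud Foundry; replace it with the real `cf curl` command in practice.

```shell
# Hedged sketch of paging through cf curl results via next_url.
# cf_curl is a stub with two canned pages standing in for `cf curl`.
cf_curl() {
  case "$1" in
    *page=2*) echo '{"next_url":null,"resources":[{"entity":{"app_url":"/v2/apps/app-2"}}]}' ;;
    *)        echo '{"next_url":"/v2/service_instances/GUID/service_bindings?page=2","resources":[{"entity":{"app_url":"/v2/apps/app-1"}}]}' ;;
  esac
}

apps=""
url="/v2/service_instances/GUID/service_bindings"
while [ -n "$url" ]; do
  page=$(cf_curl "$url")
  # Collect the app URL from each binding on this page.
  apps="$apps $(printf '%s' "$page" | python3 -c '
import json, sys
for r in json.load(sys.stdin)["resources"]:
    print(r["entity"]["app_url"])
')"
  # next_url is null on the last page; map that to an empty string.
  url=$(printf '%s' "$page" | python3 -c 'import json, sys; print(json.load(sys.stdin)["next_url"] or "")')
done
echo "apps:$apps"
```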

    Monitor Quota Saturation and Service Instance Count

    Quota saturation and total number of service instances are available through ODB metrics emitted to Loggregator. The metric names are shown below:

    Metric Name Description
    on-demand-broker/SERVICE-NAME-MARKETPLACE/quota_remaining global quota remaining for all instances across all plans
    on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/quota_remaining quota remaining for a particular plan
    on-demand-broker/SERVICE-NAME-MARKETPLACE/total_instances total instances created across all plans
    on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/total_instances total instances created for a given plan

    Note: Quota metrics are not emitted if no quota has been set.

    Knowledge Base (Community)

    Find the answer to your question and browse product discussions and solutions by searching the Pivotal Knowledge Base.

    File a Support Ticket

    You can file a support ticket here. Be sure to provide the error message from cf service YOUR-SERVICE-INSTANCE.

    To help expedite troubleshooting, also provide your service broker logs, your service instance logs, and the BOSH task output if your cf service YOUR-SERVICE-INSTANCE output includes a task-id.