
Deploying an On-Demand Broker

Operator Responsibilities

The operator is responsible for performing the following:

  • Configure the BOSH director
  • Upload the required releases for the broker deployment and service instance deployments
  • Write a broker manifest
    • See v2-style manifest docs if unfamiliar with writing BOSH v2 manifests
    • Core broker configuration
    • Service catalog and plan composition
  • Manage brokers
  • Documentation for the operator

For a list of deliverables provided by the Service Author, see Required Deliverables.

For an example manifest for a Redis service, see redis-example-service-adapter-release.

For an example manifest for a Kafka service, see kafka-example-service-adapter-release.

Set Up Your BOSH Director

Dependencies for the On-Demand Broker:

  • BOSH director v257 or later
  • Cloud Foundry v238 or later

Note: Service instance lifecycle errands require BOSH director v261 or later.

SSL certificates

If ODB is configured to communicate with BOSH on the director’s public IP, you may be using a self-signed certificate unless you have a domain for your BOSH director. By default, ODB does not ignore TLS certificate validation errors. You have two options for configuring certificate-based authentication between the BOSH director and the ODB:

  1. Add the BOSH director’s root certificate to ODB’s trusted pool in the ODB manifest:

    bosh:
      root_ca_cert: <root-ca-cert>
    
  2. Use BOSH’s trusted_certs feature to add a self-signed CA certificate to each VM BOSH deploys. For more details on how to generate and use self-signed certificates for BOSH director and UAA, see Director SSL Certificate Configuration.

You can also configure a separate root CA certificate to be used when ODB communicates with the Cloud Foundry API (Cloud Controller). This is configured in a similar way; see the sketch below and the manifest snippets in Core Broker Configuration for details.
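For example, both certificates sit under the broker job’s properties; a minimal sketch with placeholder values:

bosh:
  root_ca_cert: <bosh-root-ca-cert>
cf:
  root_ca_cert: <cloud-controller-root-ca-cert>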

BOSH teams

BOSH has a teams feature that allows you to control which BOSH operations are available to different clients. We strongly recommend using it to ensure that your on-demand service broker client can only modify deployments it created. For example, create the client with uaac like this:

uaac client add <client-id> \
  --secret <client-secret> \
  --authorized_grant_types "refresh_token password client_credentials" \
  --authorities "bosh.teams.<team-name>.admin"

Then, when you configure the broker’s BOSH authentication, use this client ID and secret. The broker will then only be able to perform BOSH operations on deployments it has created itself.
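For example, a sketch of the broker’s BOSH authentication block using this client (values are placeholders):

bosh:
  url: <director url>
  authentication:
    uaa:
      url: <BOSH UAA URL>
      client_id: <client-id>         # the client created with uaac above
      client_secret: <client-secret>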

For more details on how to set up and use BOSH teams, see Director teams and permissions configuration.

For more details on securing how ODB uses BOSH, see Security.

Cloud Controller

ODB uses the Cloud Controller as its source of truth about service offerings, plans, and instances. To reach the Cloud Controller, ODB must be configured with credentials. These can be either client or user credentials:

  • Client credentials: as of Cloud Foundry v238, the UAA client must have authority cloud_controller.admin.
  • User credentials: a Cloud Foundry admin user, i.e. a member of at least the scim.read and cloud_controller.admin groups.
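If you opt for client credentials, a minimal sketch of creating such a client with uaac (the client name cc-admin-client is illustrative):

uaac client add cc-admin-client \
  --secret <client-secret> \
  --authorized_grant_types client_credentials \
  --authorities cloud_controller.admin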

Detailed broker configuration is covered below.

Upload Required Releases

Upload the following releases to the BOSH director:

  • on-demand-service-broker
  • your service adapter
  • your service release(s)
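With the BOSH CLI commands used elsewhere in this document, uploading might look like the following (file names are placeholders):

bosh upload release on-demand-service-broker-<version>.tgz
bosh upload release <service-adapter-release>-<version>.tgz
bosh upload release <service-release>-<version>.tgz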

Write a Broker Manifest

Core Broker Configuration

Your manifest should contain one non-errand instance group that co-locates both:

  • the broker job from on-demand-service-broker
  • your service adapter job from your service adapter release

The broker is stateless and does not need a persistent disk. The VM type can be quite small: a single CPU and 1 GB of memory should be sufficient in most cases.

An example snippet is shown below:

instance_groups:
  - name: broker # this can be anything
    instances: 1
    vm_type: <vm type>
    stemcell: <stemcell>
    networks:
      - name: <network>
    jobs:
      - name: <service adapter job name>
        release: <service adapter release>
      - name: broker
        release: on-demand-service-broker
        properties:
          # choose a port and basic auth credentials for the broker
          port: <broker port>
          username: <broker username>
          password: <broker password>
          disable_ssl_cert_verification: <true|false> # optional, defaults to false. This should NOT be used in production
          cf:
            url: <CF API URL>
            root_ca_cert: <ca cert for cloud controller> # optional, see SSL certificates
            authentication: # either client_credentials or user_credentials, not both as shown
              url: <CF UAA URL>
              client_credentials:
                client_id: <UAA client id with cloud_controller.admin authority and client_credentials in the authorized_grant_type>
                secret: <UAA client secret>
              user_credentials:
                username: <CF admin username in the cloud_controller.admin and scim.read groups>
                password: <CF admin password>
          bosh:
            url: <director url>
            root_ca_cert: <ca cert for bosh director and associated UAA> # optional, see SSL certificates
            authentication: # either basic or uaa, not both as shown
              basic:
                username: <bosh username>
                password: <bosh password>
              uaa:
                url: <BOSH UAA URL> # often on the same host as the director, on a different port
                client_id: <bosh client id>
                client_secret: <bosh client secret>
          service_adapter:
            path: <path to service adapter binary> # optional, provided by the Service Author. Defaults to /var/vcap/packages/odb-service-adapter/bin/service-adapter
          features: # optional
            cf_user_triggered_upgrades: <true|false> # optional, defaults to false.

          # There are more broker properties that are discussed below

This snippet uses the BOSH v2 syntax, making use of the global cloud config and job-level properties.

Please note that the disable_ssl_cert_verification option is dangerous and should not be used in production.

Service Catalog and Plan Composition

The operator must:

  1. Supply each release job specified by the Service Author exactly once. You can include releases that provide many jobs, as long as each required job is provided by exactly one release.
  2. Supply one stemcell that is used on every VM in the service deployments. ODB does not currently support service instance deployments that use different stemcells for different instance groups.
  3. Use exact versions for releases and stemcells. The use of latest and floating stemcells is not supported.
  4. Create Cloud Foundry service metadata in the catalog for the service offering. This metadata will be aggregated in the Cloud Foundry marketplace and displayed in Apps Manager and the cf CLI.
  5. Compose plans. In ODB, Service Authors do not define plans; instead, they expose plan properties. The operator’s role is to compose combinations of these properties, along with IAAS resources and catalog metadata, into as many plans as they like.

    1. Create Cloud Foundry service plan metadata in the service catalog for each plan.
    2. Provide resource mapping for each instance group specified by the Service Author for each plan. The resource values must correspond to valid resource definitions in the BOSH director’s global cloud config. In some cases Service Authors will recommend resource configuration: for example, in single-node Redis deployments an instance count greater than one does not make sense, but the operator can configure the deployment to span multiple availability zones using the BOSH multi-AZ feature, as in the Kafka multi-AZ plan. In some cases Service Authors will provide errands for the service release; you can add an instance group of type errand by setting the lifecycle field, as with the smoke_tests errand for the Kafka deployment.
    3. Provide values for plan properties. Plan properties are key-value pairs defined by the Service Author. Some examples include a boolean to enable disk persistence for Redis, and a list of strings representing RabbitMQ plugins to load. The Service Author should document whether these properties are mandatory or optional, whether the use of one property precludes the use of another, and whether certain properties affect recommended instance group to resource mappings.

    Properties can also be specified at the service offering level, where they are applied to every plan. If a global property conflicts with a plan-level property, the plan-level property takes precedence.

    4. Provide an optional update block for each plan. You may require plan-specific configuration for BOSH’s update instance operation. The ODB passes the plan-specific update block to the service adapter. Plan-specific update blocks should have the same structure as the update block in a BOSH manifest. The Service Author can define a default update block to be used when a plan-specific update block is not provided, and should document whether the service adapter supports update configuration in the manifest.

Add the snippet below to your broker job properties section:

service_deployment:
  releases:
    - name: <service-release>
      version: <service-release-version> # Exact release version
      jobs: [<release-jobs-needed-for-deployment-and-lifecycle-errands>] # Service Author will specify list of jobs required
  stemcell: # every instance group in the service deployment has the same stemcell
    os: <service-stemcell>
    version: <service-stemcell-version> # Exact stemcell version
service_catalog:
  id: <CF marketplace ID>
  service_name: <CF marketplace service offering name>
  service_description: <CF marketplace description>
  bindable: <true|false>
  plan_updatable: <true|false> # optional
  tags: [<tags>] # optional
  requires: [<required permissions>] # optional
  dashboard_client: # optional
    id: <dashboard OAuth client ID>
    secret: <dashboard OAuth client secret>
    redirect_uri: <dashboard OAuth redirect URI>
  metadata: # optional
    display_name: <display name>
    image_url: <image url>
    long_description: <long description>
    provider_display_name: <provider display name>
    documentation_url: <documentation url>
    support_url: <support url>
  global_properties: {} # optional, applied to every plan.
  global_quotas: # optional
    service_instance_limit: <instance limit> # the maximum number of service instances across all plans
  plans:
    - name: <CF marketplace plan name>
      plan_id: <CF marketplace plan id>
      description: <CF marketplace description>
      cf_service_access: <enable|disable|manual> # optional, defaults to enable.
      bindable: <true|false> # optional. If specified, this takes precedence over the bindable attribute of the service
      metadata: # optional
        display_name: <display name>
        bullets: [<bullet1>, <bullet2>]
        costs:
          - amount:
              <currency code (string)>: <currency amount (float)>
            unit: <frequency of cost>
      quotas: # optional
        service_instance_limit: <instance limit> # the maximum number of service instances for this plan
      instance_groups: # resource mapping for the instance groups defined by the Service Author
        - name: <service author provided instance group name>
          vm_type: <vm type>
          vm_extensions: [<vm extensions>] # optional
          instances: <instance count>
          networks: [<network>]
          azs: [<az>]
          persistent_disk_type: <disk> # optional
        - name: <service author provided lifecycle errand name> # optional
          lifecycle: errand
          vm_type: <vm type>
          instances: <instance count>
          networks: [<network>]
          azs: [<az>]
      properties: {} # valid property key-value pairs are defined by the Service Author
      update: # optional
        canaries: 1 # required
        max_in_flight: 2  # required
        canary_watch_time: 1000-30000 # required
        update_watch_time: 1000-30000 # required
        serial: true # optional
      lifecycle_errands: # optional
        post_deploy: <errand name> # optional
        pre_delete: <errand name> # optional

Route Registration

You can optionally colocate the route_registrar job from the routing release with the on-demand-service-broker in order to:

  1. load balance multiple instances of ODB using Cloud Foundry’s router
  2. access ODB from the public internet

To do this, upload the routing release to your BOSH director and configure the route_registrar job properties for the release version you uploaded.

Remember to set the broker_uri property in the register-broker errand if you configure a route.
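As a sketch, colocating route_registrar on the broker instance group might look like this; exact property names vary between routing release versions, so consult the route_registrar job spec for the release you upload:

jobs:
  - name: route_registrar
    release: routing
    properties:
      route_registrar:
        routes:
          - name: <route name>
            port: <broker port>
            registration_interval: 20s
            uris: [<broker URI>]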

Service Instance Quotas

ODB offers global and plan-level service quotas to set service instance limits.

Plan quotas restrict the number of service instances for a given plan, while the global limit restricts the number of service instances across all plans.

When creating a service instance, ODB first checks the global service instance limit. If it has not been reached, ODB then checks the plan’s service instance limit.

Note: These limits do not include orphans. See listing and deleting orphans.
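For example, the two levels of quota combine in the service_catalog as follows (the limits shown are illustrative):

service_catalog:
  global_quotas:
    service_instance_limit: 50 # across all plans
  plans:
    - name: <CF marketplace plan name>
      quotas:
        service_instance_limit: 10 # for this plan only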

Broker Metrics

The ODB BOSH release contains a metrics job that can be used to emit broker metrics when colocated with the service-metrics release. You must also include the loggregator release for this to work.

Add the following jobs to the broker instance group:

- name: service-metrics
  release: service-metrics
  properties:
    service_metrics:
      execution_interval_seconds: <interval between successive metrics collections>
      origin: <origin tag for metrics>
      monit_dependencies: [broker] # hardcode this
- name: metron_agent
  release: loggregator
  properties:
    metron_agent:
      deployment: <deployment tag for metrics>
    metron_endpoint:
      shared_secret: <metron secret>
    loggregator:
      etcd:
        machines: [<CF etcd IPs>]
    loggregator_endpoint:
      shared_secret: <loggregator secret>
- name: service-metrics-adapter
  release: <ODB release>

An example of how the service metrics can be configured for an on-demand-broker deployment can be seen in the kafka-example-service-adapter-release manifest.

We have tested this example configuration with loggregator v58 and service-metrics v1.5.0.

Please see the service metrics docs for more details on service metrics.

Service Instance Lifecycle Errands

Note: This feature requires BOSH director v261 or later.

Service instance lifecycle errands allow additional short-lived jobs to run as part of a service instance deployment. A deployment is considered successful only if the lifecycle errand exits successfully.

The service adapter must offer this errand as part of the service instance deployment.

ODB supports the following lifecycle errands:

  • post-deploy - Runs after the creation or updating of a service instance. An example use case is running a health check to ensure the service instance is functioning. See the workflow here
  • pre-delete - Runs before the deletion of a service instance. An example use case is cleaning up data prior to a service shutdown. See the workflow here

Service Instance lifecycle errands are configured on a per-plan basis. To enable lifecycle errands, the errand job must be:

  • Added to the service instance deployment.
  • Added to the plan’s instance groups.
  • Set in the plan’s lifecycle errands configuration.

An example manifest snippet configuring a post-deploy lifecycle errand for service instances:

service_deployment:
  releases:
    - name: <service-release>
      version: <service-release-version>
      jobs: [service_release_job, lifecycle_errand_job] # Errand job from the service release
# ... other configuration
service_catalog:
  plans:
      - name: <CF marketplace plan name>
        instance_groups:
          - name: <service author provided instance group name>
            # ... other configuration
          - name: <lifecycle_errand_job> # Errand job added to deployment
            lifecycle: errand
            vm_type: <vm type>
            instances: <instance count>
            networks: [<network>]
            azs: [<az>]
        lifecycle_errands: # optional
          post_deploy: <lifecycle_errand_job> # Assign errand to run on post-deploy
        # ... other configuration

Please note that changing a plan’s lifecycle errands configuration while an existing deployment is in progress is not supported; the lifecycle errands will not be run.

Broker Management

Management tasks on the broker are performed with BOSH errands.

Register Broker

This errand registers the broker with Cloud Foundry and enables access to plans in the service catalog. The errand should be run whenever the broker is re-deployed with new catalog metadata to update the Cloud Foundry catalog.

Please note that if the broker_uri property is set, you should also register a route for your broker with Cloud Foundry. See the Route Registration section for more details.

When enable_service_access: false is set, the errand will disable service access for all plans.

Individual plans can be enabled via the optional cf_service_access property. This property accepts three values: enable, disable, manual.

  • cf_service_access: enable: the register-broker errand will enable access for that plan
  • cf_service_access: disable: the register-broker errand will disable access for that plan
  • cf_service_access: manual: the register-broker errand will perform no action

If the cf_service_access property is not set at all, the register-broker errand will enable access for that plan.

Plans with disabled service access will not be visible to non-admin Cloud Foundry users (including Org Managers and Space Managers). Admin Cloud Foundry users can see all plans including those with disabled service access.

Add the following instance group to your manifest:

- name: register-broker
  lifecycle: errand
  instances: 1
  jobs:
    - name: register-broker
      release: <odb-release-name>
      properties:
        broker_name: <broker-name>
        broker_uri: <broker URI, only required when a route has been registered> # optional
        disable_ssl_cert_verification: <true|false> # defaults to false
        enable_service_access: <true|false> # defaults to true
        cf:
          api_url: <cf-api-url>
          admin_username: <cf-api-admin-username>
          admin_password: <cf-api-admin-password>
  vm_type: <vm-type>
  stemcell: <stemcell>
  networks: [{name: <network>}]
  azs: [<az>]

Run the errand with bosh run errand register-broker.

Deregister Broker

This errand deregisters a broker from Cloud Foundry. It requires that there are no existing service instances.

Add the following instance group to your manifest:

- name: deregister-broker
  lifecycle: errand
  instances: 1
  jobs:
    - name: deregister-broker
      release: <odb-release-name>
      properties:
        broker_name: <broker-name>
        disable_ssl_cert_verification: <true|false> # defaults to false
        cf:
          api_url: <cf-api-url>
          admin_username: <cf-api-admin-username>
          admin_password: <cf-api-admin-password>
  vm_type: <vm-type>
  stemcell: <stemcell>
  networks: [{name: <service-network>}]
  azs: [<az>]

Run the errand with bosh run errand deregister-broker.

Delete All Service Instances

This errand deletes all service instances of your broker’s service offering in every org and space of Cloud Foundry. It uses the Cloud Controller API to do this, and therefore only deletes instances the Cloud Controller knows about. It will not delete orphan BOSH deployments: those that do not correspond to a known service instance. Orphans should never occur, but in practice they might; use the orphan-deployments errand to identify them.

Use this errand only with extreme caution, when you want to totally destroy all the on-demand service instances in an environment.

Add the following instance group to your manifest:

- name: delete-all-service-instances
  lifecycle: errand
  instances: 1
  jobs:
    - name: delete-all-service-instances
      release: <odb-release-name>
      properties:
        timeout_minutes: <time to wait for all instances to be deleted> # defaults to 60
        disable_ssl_cert_verification: <true|false> # defaults to false
        cf:
          api_url: <cf-api-url>
          admin_username: <cf-api-admin-username>
          admin_password: <cf-api-admin-password>
  vm_type: <vm-type>
  stemcell: <stemcell>
  networks: [{name: <network>}]
  azs: [<az>]

Run the errand with bosh run errand delete-all-service-instances.

Delete Orphaned Deployments

The deployment for a service instance is defined as ‘orphaned’ when the BOSH deployment is still running but the service instance is no longer registered in Cloud Foundry.

The orphan-deployments errand collates a list of service deployments that have no matching service instances in Cloud Foundry and returns the list to the operator. It is then up to the operator to remove the orphaned BOSH deployments.

Run the errand with bosh run errand orphan-deployments.

If orphan deployments are present, the errand will output a list of deployment names:

[stdout]
[{"deployment_name":"service-instance_aoeu39fgn-8125-05h2-9023-9vbxf7676f3"}]

[stderr]
None

Errand 'orphan-deployments' completed successfully (exit code 0)

Warning: Deleting the BOSH deployment will destroy the VM; any data present will be lost.

To delete the orphan deployment, run bosh delete deployment service-instance_aoeu39fgn-8125-05h2-9023-9vbxf7676f3.

Updates

Update Broker

To update the core broker configuration:

  • make any necessary changes to the core broker configuration in the broker manifest
  • deploy the broker

Update Service Offering

To update the service offering:

  • make any changes to properties in the service_catalog of the broker manifest. For example, update the service metadata.
  • make any changes to properties in the service_deployment of the broker manifest. For example, update the jobs used from a service release.
  • deploy the broker

Warning: Once the broker has been registered with Cloud Foundry, do not change the service_id or the plan_id for any plan. When the ODB starts, it checks that all existing service instances in Cloud Foundry have a plan_id that exists in the service_catalog.

After changing the service_catalog, you should run the register-broker errand to update the Cloud Foundry marketplace.

When plans are updated in the service_catalog, upgrades will need to be applied to existing service instances. See upgrading individual service instances and upgrading all service instances.

Disable Service Plans

Access to a service plan can be disabled by using the Cloud Foundry CLI:

$ cf disable-service-access <service-name-from-catalog> -p <plan-name>

Also, when a plan has the property cf_service_access: disable in the service_catalog, the register-broker errand will disable service access to that plan.

Remove Service Plans

A service plan can be removed if there are no instances using the plan. To remove a plan, remove it from the broker manifest and update the Cloud Foundry marketplace by running the register-broker errand.

Warning: If any service instances remain on a plan that has been removed from the catalog then the ODB will fail to start.

Upgrades

Upgrade the Broker

The broker is upgraded in a similar manner to all BOSH releases:

  • upload the new version of the on-demand-service-broker BOSH release to the BOSH Director
  • make any necessary changes to the core broker configuration in the broker manifest
  • deploy the broker

Upgrade Service Offering

The service offering consists of:

  • service catalog
  • service adapter BOSH release
  • service BOSH release(s)
  • service stemcell

To upgrade a service offering:

  • make any changes to the service catalog in the broker manifest
  • upload any new service BOSH release(s) to the BOSH Director
  • make any changes to service release(s) in the broker manifest
  • upload any new service stemcell to the BOSH Director
  • make any changes to the service stemcell in the service_deployment broker manifest
  • deploy the broker

Any new service instances will be created using the latest service offering.

To upgrade all existing instances you can run the upgrade-all-service-instances errand.

Warning: Until a service instance has been upgraded, cf update-service operations will be blocked and an error will be shown. See updating the service offering.

Upgrade an Individual Service Instance

By default, Cloud Foundry users cannot upgrade their service instances to the latest service offering.

Cloud Foundry users cannot set parameters or change plans until the service instance has been upgraded by an operator:

$ cf update-service my-redis -c '{"maxclients": 10000}'
Updating service instance my-redis as admin...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Service cannot be updated at this time, please try again later or contact your operator for more information.

Operators should run the upgrade-all-service-instances errand to upgrade all service instances to the latest service offering.

Enabling CF User Triggered Upgrades

Set the cf_user_triggered_upgrades feature to true in the core broker configuration to allow Cloud Foundry users to upgrade their service instances to the latest service offering.

Then, users can upgrade with this command: cf update-service SERVICE_NAME -c '{"apply-changes": true}'.

Cloud Foundry users cannot set any other parameters, or change plan until the service instance has been upgraded:

$ cf update-service my-redis -c '{"maxclients": 10000}'
Updating service instance my-redis as admin...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: There are pending changes to your service instance, you must first run cf update-service <service-name> -c '{"apply-changes": true}', no other arbitrary parameters or plan changes are allowed.

Upgrade All Service Instances

To upgrade all existing service instances after the service offering has been updated or upgraded:

  1. Add the following instance group to your broker manifest:

    - name: upgrade-all-service-instances
      lifecycle: errand
      instances: 1
      jobs:
        - name: upgrade-all-service-instances
          release: <odb-release-name>
      vm_type: <vm-type>
      stemcell: <stemcell>
      networks: [{name: <network>}]
      azs: [<az>]
    
  2. Deploy the broker manifest.

  3. Run the errand with bosh run errand upgrade-all-service-instances.

Note: the upgrade-all-service-instances errand will trigger any service instance lifecycle errands configured for the broker.

Security

BOSH API Endpoints

The ODB accesses the following BOSH API endpoints during the service instance lifecycle:

API endpoint | Examples of usage in the ODB
POST /deployments | create or update a service instance
POST /deployments/<deployment_name>/errands/<errand_name>/runs | register or deregister the on-demand broker with the Cloud Controller; run smoke tests
GET /deployments/<deployment_name> | passed as an argument to the service adapter for generate-manifest and create-binding
GET /deployments/<deployment_name>/vms?format=full | passed as an argument to the service adapter for create-binding
DELETE /deployments/<deployment_name> | delete a service instance
GET /tasks/<task_ID>/output?type=result | check a task was successful (i.e. the exit code was zero); get the list of VMs
GET /tasks/<task_ID> | poll the BOSH director until a task finishes, e.g. create, update, or delete a deployment
GET /tasks?deployment=<deployment_name> | determine the last operation status and message for a service instance, e.g. 'create in progress'; used when creating, updating, and deleting service instances

BOSH UAA Permissions

The actions that the ODB needs to be able to perform are:

Modify:

  • bosh deploy
  • bosh delete deployment
  • bosh run errand

Read only:

  • bosh deployments
  • bosh vms
  • bosh tasks

The minimum UAA authority required by the BOSH Director to perform these actions is bosh.teams.<team>.admin. Note: a team admin cannot view or update the director’s cloud config, nor upload releases or stemcells.

For more details on how to set up and use BOSH teams, see Director teams and permissions configuration.

Unused BOSH permissions

The team admin authority also allows the following actions, which currently are not used by the ODB:

  • bosh start/stop/recreate
  • bosh cck
  • bosh ssh
  • bosh logs
  • bosh releases
  • bosh stemcells

PCF IPsec Add-On

The ODB has been tested with the PCF IPsec Add-On, and it appears to work. Note that we excluded the BOSH director itself from IPsec ranges, as the BOSH add-on cannot be applied to BOSH itself.

Troubleshooting

Administer Service Instances

We recommend using the BOSH CLI gem for administering the deployments created by ODB; for example, checking VMs, SSHing into them, and viewing logs.

We recommend against using the BOSH CLI to update or delete ODB service deployments, as this might trigger a race condition with Cloud Controller-initiated updates or deletes, or result in ODB overriding your snowflake changes at the next deploy. All updates to service instances must be done using the upgrade-all-service-instances errand.

Logs

The on-demand broker writes logs to a log file, and to syslog.

The broker log contains error messages and non-zero exit codes returned by the service adapter, as well as the stdout and stderr streams of the adapter.

The log file is located at /var/vcap/sys/log/broker/broker.log. In syslog, logging is written with the tag on-demand-service-broker, under the facility user, with priority info.

If you want to forward syslog to a syslog aggregator, we recommend co-locating a syslog release with the broker.
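As a sketch, assuming the syslog_forwarder job from the syslog release (property names vary between syslog releases, so check the job spec for the version you deploy):

- name: syslog_forwarder
  release: syslog
  properties:
    syslog:
      address: <syslog aggregator host>
      port: <syslog aggregator port>
      transport: tcp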

The ODB generates a UUID for each request and prefixes all logs for that request with it, e.g.

on-demand-service-broker: [on-demand-service-broker] [4d63080d-e038-45a3-85f9-93910f6b40b1] 2016/09/05 16:43:26.123456 a valid UAA token was found in cache, will not obtain a new one

NB: The ODB’s negroni server and startup logs are not prefixed with a request ID.

All ODB logs have UTC timestamps.

Metrics

If you have configured service metrics, metrics should be visible from Loggregator. You can consume these using the CF CLI firehose plugin.

For each plan, the metrics report how many instances of that plan exist and, if a quota is set, how much of that quota remains. The metrics are in the format shown below.

origin:"<broker deployment name>" eventType:ValueMetric timestamp:<timestamp> deployment:"<broker deployment name>" job:"broker" index:"<bosh job index>" ip:"<IP>" valueMetric:<name:"/on-demand-broker/<service offering name>/<plan name>/total_instances" value:<instance count> unit:"count" >
origin:"<broker deployment name>" eventType:ValueMetric timestamp:<timestamp> deployment:"<broker deployment name>" job:"broker" index:"<bosh job index>" ip:"<IP>" valueMetric:<name:"/on-demand-broker/<service offering name>/<plan name>/quota_remaining" value:<quota remaining> unit:"count" >

If quota_remaining is 0, you need to increase your plan quota in the BOSH manifest.
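For example, with the firehose plugin installed you can watch for these metrics as follows; flag support varies between plugin versions, so verify against the plugin’s help output:

$ cf nozzle | grep on-demand-broker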

Identify Deployments in BOSH

There is a one-to-one mapping between the service instance ID in Cloud Foundry and the deployment name in BOSH: by convention, the BOSH deployment name is the service instance ID prefixed with service-instance_. To identify the BOSH deployment for a service instance:

  1. Determine the GUID of the service instance

    $ cf service --guid <service-name>
    

  2. Identify the deployment in the output of bosh deployments by looking for service-instance_<GUID>

  3. Get the current tasks for the deployment by running

    $ bosh tasks --deployment service-instance_<GUID>
    

Identify Tasks in BOSH

Most operations on the on-demand service broker API are implemented by launching BOSH tasks. If an operation fails, it may be useful to investigate the corresponding BOSH task. To do this:

  1. Determine the ID of the service for which an operation failed. You can do this using the Cloud Foundry CLI:

    $ cf service --guid <service name>
    

  2. SSH on to the service broker VM:

    $ bosh deployment <path to broker manifest>
    $ bosh ssh
    

  3. In the broker log, look for lines relating to the service, identified by the service ID. Lines recording the starting and finishing of BOSH tasks will also have the BOSH task ID:

    on-demand-service-broker: [on-demand-service-broker] [4d63080d-e038-45a3-85f9-93910f6b40b1] 2016/04/13 09:01:50.793965 Bosh task id for Create instance 30d4a67f-d220-4d06-9989-58a976b86b35 was 11470
    on-demand-service-broker: [on-demand-service-broker] [4d63080d-e038-45a3-85f9-93910f6b40b1] 2016/04/13 09:06:55.793976 task 11470 success creating deployment for instance 30d4a67f-d220-4d06-9989-58a976b86b35: create deployment
    
    on-demand-service-broker: [on-demand-service-broker] [8bf5c9f6-7acd-4ab4-9214-363a6f6bef79] 2016/04/13 09:16:20.795035 Bosh task id for Update instance 30d4a67f-d220-4d06-9989-58a976b86b35 was 11473
    on-demand-service-broker: [on-demand-service-broker] [8bf5c9f6-7acd-4ab4-9214-363a6f6bef79] 2016/04/13 09:17:20.795181 task 11473 success updating deployment for instance 30d4a67f-d220-4d06-9989-58a976b86b35: create deployment
    
    on-demand-service-broker: [on-demand-service-broker] [af6fab15-c95e-438b-aa6b-bc4329d4154f] 2016/04/13 09:17:52.803824 Bosh task id for Delete instance 30d4a67f-d220-4d06-9989-58a976b86b35 was 11474
    on-demand-service-broker: [on-demand-service-broker] [af6fab15-c95e-438b-aa6b-bc4329d4154f] 2016/04/13 09:19:56.803938 task 11474 success deleting deployment for instance 30d4a67f-d220-4d06-9989-58a976b86b35: delete deployment service-instance_30d4a67f-d220-4d06-9989-58a976b86b35
    
  4. Use the task ID to obtain the task log from BOSH (adding flags such as --debug or --cpi as necessary):

    $ bosh task <task_ID>
    

Identify Issues When Connecting to BOSH or UAA

The ODB interacts with the BOSH director to provision and deprovision instances, and is authenticated via the director’s UAA. See Core Broker Configuration for an example configuration.

If BOSH and/or UAA are misconfigured in the broker’s manifest, meaningful error messages will be displayed in the broker’s log, indicating whether the issue is caused by an unreachable destination or bad credentials.

For example:

on-demand-service-broker: [on-demand-service-broker] [575afbc1-b541-481d-9cde-b3d3e67e87bf] 2016/05/18 15:56:40.100579 Error authenticating (401): {"error":"unauthorized","error_description":"Bad credentials"}, please ensure that properties.<broker-job>.bosh.authentication.uaa is correct and try again.

List Service Instances

The ODB persists the list of ODB-deployed service instances and provides an endpoint to retrieve them. This endpoint requires basic authentication.

During disaster recovery this endpoint could be used to assess the situation.

Request

GET http://username:password@<ON_DEMAND_BROKER_IP>:8080/mgmt/service_instances

Response

200 OK

Example JSON body:

  [
    {
      "instance_id": "4d19462c-33cf-11e6-91cc-685b3585cc4e",
      "plan_id": "60476620-33cf-11e6-a841-685b3585cc4e",
      "bosh_deployment_name": "service-instance_4d19462c-33cf-11e6-91cc-685b3585cc4e"
    },
    {
      "instance_id": "57014734-33cf-11e6-ba8d-685b3585cc4e",
      "plan_id": "60476620-33cf-11e6-a841-685b3585cc4e",
      "bosh_deployment_name": "service-instance_57014734-33cf-11e6-ba8d-685b3585cc4e"
    }
  ]
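For example, using curl with the broker’s basic auth credentials:

$ curl -u <broker username>:<broker password> http://<ON_DEMAND_BROKER_IP>:8080/mgmt/service_instances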

List Orphan Deployments

The On-Demand Broker provides an endpoint that compares the list of service instance deployments against the service instances registered in Cloud Foundry. When called, the endpoint returns a list of orphaned deployments, if any are present.

This endpoint is exercised in the orphan-deployments errand. To call it without running the errand, use curl:

Request

GET http://username:password@<ON_DEMAND_BROKER_IP>:8080/mgmt/orphan_deployments
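For example:

$ curl -u <broker username>:<broker password> http://<ON_DEMAND_BROKER_IP>:8080/mgmt/orphan_deployments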

Response

200 OK

Example JSON body:

[
  {
      "deployment_name": "service-instance_d482abd3-8051-48d2-8067-9ccdf02327f3"
  }
]