Operating an On-Demand Broker
- Operator Responsibilities
- Set Up Networking
- Configure Your BOSH Director
- Upload Required Releases
- Write a Broker Manifest
- (Optional) Enable HTTPS
- (Optional) Enable Storing Manifest Secrets on BOSH CredHub
- (Optional) Enable Secure Binding
- (Optional) Enable Plan Schemas
- (Optional) Register the Route to the Broker
- (Optional) Set Service Instance Quotas
- (Optional) Set Resource Quotas
- (Optional) Configure Service Metrics
- (Optional) Use BOSH DNS Addresses for Bindings
- (Optional) Enable Telemetry
- About Broker Startup Checks
- About Broker Shutdown
- About ODB and BOSH Process Manager (bpm)
- Service Instance Lifecycle Errands
This topic provides information for Pivotal Cloud Foundry (PCF) Ops Manager operators and BOSH operators about operating the on-demand broker.
Operator Responsibilities
Operators are responsible for:
Requesting appropriate networking rules for on-demand service tiles. See Set Up Networking below.
Configuring the BOSH Director. See Configure Your BOSH Director below.
Uploading the required releases for the broker deployment and service instance deployments. See Upload Required Releases below.
Writing a broker manifest. See Write a Broker Manifest below.
Managing brokers and service plans. See Broker and Service Management.
Note: Pivotal recommends that you provide documentation when you make changes to the manifest to inform other operators about the new configurations.
Set Up Networking
Before deploying a service tile that uses the on-demand service broker (ODB), you must create networking rules to enable components to communicate with ODB. For instructions for creating networking rules, see the documentation for your IaaS.
The following table lists key components and their responsibilities in the on-demand architecture.
Key Components | Component Responsibilities
---|---
BOSH Director | Creates and updates service instances as instructed by ODB.
BOSH Agent | Adds an agent on every VM that it deploys. The agent listens for instructions from the BOSH Director and executes those instructions. The agent receives job specifications from the BOSH Director and uses them to assign a role or job to the VM.
BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.
Pivotal Application Service (PAS) | Contains the apps that consume services.
ODB | Instructs BOSH to create and update services. Connects to services to create bindings.
Deployed service instance | Runs the given service. For example, a deployed On-Demand Services SDK service instance runs the RabbitMQ service.
Regardless of the specific network layout, you must ensure network
rules are set up so that connections are open as described in the table below.
Source Component | Destination Component | Default TCP Port | Notes
---|---|---|---
ODB | BOSH Director, BOSH UAA | 25555, 8443 | The default ports are not configurable.
ODB | Deployed service instances | Specific to the service (such as RabbitMQ for PCF). May be one or more ports. | This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection.
ODB | PAS | 8443 | The default port is not configurable.
Errand VMs | PAS, ODB, deployed service instances | 8443, 8080, and ports specific to the service (may be one or more) | The default ports are not configurable.
BOSH Agent | BOSH Director | 4222 | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. The communication between these components is two-way.
Deployed apps on PAS | Deployed service instances | Specific to the service. May be one or more ports. | This connection is for general use, app-specific tasks. Avoid opening administrative ports for this connection.
PAS | ODB | 8080 | This port can be different for individual services. This port can also be configurable by the operator if allowed by the tile developer.
Configure Your BOSH Director
See the following topics for how to set up your BOSH Director:
Software Requirements
ODB requires:
- BOSH Director v266.12.0 or v267.6.0 and later. To install the BOSH Director, see Quick Start in the BOSH documentation.
- cf-release v1.10.0 or later (PCF v2.0 or later).
- Note: ODB does not support BOSH Windows.
- Service instance lifecycle errands require BOSH Director v261 on PCF v1.10 or later. For more information, see Service Instance Lifecycle Errands below.
Configure CA Certificates for TLS Communication
There are two kinds of communication in ODB that use transport layer security (TLS) and need to validate certificates using a certificate authority (CA) certificate:
- ODB to BOSH Director
- ODB to Cloud Foundry API (Cloud Controller)
The CA certificates used to sign the BOSH and Cloud Controller certificates are often generated by BOSH, CredHub, or a customer security team, and so are not publicly trusted certificates. This means you might need to provide the CA certificates to ODB so that it can perform the required validation.
ODB to BOSH Director
In some rare cases where the BOSH Director is not installed through Ops Manager, BOSH can be configured to be publicly accessible with a domain name and a TLS certificate issued by a public certificate authority. In such a case, you can navigate to https://BOSH-DOMAIN-NAME:25555/info in a browser and see a trusted certificate padlock in the browser address bar.
In this case, ODB can be configured to use this address for BOSH, and it does not require a CA certificate to be provided. The public CA certificate is already present on the ODB VM.
By contrast, BOSH is usually only accessible on an internal network. It uses a certificate signed by an internal CA. The CA certificate must be provided in the broker configuration so that ODB can validate the BOSH Director’s certificate. ODB always validates BOSH TLS certificates.
You have two options for providing a CA certificate to ODB for validation of the BOSH certificate. You can add the BOSH Director’s root certificate to the ODB manifest, or you can use BOSH’s trusted_certs feature to add a self-signed CA certificate to each VM that BOSH deploys.
To add the BOSH Director’s root certificate to the ODB manifest, edit the manifest as follows:

bosh:
  root_ca_cert: ROOT-CA-CERT

Where ROOT-CA-CERT is the root certificate authority (CA) certificate. This is the certificate used when following the steps in Configuring SSL Certificates in the BOSH documentation.

For example:

instance_groups:
- name: broker
  jobs:
  - name: broker
    properties:
      bosh:
        root_ca_cert: |
          -----BEGIN CERTIFICATE-----
          EXAMPLExxOFxxAxxCERTIFICATE
          ...
          -----END CERTIFICATE-----
        authentication:
          ...
To use BOSH’s trusted_certs feature to add a self-signed CA certificate to each VM that BOSH deploys, follow the steps below.

1. Generate and use self-signed certificates for the BOSH Director and User Account and Authentication (UAA) through the trusted_certs feature. For instructions, see Configuring Trusted Certificates in the BOSH documentation.
2. Add trusted certificates to your BOSH Director. For instructions, see Configuring SSL Certificates in the BOSH documentation.
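For illustration, trusted_certs is a property of the BOSH Director job itself. The following is a minimal sketch of a director manifest fragment, assuming a director that you deploy and configure yourself rather than through Ops Manager; the certificate value is a placeholder:

properties:
  director:
    trusted_certs: |
      -----BEGIN CERTIFICATE-----
      YOUR-SELF-SIGNED-CA-CERTIFICATE
      -----END CERTIFICATE-----

After the director is redeployed, BOSH places this CA certificate on every VM it creates, so the ODB VM can validate the BOSH Director’s certificate without broker-side configuration.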
ODB to Cloud Controller
You can configure a separate root CA certificate that is used when ODB communicates with the Cloud Foundry API (Cloud Controller). This is necessary if the Cloud Controller is configured with a certificate not trusted by the broker.
For an example of how to add a separate root CA certificate to the manifest, see the line containing CA-CERT-FOR-CLOUD-CONTROLLER in the manifest snippet in Starter Snippet for Your Broker below.
Use BOSH Teams
You can use BOSH teams to further control how BOSH operations are available to different clients. For more information about BOSH teams, see Using BOSH Teams in the BOSH documentation.
To use BOSH teams to ensure that your on-demand service broker client can only modify deployments it created:
1. Run the following UAA CLI (UAAC) command to create the client:

   uaac client add CLIENT-ID \
     --secret CLIENT-SECRET \
     --authorized_grant_types "refresh_token password client_credentials" \
     --authorities "bosh.teams.TEAM-NAME.admin"

   Where:
   - CLIENT-ID is your client ID.
   - CLIENT-SECRET is your client secret.
   - TEAM-NAME is the name of the team authorized to modify this deployment.

   For example:

   uaac client add admin \
     --secret 12345679 \
     --authorized_grant_types "refresh_token password client_credentials" \
     --authorities "bosh.teams.my-team.admin"
For more information about using the UAAC, see Creating and Managing Users with the UAA CLI (UAAC).
2. Configure the broker’s BOSH authentication. For example:

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       ...
       properties:
         ...
         bosh:
           url: DIRECTOR-URL
           root_ca_cert: CA-CERT-FOR-BOSH-DIRECTOR # optional, see SSL certificates
           authentication:
             uaa:
               client_id: BOSH-CLIENT-ID
               client_secret: BOSH-CLIENT-SECRET

   Where BOSH-CLIENT-ID and BOSH-CLIENT-SECRET are the CLIENT-ID and CLIENT-SECRET you provided in step 1.
The broker can then only perform BOSH operations on deployments it has created. For a more detailed manifest snippet, see Starter Snippet for Your Broker below.
For more information about securing how ODB uses BOSH, see Security.
Set Up Cloud Controller
ODB uses the Cloud Controller as a source of truth for service offerings, plans, and instances.
To reach the Cloud Controller, configure ODB with either client or user credentials in the broker manifest. For more information, see Write a Broker Manifest below.
Note: The client or user must have the following permissions:

- If using client credentials, then, as of Cloud Foundry v238, the UAA client must have the authority cloud_controller.admin.
- If using user credentials, then the user must be a member of the scim.read and cloud_controller.admin groups.
The following is an example broker manifest snippet for the client credentials:
authentication:
...
client_credentials:
client_id: UAA-CLIENT-ID
secret: UAA-CLIENT-SECRET
The following is an example broker manifest snippet for the user credentials:
authentication:
...
user_credentials:
username: CF-ADMIN-USERNAME
password: CF-ADMIN-PASSWORD
Upload Required Releases
Upload the following releases to your BOSH Director:
- On Demand Service Broker (ODB)—Download ODB from Pivotal Network.
- Your service adapter—Get the service adapter from the release author.
- Your service release—Get the service release from the release author.
- BOSH Process Manager (bpm) release—Get the bpm release from the location listed in BOSH releases in the BOSH documentation. You might not need to do this if the bpm release is already uploaded.
To upload a release to your BOSH Director, run the following command:
bosh -e BOSH-DIRECTOR-NAME upload-release RELEASE-FILE-NAME.tgz
Example command for ODB:
$ bosh -e lite upload-release on-demand-service-broker-0.22.0.tgz
Example commands for service adapter or service release:
$ bosh -e lite upload-release my-service-release.tgz
$ bosh -e lite upload-release my-service-adapter.tgz
Write a Broker Manifest
There are two parts to writing your broker manifest. You must:

- Configure your broker. See Configure Your Broker below.
- Configure your service catalog and compose plans. See Configure Your Service Catalog and Plan Composition below.
If you are unfamiliar with writing BOSH v2 manifests, see Deployment Config.
Two example manifests are below.
For a Redis service—redis-example-service-adapter-release in GitHub.
For a Kafka service—kafka-example-service-adapter-release in GitHub.
Configure Your Broker
Your manifest must contain exactly one non-errand instance group that is co-located with both:
- The broker job from on-demand-service-broker
- Your service adapter job from your service adapter release
The broker is stateless and does not need a persistent disk. Its VM type can be small: a single CPU and 1 GB of memory are sufficient in most cases.
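For illustration, a small VM type for the broker might be defined in the BOSH Director’s global cloud config as in the following sketch. The vm_type name is arbitrary, and cloud_properties are IaaS-specific; this example assumes the AWS CPI:

vm_types:
- name: broker-small
  cloud_properties:
    # IaaS-specific; on AWS this selects a small, single-CPU instance
    instance_type: t2.small

The broker instance group then references this name in its vm_type field, shown as VM-TYPE in the snippet below.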
Starter Snippet for Your Broker
Use the snippet below to help you to configure your broker. The snippet uses BOSH v2 syntax as well as global cloud config and job-level properties.
For examples of complete broker manifests, see Write a Broker Manifest above.
Warning: The disable_ssl_cert_verification option is dangerous and should be set to false in production.
addons:
# Broker uses bpm to isolate co-located BOSH jobs from one another
- name: bpm
jobs:
- name: bpm
release: bpm
instance_groups:
- name: NAME-OF-YOUR-CHOICE
instances: 1
vm_type: VM-TYPE
stemcell: STEMCELL
networks:
- name: NETWORK
jobs:
- name: SERVICE-ADAPTER-JOB-NAME
release: SERVICE-ADAPTER-RELEASE
- name: broker
release: on-demand-service-broker
properties:
# choose a port and basic authentication credentials for the broker:
port: BROKER-PORT
username: BROKER-USERNAME
password: BROKER-PASSWORD
# optional - defaults to false. This should not be set to true in production.
disable_ssl_cert_verification: TRUE|FALSE
# optional - defaults to 60 seconds. This enables the broker to gracefully wait for any open requests to complete before shutting down.
shutdown_timeout_in_seconds: 60
# optional - defaults to false. This enables BOSH operational errors to be displayed for the CF user.
expose_operational_errors: TRUE|FALSE
# optional - defaults to false. If set to true, plan schemas are included in the catalog, and the broker fails if the adapter does not implement generate-plan-schemas.
enable_plan_schemas: TRUE|FALSE
cf:
url: CF-API-URL
# optional - see the Configure CA Certificates section above:
root_ca_cert: CA-CERT-FOR-CLOUD-CONTROLLER
# either client_credentials or user_credentials, not both as shown:
authentication:
url: CF-UAA-URL
client_credentials:
# with cloud_controller.admin authority and client_credentials in the authorized_grant_type:
client_id: UAA-CLIENT-ID
secret: UAA-CLIENT-SECRET
user_credentials:
# in the cloud_controller.admin and scim.read groups:
username: CF-ADMIN-USERNAME
password: CF-ADMIN-PASSWORD
bosh:
url: DIRECTOR-URL
# optional - see the Configure CA Certificates section above:
root_ca_cert: CA-CERT-FOR-BOSH-DIRECTOR
        # either basic or uaa, not both as shown:
authentication:
basic:
username: BOSH-USERNAME
password: BOSH-PASSWORD
uaa:
client_id: BOSH-CLIENT-ID
client_secret: BOSH-CLIENT-SECRET
service_adapter:
# optional - provided by the service author. Defaults to /var/vcap/packages/odb-service-adapter/bin/service-adapter.
path: PATH-TO-SERVICE-ADAPTER-BINARY
# optional - Filesystem paths to be mounted for use by the service adapter. These should include the paths to any config files.
mount_paths: [ PATH-TO-SERVICE-ADAPTER-CONFIG ]
# There are more broker properties that are discussed below
Configure Your Service Catalog and Plan Composition
Use the following sections as a guide to configure the service catalog and compose plans in the properties section of the broker job. For an example snippet, see the Starter Snippet for the Service Catalog and Plans below.
Configure the Service Catalog
When configuring the service catalog, supply:
The release jobs specified by the service author:
- Supply each release job exactly once.
- You can include releases that provide many jobs, as long as each required job is provided by exactly one release.
Stemcells:
Note: If you use Xenial stemcells, you must update any BOSH add-ons to support Xenial stemcells. For links to instructional topics about updating see Update Add-ons to Run with Xenial Stemcell.
- These are used on each VM in the service deployments.
- Use exact versions for releases and stemcells. The use of latest and floating stemcells is not supported.
Cloud Foundry service metadata for the service offering:
- This metadata is aggregated in the Marketplace and displayed in Apps Manager and the cf CLI.
- You can use other arbitrary field names as needed in addition to the Open Service Broker API (OSBAPI) recommended fields. For information about the recommended fields for service metadata, see the Open Service Broker API Profile.
Compose Plans
Service authors do not define plans, but instead expose plan properties. Operators compose plans consisting of combinations of these properties, along with IaaS resources and catalog metadata.
When composing plans, supply:
Cloud Foundry plan metadata for each plan:

- You can use other arbitrary field names in addition to the OSBAPI recommended fields. For information about the recommended fields for plan metadata, see the Open Service Broker API Profile in GitHub.

Resource mapping:
- For each plan, supply resource mapping for each instance group that service authors specify.
- The resource values must correspond to valid resource definitions in the BOSH Director’s global cloud config.
- Service authors might recommend resource configuration. For example, in single-node Redis deployments, an instance count greater than one does not make sense. Here, you can configure the deployment to span multiple availability zones (AZs). For how to do this, see Availability Zones in the BOSH documentation.
- Service authors might provide errands for the service release. You can add an instance group of type errand by setting the lifecycle field. For an example, see register-broker in the kafka-example-service-adapter-release in GitHub.
Values for plan properties:
- Plan properties are key-value pairs defined by the service authors. Examples include a boolean to enable disk persistence for Redis, or a list of strings representing RabbitMQ plugins to load.
- The service author should document whether a plan property:
- Is mandatory or optional
- Precludes the use of another
- Affects recommended instance group to resource mappings
- You can also specify global properties at the service offering level, where they are applied to every plan. If there is a conflict between global and plan-level properties, the plan properties take precedence, as shown in the sketch after this list.
(Optional) Provide an update block for each plan
- You might require plan-specific configuration for BOSH’s update instance operation. ODB passes the plan-specific update block to the service adapter.
- Plan-specific update blocks should have the same structure as the update block in a BOSH manifest. See Update Block in the BOSH documentation.
- The service author can define a default update block to be used when a plan-specific update block is not provided, if the service adapter supports configuring update blocks in the manifest.
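For illustration, the following sketch shows a global property overridden at the plan level. The persistence key is a hypothetical plan property; valid keys are defined by your service author:

service_catalog:
  ...
  global_properties:
    persistence: true
  plans:
  - name: small-plan
    ...
    properties:
      # overrides the global value for instances of this plan only
      persistence: false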
Starter Snippet for the Service Catalog and Plans
Append the snippet below to the properties section of the broker job that you configured in Configure Your Broker above. Ensure that you provide the required information listed in Configure Your Service Catalog and Plan Composition above.
For examples of complete broker manifests, see Write a Broker Manifest above.
service_deployment:
releases:
- name: SERVICE-RELEASE
# exact release version:
version: SERVICE-RELEASE-VERSION
# service author specifies the list of jobs required:
jobs: [RELEASE-JOBS-NEEDED-FOR-DEPLOYMENT-AND-LIFECYCLE-ERRANDS]
# every instance group in the service deployment has the same stemcell:
stemcells:
- os: SERVICE-STEMCELL
# exact stemcell version:
version: &stemcellVersion SERVICE-STEMCELL-VERSION
service_catalog:
id: CF-MARKETPLACE-ID
service_name: CF-MARKETPLACE-SERVICE-OFFERING-NAME
service_description: CF-MARKETPLACE-DESCRIPTION
bindable: TRUE|FALSE
# optional:
plan_updatable: TRUE|FALSE
# optional:
tags: [TAGS]
# optional:
requires: [REQUIRED-PERMISSIONS]
# optional:
dashboard_client:
id: DASHBOARD-OAUTH-CLIENT-ID
secret: DASHBOARD-OAUTH-CLIENT-SECRET
redirect_uri: DASHBOARD-OAUTH-REDIRECT-URI
# optional:
metadata:
display_name: DISPLAY-NAME
image_url: IMAGE-URL
long_description: LONG-DESCRIPTION
provider_display_name: PROVIDER-DISPLAY-NAME
documentation_url: DOCUMENTATION-URL
support_url: SUPPORT-URL
# optional - applied to every plan:
global_properties: {}
# optional:
global_quotas:
# the maximum number of service instances across all plans:
service_instance_limit: INSTANCE-LIMIT
# optional - global resource usage limits:
resources:
# arbitrary hash of resource types:
ips:
# global limit for this resource type - reaching this limit depends on the resource type’s 'cost', which is defined in each plan:
limit: RESOURCE-LIMIT
memory:
limit: RESOURCE-LIMIT
# optional - applied to every plan.
maintenance_info:
# keys under public are visible in service catalog
public:
# reference to stemcellVersion anchor above
stemcell_version: *stemcellVersion
# arbitrary public maintenance_info
kubernetes_version: 1.13 # optional
# arbitrary public maintenance_info
docker_version: 18.06.1
# all keys under private are hashed to single SHA value in service catalog
private:
# example of private data that would require a service update to change
      log_aggregator_mtls_cert: *YAML_ANCHOR_TO_MTLS_CERT
# optional - should conform to semver
version: 1.2.3-rc2
description: "OS image update.\nExpect downtime."
plans:
- name: CF-MARKETPLACE-PLAN-NAME
# optional - used by the cf CLI to display whether this plan is "free" or "paid":
free: TRUE|FALSE
plan_id: CF-MARKETPLACE-PLAN-ID
description: CF-MARKETPLACE-DESCRIPTION
    # optional - defaults to enable:
cf_service_access: ENABLE|DISABLE|MANUAL
# optional - if specified, this takes precedence over the bindable attribute of the service:
bindable: TRUE|FALSE
# optional:
metadata:
display_name: DISPLAY-NAME
bullets: [BULLET1, BULLET2]
costs:
- amount:
CURRENCY-CODE-STRING: CURRENCY-AMOUNT-FLOAT
unit: FREQUENCY-OF-COST
# optional:
quotas:
# the maximum number of service instances for this plan:
service_instance_limit: INSTANCE-LIMIT
# optional - resource usage limits for this plan:
resources:
# arbitrary hash of resource types:
memory:
# optional - overwrites global limit for this resource type:
limit: RESOURCE-LIMIT
# optional – the amount of the quota that each service instance of this plan uses:
cost: RESOURCE-COST
# resource mapping for the instance groups defined by the service author:
instance_groups:
- name: SERVICE-AUTHOR-PROVIDED-INSTANCE-GROUP-NAME
vm_type: VM-TYPE
# optional:
vm_extensions: [VM-EXTENSIONS]
instances: &instanceCount INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
# optional:
persistent_disk_type: DISK
# optional:
- name: SERVICE-AUTHOR-PROVIDED-LIFECYCLE-ERRAND-NAME
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
# valid property key-value pairs are defined by the service author:
properties: {}
# optional
maintenance_info:
# optional - keys merge with catalog level public maintenance_info keys
public:
# refers to anchor in instance group above
instance_count: *instanceCount
# optional
private: {}
# optional - should conform to semver
version: 1.2.3-rc3
# optional:
update:
# optional:
canaries: 1
# required:
max_in_flight: 2
# required:
canary_watch_time: 1000-30000
# required:
update_watch_time: 1000-30000
# optional:
serial: true
# optional:
lifecycle_errands:
# optional:
post_deploy:
- name: ERRAND-NAME
# optional - for co-locating errand:
instances: [INSTANCE-NAME, ...]
- name: ANOTHER_ERRAND_NAME
# optional:
pre_delete:
- name: ERRAND-NAME
# optional - for co-locating errand:
instances: [INSTANCE-NAME, ...]
(Optional) Enable HTTPS
Brokers normally operate in a secure network environment. By default, brokers communicate with the platform over HTTP, so this communication is not encrypted. You can configure the broker to accept only HTTPS connections.
To enable HTTPS, provide a server certificate and private key in the broker manifest. For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
tls:
certificate: |
SERVER-CERTIFICATE
private_key: |
SERVER-PRIVATE-KEY
When HTTPS is enabled, the broker only accepts connections that use TLS v1.2 and later. The broker also accepts only the following cipher suites:
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
(Optional) Enable Storing Manifest Secrets on BOSH CredHub
Note: This feature does not work if you have configured use_stdin to be false.
To avoid writing secrets in plaintext in the manifest, you can use ODB-managed secrets to store secrets on BOSH CredHub. When using ODB-managed secrets, the service adapter generates secrets and uses ODB as a proxy to the CredHub config server. For information for service authors about how to store manifest secrets on CredHub, see (Optional) Store Secrets on BOSH CredHub.
Secrets in the manifest can be:
- BOSH variables
- Literal BOSH CredHub references
- Plain text
If you use BOSH variables or literal CredHub references in your manifest, do the following in the ODB manifest so that the service adapter can access the secrets:

1. Set the enable_secure_manifests flag to true. For example:

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       ...
       properties:
         ...
         enable_secure_manifests: true
         ...
2. Supply details for accessing the credentials stored in BOSH CredHub. Replace the placeholder text below with your values for accessing CredHub:

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       ...
       properties:
         ...
         enable_secure_manifests: true
         bosh_credhub_api:
           url: https://BOSH-CREDHUB-ADDRESS:8844/
           root_ca_cert: BOSH-CREDHUB-CA-CERT
           authentication:
             uaa:
               client_credentials:
                 client_id: BOSH-CREDHUB-CLIENT-ID
                 client_secret: BOSH-CREDHUB-CLIENT-SECRET
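For illustration, a plan property can then reference a secret instead of embedding it in plaintext. In the following sketch, the property names are hypothetical, and the second value is a literal, absolute BOSH CredHub reference:

service_catalog:
  ...
  plans:
  - name: my-plan
    ...
    properties:
      # a BOSH variable, resolved from BOSH CredHub at deploy time
      service_password: ((service_password))
      # a literal BOSH CredHub reference
      external_ca: ((/my-director/my-deployment/external_ca))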
(Optional) Enable Secure Binding
Note: This feature does not work if you have configured use_stdin to be false.
If you enable secure binding, binding credentials are stored securely in runtime CredHub. When users create bindings or service keys, ODB passes a secure reference to the service credentials through the network instead of in plaintext.
Requirements
To store service credentials in runtime CredHub, your deployment must meet the following requirements:
It must be able to connect to runtime CredHub v1.6.x or later. This might be provided as part of your Cloud Foundry deployment.
Your instance group must have access to the local DNS provider. This is because the address for runtime CredHub is a local domain name.
Note: Pivotal recommends using BOSH DNS as a DNS provider. If you use PCF v2.4 or later, you cannot use consul as a DNS provider because consul server VMs have been removed in Pivotal Application Service (PAS) v2.4.
Procedure for Enabling Secure Binding
To enable secure binding:
1. Set up a new runtime CredHub client in Cloud Foundry UAA with credhub.write and credhub.read in its list of scopes. For how to do this, see Creating and Managing Users with the UAA CLI (UAAC) in the Cloud Foundry documentation.

2. Update the broker job in the ODB manifest to consume the runtime CredHub link. For example:

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       consumes:
         credhub:
           from: credhub
           deployment: cf

3. Update the broker job in the ODB manifest to include the secure_binding_credentials section. The CA certificate can be a reference to the certificate in the cf deployment or inserted manually. For example:

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       ...
       properties:
         ...
         secure_binding_credentials:
           enabled: true
           authentication:
             uaa:
               client_id: NEW-CREDHUB-CLIENT-ID
               client_secret: NEW-CREDHUB-CLIENT-SECRET
               ca_cert: ((cf.uaa.ca_cert))

   Where NEW-CREDHUB-CLIENT-ID and NEW-CREDHUB-CLIENT-SECRET are the runtime CredHub client credentials you created in step 1.
For a more detailed manifest snippet, see Starter Snippet for Your Broker above.
How Credentials Are Stored on Runtime CredHub
The credentials for a given service binding are stored with the following format:
/c/:SERVICE-GUID/:SERVICE-INSTANCE-GUID/:BINDING-GUID/credentials
The plaintext credentials are stored in runtime CredHub under this key, and the key is available under the VCAP_SERVICES environment variable for the app.
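For illustration, with secure binding enabled, a bound app sees a reference to the credentials rather than the credentials themselves. The following sketch of a VCAP_SERVICES entry uses hypothetical names and placeholder GUIDs:

{
  "my-service": [
    {
      "name": "my-service-instance",
      "credentials": {
        "credhub-ref": "/c/SERVICE-GUID/SERVICE-INSTANCE-GUID/BINDING-GUID/credentials"
      }
    }
  ]
}

Platform components resolve the credhub-ref key against runtime CredHub before the credentials reach the app.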
(Optional) Enable Plan Schemas
As of OSBAPI Spec v2.13, ODB supports enabling plan schemas. For more information, see OSBAPI Spec v2.13 in GitHub.
When this feature is enabled, the broker validates incoming configuration parameters against a schema during the provision, binding, and update of service instances. The broker produces an error if the parameters do not conform.
To enable plan schemas:
1. Ensure that the service adapter implements the generate-plan-schemas command. If it is not implemented, the broker fails to deploy. For more information about this command, see generate-plan-schemas.

2. In the manifest, set the enable_plan_schemas flag to true as shown below. The default is false.

   instance_groups:
   - name: broker
     ...
     jobs:
     - name: broker
       ...
       properties:
         ...
         enable_plan_schemas: true
For a more detailed manifest snippet, see Starter Snippet for Your Broker above.
(Optional) Register the Route to the Broker
You can register a route to the broker using the route_registrar job from the routing release.
The route_registrar job:
- Load balances multiple instances of ODB using the Cloud Foundry router
- Allows access to ODB from the public internet
For more information, see route_registrar job.
To register the route, co-locate the route_registrar job with on-demand-service-broker:
- Download the routing release. See cf-routing Release for more information about doing so.
- Upload the routing release to your BOSH Director.
- Add the route_registrar job to your deployment manifest and configure it with an HTTP route. This creates a URI for your broker. For how to configure the route_registrar job, see routing release in GitHub.
  Note: You must use the same port for the broker and the route. The broker defaults to 8080.
- If you configure a route, set the broker_uri property in the register-broker errand.
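For illustration, the co-located job might look like the following sketch. The route name and URI are assumptions, and the link source for NATS depends on your Cloud Foundry deployment; check the routing release job spec for the full set of properties:

- name: route_registrar
  release: routing
  consumes:
    nats:
      from: nats
      deployment: cf
  properties:
    route_registrar:
      routes:
      - name: my-odb-broker
        # must match the broker's port property, which defaults to 8080
        port: 8080
        registration_interval: 20s
        uris:
        - my-odb-broker.SYSTEM-DOMAIN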
(Optional) Set Service Instance Quotas
You can set service instance quotas to limit the number of service instances ODB can create.
There are two types of service instance quotas:
Global quotas – limit the number of service instances across all plans
Plan quotas – limit the number of service instances for a given plan
Note: These limits do not include orphaned deployments. For more information, see List Orphan Deployments and Delete Orphaned Deployments.
When creating a service instance, ODB checks the global service instance limit. If this limit has not been reached, ODB checks the plan service instance limit. If no limits have been reached, the service instance is created.
Procedure for Setting Service Instance Quotas
To set service instance quotas, do the following in the manifest:
To set global quotas, add a global_quotas section to the service catalog:

service_catalog:
  ...
  global_quotas:
    service_instance_limit: INSTANCE-LIMIT
  ...
To set plan quotas, add a quotas section to the plans that you want to limit:

service_catalog:
  ...
  plans:
  - name: CF-MARKETPLACE-PLAN-NAME
    quotas:
      service_instance_limit: INSTANCE-LIMIT
Where INSTANCE-LIMIT is the maximum number of service instances allowed.
For a more detailed manifest snippet, see the Starter Snippet for the Service Catalog and Plans above.
(Optional) Set Resource Quotas
You can set resource quotas to limit the amount of a particular resource that each service instance can use. If you want to limit physical resources, such as memory, persistent disk size, or the number of IP addresses in the network, setting resource quotas can give you more control than service instance quotas.
A resource quota is defined by an arbitrary resource type with two associated keys, limit and cost. The resource limit is the maximum amount of a resource that is permitted. The resource cost represents how much of the resource limit a service instance of a plan consumes. For example, with a global memory limit of 150 and a plan memory cost of 5, at most 30 service instances of that plan can be created.
There are two types of resource quotas:
Global quotas – limit how much of a resource is available for all plans to consume. ODB allows new instances to be created until the sum of resources consumed reaches the global quota, unless a plan quota is reached first. You cannot define resource costs at the global level.
Plan quotas – limit how much of a resource is available for a specific plan to consume. ODB allows new instances of a plan to be created until the resources consumed reach the plan’s quota. If there is no plan limit, then instances can be created until the global quota is reached. You can define resource costs at the plan level.
When creating a service instance, ODB checks the global resource limit for each resource type. If these limits have not been reached, ODB checks the plan resource limits. If no limits have been reached, the service instance is created.
Note: When calculating the amount of resources used, ODB does not take orphan deployments into consideration. For more information, see List Orphan Deployments and Delete Orphaned Deployments.
Procedure for Setting Resource Quotas
To set resource quotas, do the following in the manifest:
To set global quotas, add a global_quotas section to the service catalog:

global_quotas:
  resources:
    RESOURCE-NAME:
      limit: RESOURCE-LIMIT

Where:
- RESOURCE-NAME is a string defining the resource you want to limit.
- RESOURCE-LIMIT is the maximum amount allowed for the resource.

For example:

service_catalog:
  ...
  global_quotas:
    resources:
      ips:
        limit: 50
      memory:
        limit: 150
To set plan quotas, add a quotas section to the plans in which you want to limit resources:

quotas:
  resources:
    RESOURCE-NAME:
      limit: RESOURCE-LIMIT # optional - if not set, the limit defaults to the global limit
      cost: RESOURCE-COST

Where:
- RESOURCE-NAME is a string defining the resource you want to limit.
- RESOURCE-LIMIT is the maximum amount allowed for the resource.
- RESOURCE-COST is how much of the quota a service instance of the plan consumes for that resource.

For example:

service_catalog:
  ...
  plans:
  - name: my-plan
    quotas:
      resources:
        ips:
          cost: 2 # each service instance consumes 2 of the global "ips" limit of 50
        memory:
          limit: 25 # the maximum amount of "memory" this plan can consume
          cost: 5 # each service instance consumes 5 of the plan limit of 25
For a more detailed manifest snippet, see the Starter Snippet for the Service Catalog and Plans above.
(Optional) Configure Service Metrics
The ODB BOSH release contains a metrics job that can be used to emit metrics when co-located with the Pivotal Cloud Foundry Service Metrics SDK. To do this, you must include the Loggregator release.
To download the Pivotal Cloud Foundry Service Metrics SDK, see Pivotal Network.
Add the following jobs to the broker instance group:
- name: service-metrics
release: service-metrics
properties:
service_metrics:
execution_interval_seconds: INTERVAL-BETWEEN-SUCCESSIVE-METRICS-COLLECTIONS
origin: ORIGIN-TAG-FOR-METRICS
monit_dependencies: [broker] # you should hardcode this
....snip....
#Add Loggregator configurations here. For example, see https://github.com/pivotal-cf/service-metrics-release/blob/master/manifests
....snip....
- name: service-metrics-adapter
release: ODB-RELEASE
properties:
# The broker URI valid for the broker certificate including http:// or https://
broker_uri: BROKER-URI
tls:
# The CA certificate to use when communicating with the broker
ca_cert: CA-CERT
disable_ssl_cert_verification: TRUE|FALSE # defaults to false
Where:
- INTERVAL-BETWEEN-SUCCESSIVE-METRICS-COLLECTIONS is the interval in seconds between successive metrics collections.
- ORIGIN-TAG-FOR-METRICS is the origin tag for metrics.
- LOGGREGATOR-CONFIGURATION is your Loggregator configuration. For example manifests, see service-metrics-release in GitHub.
- ODB-RELEASE is the on-demand broker release.
For an example of how the service metrics can be configured for an on-demand-broker deployment, see the kafka-example-service-adapter-release manifest in GitHub.
Pivotal has tested this example configuration with Loggregator v58 and service-metrics v1.5.0.
For more information about service metrics, see Service Metrics for Pivotal Cloud Foundry.
Note: When service-metrics-adapter is not configured, the broker URI defaults to a BOSH-provided IP address or a BOSH DNS address, depending on the BOSH Director configuration. See Impact on links in the BOSH documentation. When the broker uses TLS, the broker certificate must contain this BOSH-provided address in its Subject Alternative Names section; otherwise, Cloud Foundry cannot verify the certificate. For details about how to insert a BOSH DNS address into a config server generated certificate, see BOSH DNS Addresses in Config Server Generated Certs in the BOSH documentation.
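For illustration, when the broker certificate is generated by the BOSH config server, the BOSH DNS address can be included as an alternative name through a variables block, as in the following sketch. All names here are hypothetical, and the exact DNS address depends on your deployment:

variables:
- name: broker-tls
  type: certificate
  options:
    ca: broker-ca # a CA variable defined elsewhere in the manifest
    common_name: BROKER-URI
    alternative_names:
    - BROKER-URI
    # illustrative BOSH DNS address for the broker instance group
    - q-s0.broker.default.my-broker-deployment.bosh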
(Optional) Use BOSH DNS Addresses for Bindings
You can configure ODB to retrieve BOSH DNS addresses for service instances. These addresses are passed to the service adapter when you create or delete a binding.
Requirements
- A service that has this feature enabled in the service adapter. For information for service authors about how to enable this feature for their on-demand service, see Enable BOSH DNS Addresses for Bindings.
- BOSH Director v266.12 or v267.6 and later, available in Ops Manager v2.2.5 and later
Procedure
To enable ODB to obtain BOSH DNS addresses, configure the binding_with_dns property in the manifest as follows on plans that require DNS addresses to create and delete bindings:
binding_with_dns:
- name: ADDRESS-NAME
link_provider: LINK-NAME
instance_group: INSTANCE-GROUP
properties:
azs: [AVAILABILITY-ZONES] # Optional
status: ADDRESS-STATUS # Optional
Where:
- ADDRESS-NAME is an arbitrary identifier used to identify the address when creating a binding.
- LINK-NAME is the exposed name of the link. You can find this in the documentation for the service and under provides.name in the release spec file. You can override it in the deployment manifest by setting the as property of the link.
- INSTANCE-GROUP is the name of the instance group sharing the link. The resultant DNS address resolves to the IP addresses of this instance group.
- AVAILABILITY-ZONES is a list of availability zone names. When this is provided, the resultant DNS address resolves to IP addresses in these zones.
- ADDRESS-STATUS is a filter for link address status. The permitted statuses are healthy, unhealthy, all, or default. When this is provided, the resultant DNS address resolves to IP addresses with this status.
For example:
service_catalog:
...
plans:
...
- name: plan-requiring-dns-addresses
...
binding_with_dns: # add this section
- name: leader-address
link_provider: example-link-1
instance_group: leader-node
- name: follower-address
link_provider: example-link-2
instance_group: follower-node
properties:
azs: [z1, z2]
status: healthy
Each entry in binding_with_dns is converted to a BOSH DNS address that is passed to the service adapter when you create a binding.
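For illustration, the addresses generated for the example plan above generally follow the BOSH DNS query form q-FILTERS.INSTANCE-GROUP.NETWORK.DEPLOYMENT.bosh. The values below are assumptions; ODB names service instance deployments service-instance_GUID:

# the leading q- label encodes link query filters such as azs and status
leader-address: q-s0.leader-node.default.service-instance_GUID.bosh
follower-address: q-s0.follower-node.default.service-instance_GUID.bosh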
(Optional) Enable Telemetry
The telemetry program enables Pivotal to collect data from customer installations to improve your
enterprise experience.
Collecting data at scale enables Pivotal to identify patterns and alert you to warning signals in
your installation.
For more information about the telemetry program, see Telemetry.
To enable your broker to send telemetry data, add the following to your deployment manifest:
instance_groups:
- name: broker
  jobs:
  - name: broker
    properties:
      # ...
      enable_telemetry: true
About Broker Startup Checks
The ODB performs the following startup checks:
It verifies that the CF and BOSH versions satisfy the minimum versions required. If your service offering includes lifecycle errands, the minimum required version for BOSH is higher. For more information, see Configure Your BOSH Director above.
If your system does not meet the minimum requirements, you see an insufficient version error. For example:

CF API error: Cloud Foundry API version is insufficient, ODB requires CF v238+.
It verifies that, for the service offering, no plan IDs have changed for plans that have existing service instances. If a plan ID has changed for such a plan, you see the following error:
You cannot change the plan_id of a plan that has existing service instances.
About Broker Shutdown
The broker tries to wait for any incomplete requests to complete before shutting down. This reduces the risk of leaving orphan deployments if the BOSH Director does not respond to the initial bosh deploy request. You can set how long the broker waits before it is forced to shut down by using the shutdown_timeout_in_seconds property in the manifest. The default is 60 seconds. For more information, see Write a Broker Manifest above.
About ODB and BOSH Process Manager (bpm)
Starting in ODB v0.27.0, the broker binary adopted BOSH Process Manager (bpm) for better job isolation and security. Starting in ODB v0.30.0, all broker management errands also use bpm. For more information, see bpm in the BOSH documentation.
For bpm to work with a broker:
- The bpm release must be included in the broker job. An example of this configuration can be found at the top of the manifest snippet in Starter Snippet for Your Broker.
- Because bpm restricts access to the current job, the broker needs to signify to bpm that it needs access to the service adapter config. For this, the broker job’s service_adapter configuration must specify the mount_paths to the service adapter. An example of this configuration can be found at the bottom of the manifest snippet in Starter Snippet for Your Broker.
For broker management errands that are not co-located with the broker, the bpm release must be included in each errand job.
Service Instance Lifecycle Errands
Note: This feature requires BOSH Director v261 or later.
Service instance lifecycle errands allow additional short-lived jobs to run as part of service instance deployment. A deployment is only considered successful if all lifecycle errands exit successfully.
The service adapter must offer the errands as part of the service instance deployment.
ODB supports the following lifecycle errands:
- post_deploy runs after creating or updating a service instance. An example use case is running a health check to ensure the service instance is functioning. For more information about the workflow, see Create or Update Service Instance with Post-Deploy Errands.
- pre_delete runs before the deletion of a service instance. An example use case is cleaning up data before a service shutdown. For more information about the workflow, see Delete a Service Instance with Pre-Delete Errands.
Enable Service Instance Lifecycle Errands
Service instance lifecycle errands are configured on a per-plan basis. Lifecycle errands do not run if you change a plan’s lifecycle errand configuration while an existing deployment is in progress.
To enable lifecycle errands, add each errand job in the following places in the manifest:

- service_deployment
- The plan’s lifecycle_errands configuration
- The plan’s instance_groups
Below is an example manifest snippet that configures lifecycle errands for a plan:
service_deployment:
releases:
- name: SERVICE-RELEASE
version: SERVICE-RELEASE-VERSION
jobs:
- SERVICE-RELEASE-JOB
- POST-DEPLOY-ERRAND-JOB
- PRE-DELETE-ERRAND-JOB
- ANOTHER-POST-DEPLOY-ERRAND-JOB
service_catalog:
plans:
- name: CF-MARKETPLACE-PLAN-NAME
lifecycle_errands:
post_deploy:
- name: POST-DEPLOY-ERRAND-JOB
- name: ANOTHER-POST-DEPLOY-ERRAND-JOB
disabled: true
pre_delete:
- name: PRE-DELETE-ERRAND-JOB
instance_groups:
- name: SERVICE-RELEASE-JOB
...
- name: POST-DEPLOY-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
- name: ANOTHER-POST-DEPLOY-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
- name: PRE-DELETE-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
Where POST-DEPLOY-ERRAND-JOB is the errand job you want to add.
(Optional) Enable Co-located Errands
Note: This feature requires BOSH Director v263 or later.
You can run both post_deploy and pre_delete errands as co-located errands. Co-located errands run on an existing service instance group instead of a separate one, which avoids allocating additional resources. Like other lifecycle errands, co-located errands are deployed on a per-plan basis. Currently, ODB supports co-locating only the post_deploy and pre_delete errands. For more information, see Errands in the BOSH documentation.
To enable co-located errands for a plan, add each co-located errand job to the manifest as follows:

- Add the errand in service_deployment.
- Add the errand in the plan’s lifecycle_errands configuration.
- Set the instances the errand runs on in the lifecycle_errands configuration.
Below is an example manifest that includes a co-located post-deploy errand:
service_deployment:
releases:
- name: SERVICE-RELEASE
version: SERVICE-RELEASE-VERSION
jobs:
- SERVICE-RELEASE-JOB
- CO-LOCATED-POST-DEPLOY-ERRAND-JOB
service_catalog:
plans:
- name: CF-MARKETPLACE-PLAN-NAME
lifecycle_errands:
post_deploy:
- name: CO-LOCATED-POST-DEPLOY-ERRAND-JOB
instances:
- SERVICE-RELEASE-JOB/0
- name: NON-CO-LOCATED-POST-DEPLOY-ERRAND
instance_groups:
- name: NON-CO-LOCATED-POST-DEPLOY-ERRAND
...
- name: SERVICE-RELEASE-JOB
...
Where CO-LOCATED-POST-DEPLOY-ERRAND-JOB is the co-located errand you want to run and SERVICE-RELEASE-JOB/0 is the instance you want the errand to run on.