
Creating an On-Demand Service Tile

This document describes the process for deploying an on-demand broker (ODB) with a service in a single tile, on an AWS installation of Ops Manager 1.8. We have built a reference Kafka tile.

Requirements

Before ODB, Ops Manager controlled IP allocation on all of the private networks. When you use ODB in a tile, you will need at least two private networks:

  • a network where Ops Manager will deploy the ODB VM, and
  • a different network where the ODB will deploy service instance VMs.

The network for service instances should be flagged as a Service Network in Ops Manager.
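
For example, an illustrative layout (the subnet names and CIDRs here are placeholders, not requirements) could be:

  • pcf-management, e.g. 10.0.16.0/28: the network where Ops Manager deploys the ODB VM
  • pcf-services, e.g. 10.0.8.0/24: flagged as a Service Network, where the ODB deploys service instance VMs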

Deploying Ops Manager to AWS

  1. Follow the default Ops Manager deployment docs, but with these modifications:
    1. Create a self-signed wildcard SSL certificate for a domain you control. This is often *.some-subdomain.cf-app.com.
    2. Upload the SSL cert (along with the associated private key) to AWS by following these instructions (see the example commands after this list).
    3. Download the CloudFormation JSON and save it in the Ops Manager directory.
    4. Run the CloudFormation stack. Save any pertinent inputs (for example, BOSH DB credentials) you type into the web console into the Ops Manager directory.
    5. Launch an instance of the AMI. If possible, use an elastic IP so you can keep the same DNS record even if you must recreate the VM. Failing that, auto-assign a public IP.
    6. Create a DNS record for pcf.YOUR-WILDCARD-DOMAIN. Following the earlier example, the record would be for pcf.some-subdomain.cf-app.com. Point the record to the public IP of the Ops Manager VM.
  2. Log into Ops Manager and save the credentials.
  3. Configure the Ops Manager Director (BOSH) tile.
  4. Click “Apply Changes”, and save the BOSH init manifest for future reference:

     scp -i private_key.pem ubuntu@opsmanIP:/var/tempest/workspaces/default/deployments/bosh.yml bosh.yml
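
As an illustration of steps 1.1 and 1.2, a self-signed wildcard certificate could be generated and uploaded with commands along these lines (the domain, file names, and certificate name are placeholders):

# generate a self-signed wildcard certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout wildcard.key -out wildcard.crt \
  -subj "/CN=*.some-subdomain.cf-app.com"

# upload the certificate and its private key to AWS IAM
aws iam upload-server-certificate \
  --server-certificate-name some-subdomain-wildcard \
  --certificate-body file://wildcard.crt \
  --private-key file://wildcard.key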

Deployment Configuration Tips

  1. The ELBs created by CloudFormation are both for CF, not Ops Manager. One of them will be configured with your wildcard certificate. This takes the place of HAProxy in AWS PCF deployments, and is therefore not used until you deploy the ERT tile.
  2. To target the Ops Manager Director from the Ops Manager VM, use:

     bosh --ca-cert /var/tempest/workspaces/default/root_ca_certificate target 10.0.16.10
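
     After targeting, you can authenticate with the Director credentials shown in the Ops Manager Director tile’s Credentials tab, for example:

     bosh login
     # enter the director username and password when prompted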

Build a Tile

Follow the default build your own product tile documentation and configure handcraft.yml with the accessors listed below. To access the $self accessors, the service_broker flag in handcraft.yml must be set to true.

Note: If you are publishing a tile to be consumed by Ops Manager 1.8.x or 1.9.x, you will need to build your tile using releases with SHA-1 internal checksums. ODB releases are published using SHA-2 internal checksums. You can convert these releases to use SHA-1 internal checksums using the BOSH CLI command sha1ify-release.
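
For example, using the BOSH CLI v2 (the release tarball name here is illustrative):

bosh sha1ify-release on-demand-service-broker-0.17.2.tgz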

Non-Exhaustive Accessors Reference

Ops Manager Director

These accessors provide fields relating to the BOSH Director installation present in Ops Manager.

  • $director.hostname: The Ops Manager Director’s hostname or IP address
  • $director.ca_public_key: The Ops Manager Director’s root CA certificate. For more information, see How to configure SSL certificates for the ODB.

For example:

bosh:
  url: https://(( $director.hostname )):25555
  root_ca_cert: (( $director.ca_public_key ))

Self

These accessors are used to provide fields that belong to the specific tile (in this case, the broker tile).

  • $self.uaa_client_name: Name of the UAA client that can authenticate with the Ops Manager Director
  • $self.uaa_client_secret: Secret of the UAA client that can authenticate with the Ops Manager Director
  • $self.service_network: Service network configured for the on-demand instances

You must create the service network manually. Create a subnet on AWS and then add it to the Director by configuring the Director tile. Configuration options are in the tile, under Create Networks > ADD network.
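
The subnet itself can be created with the AWS CLI, for example (the VPC ID, CIDR block, and availability zone are placeholders):

aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.8.0/24 --availability-zone us-east-1a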

$self accessors are enabled by setting service_broker: true at the top level of handcraft.yml.
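
A minimal sketch of the relevant top-level keys in handcraft.yml (the product name and version are illustrative):

---
name: example-on-demand-service
product_version: 1.0.0
service_broker: true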

Note: Setting `service_broker: true` will cause a redeployment of the BOSH director when installing or uninstalling the tile.

For example:

bosh:
  authentication:
    uaa:
      url: https://(( $director.hostname )):8443
      client_id: (( $self.uaa_client_name ))
      client_secret: (( $self.uaa_client_secret ))

Cloud Foundry (Elastic Runtime)

These accessors are used to provide fields from the Elastic Runtime Tile (Cloud Foundry) present in the Ops Manager installation.

  • ..cf.ha_proxy.skip_cert_verify.value: Flag to skip SSL certificate verification for connections to the CF API
  • ..cf.cloud_controller.apps_domain.value: The application domain configured in the CF installation
  • ..cf.cloud_controller.system_domain.value: The system domain configured in the CF installation
  • ..cf.uaa.system_services_credentials.identity: Username of a CF user in the cloud_controller.admin group, to be used by services
  • ..cf.uaa.system_services_credentials.password: Password of a CF user in the cloud_controller.admin group, to be used by services

For example:

disable_ssl_cert_verification: (( ..cf.ha_proxy.skip_cert_verify.value ))
cf:
  url: https://api.(( ..cf.cloud_controller.system_domain.value ))
  authentication:
    url: https://uaa.(( ..cf.cloud_controller.system_domain.value ))
    user_credentials:
      username: (( ..cf.uaa.system_services_credentials.identity ))
      password: (( ..cf.uaa.system_services_credentials.password ))

Reference

For more accessors, see the ops-manager-example product.

Public IP address for on-demand service instance groups

Ops Manager 1.9 RC1+ provides a VM extension called public_ip in the BOSH Director’s cloud config. This can be used in the ODB’s manifest to give instance groups a public IP address. This IP is used only for outgoing traffic to the internet from VMs with the public_ip extension. All internal traffic and incoming connections must use the private IP.

Here is an example showing how to allow operators to assign a public IP address to an on-demand service instance group in the tile’s handcraft.yml:

form_types:
- name: example_form
  property_inputs:
  - reference: .broker.example_vm_extensions
    label: VM options
    description: List of VM options for Service Instances

job_types:
- name: broker
  templates:
  - name: broker
    release: on-demand-service-broker
    manifest: |
      service_catalog:
        plans:
        - name: example-plan
          instance_groups:
          - name: example-instance-group
            vm_extensions: (( .broker.example_vm_extensions.value ))
  property_blueprints:
  - name: example_vm_extensions
    type: multi_select_options
    configurable: true
    optional: true
    options:
    - name: "public_ip"
      label: "Internet Connected VMs (on supported IaaS providers)"

Floating stemcells

Ops Manager provides a feature called Floating Stemcells that allows PCF to quickly propagate a patched stemcell to all VMs in the deployment that have the same compatible stemcell. Both the broker deployment and the service instances deployed by the On-Demand Broker can make use of this feature. Enabling this feature can help ensure that all of your service instances are patched to the latest stemcell.

In order for the service instances to be installed automatically with the latest stemcell, you will need to make sure the upgrade-all-service-instances errand is ticked.

Here is an example of how to implement floating stemcells in handcraft.yml:

job_types:
- name: broker
  templates:
  - name: broker
    manifest: |
      service_deployment:
        releases:
        - name: release-name
          version: 1.0.0
          jobs: [job_server]
        stemcell:
          os: ubuntu-trusty
          version: (( $self.stemcell_version ))

Here is an example of how to configure the stemcell_criteria in binaries.yml:

---
name: example-on-demand-service
product_version: 1.0.0
stemcell_criteria:
  os: ubuntu-trusty
  version: '3312'
  enable_patch_security_updates: true

Note: Configuring enable_patch_security_updates to false will disable this feature.

On-Demand Broker errands

In the reference Kafka tile, you can see the ODB release’s errands in use.

Specify the errands in the following order, as shown in the example Kafka tile:

Post-deploy:

  • register-broker
  • upgrade-all-service-instances

Pre-delete:

  • delete-all-service-instances-and-deregister-broker

These errands are documented in the operating section.
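
In handcraft.yml, these errands can be declared with the standard post_deploy_errands and pre_delete_errands keys; a minimal sketch, assuming the errand job names above, might look like:

post_deploy_errands:
- name: register-broker
- name: upgrade-all-service-instances

pre_delete_errands:
- name: delete-all-service-instances-and-deregister-broker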
