
Extending a Pipeline to Install a Product

This how-to guide will teach you how to add a product to an existing pipeline: downloading the product from Tanzu Network, extracting configuration, and installing the configured product. If you don't already have an Ops Manager and deployed Director, check out Installing Ops Manager and Deploying the Director, respectively.

Prerequisites

  1. A pipeline, such as one created in Installing Ops Manager or Upgrading an Existing Ops Manager.
  2. A fully configured Ops Manager and Director.
  3. The Platform Automation Toolkit Docker Image imported and ready to run.
  4. A glob pattern uniquely matching one product file on Tanzu Network.

Assumptions About Your Existing Pipeline

This guide assumes you're working from one of the pipelines created in previous guides, but you don't have to have exactly that pipeline. If your pipeline is different, though, you may run into trouble with some of our assumptions.

We assume:

  • Resource declarations for configuration, platform-automation-image and platform-automation-tasks.
  • A pivnet token stored in Credhub as a credential named pivnet_token.
  • A previous job responsible for deploying the director called apply-director-changes.
  • You have created an env.yml from the Configuring Env how-to guide. This file exists in the configuration resource.
  • You have a fly target named control-plane with an existing pipeline called foundation.
  • You have a source control repo that contains the foundation pipeline's pipeline.yml.

You should be able to use the pipeline YAML in this document with any pipeline, as long as you make sure the above names match up with what's in your pipeline, either by changing the example YAML or your pipeline.
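As a rough sketch (not a definitive layout), the assumed resource declarations might look like the following; the repository URI, bucket name, and regexps are placeholders for your own values:

```yaml
resources:
# Assumed git repository containing env.yml, download-tas.yml, and pipeline.yml
- name: configuration
  type: git
  source:
    uri: git@github.com:your-org/your-foundation-config.git  # placeholder
    branch: main
# Assumed blobstore holding the Platform Automation Toolkit artifacts
- name: platform-automation-image
  type: s3
  source:
    bucket: your-platform-automation-bucket  # placeholder
    regexp: platform-automation-image-(.*).tgz
- name: platform-automation-tasks
  type: s3
  source:
    bucket: your-platform-automation-bucket  # placeholder
    regexp: platform-automation-tasks-(.*).zip
```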

Download, Upload, and Stage Product to Ops Manager

For this guide, we're going to add the TAS product.

Download

Before setting the pipeline, we will have to create a config file for download-product in order to download TAS from Tanzu Network.

Create a download-tas.yml.

---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: aws # set to match your IaaS: aws, azure, google, openstack, or vsphere
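If you want to sanity-check the file glob before committing, you can simulate the match locally with shell globbing. The filename below is a made-up example, not a real Tanzu Network artifact name:

```shell
# Hypothetical product filename; real names come from Tanzu Network.
file="srt-2.9.1-build.1.pivotal"

# Emulate the pivnet-file-glob match with a shell case pattern.
case "$file" in
  *srt*.pivotal) echo "glob matches: $file" ;;
  *)             echo "no match" ;;
esac
```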

Add and commit this file to the same directory used in the previous guides. This file should be accessible from the configuration resource.

git add download-tas.yml
git commit -m "Add download-tas file for foundation"
git push

Now that we have a config file, we can add a new download-upload-and-stage-tas job to your pipeline.yml.

jobs: # Do not duplicate this if it already exists in your pipeline.yml,
      # just add the following lines to the jobs section
- name: download-upload-and-stage-tas
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: download-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/download-product.yml
      input_mapping:
        config: configuration
      params:
        CONFIG_FILE: download-tas.yml
      output_mapping:
        downloaded-product: tas-product
        downloaded-stemcell: tas-stemcell

Now that we have a runnable job, let's make a commit:

git add pipeline.yml
git commit -m 'download tas and its stemcell'

Then we can set the pipeline:

fly -t control-plane set-pipeline -p foundation -c pipeline.yml

If the pipeline sets without errors, run a git push of the config.

If fly set-pipeline returns an error

Fix any and all errors until the pipeline can be set. When the pipeline can be set properly, run

git add pipeline.yml
git commit --amend --no-edit
git push

Testing Your Pipeline

We generally want to try things out right away to see if they're working right. However, in this case, if you have a very slow internet connection and/or multiple Concourse workers, you might want to hold off until we've got the job doing more, so that if it works, you don't have to wait for the download again.

Upload and Stage

We have a product downloaded and (potentially) cached on a Concourse worker. The next step is to upload and stage that product to Ops Manager.

jobs:
- name: download-upload-and-stage-tas
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: download-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/download-product.yml
      input_mapping:
        config: configuration
      params:
        CONFIG_FILE: download-tas.yml
      output_mapping:
        downloaded-product: tas-product
        downloaded-stemcell: tas-stemcell
    - task: upload-tas-stemcell
      image: platform-automation-image
      file: platform-automation-tasks/tasks/upload-stemcell.yml
      input_mapping:
        env: configuration
        stemcell: tas-stemcell
      params:
        ENV_FILE: env.yml
    - task: upload-and-stage-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/upload-and-stage-product.yml
      input_mapping:
        product: tas-product
        env: configuration

Then we can re-set the pipeline:

fly -t control-plane set-pipeline -p foundation -c pipeline.yml

and if all is well, make a commit and push:

git add pipeline.yml
git commit -m 'upload tas and stemcell to Ops Manager'
git push

Product Configuration

Before automating the configuration and install of the product, we need a config file. The simplest way is to choose your config options in the Ops Manager UI, then pull its resulting configuration.

Advanced Tile Config Option

For an alternative that generates the configuration from the product file, using ops files to select options, see the Config Template section.

Pulling Configuration from Ops Manager

Configure the product manually according to the product's install instructions. This guide installs TAS. Install instructions for other products may be found in the VMware Tanzu documentation.

Once the product is fully configured, do not apply changes; continue with this guide. (If you did apply changes, that's fine; it will just take a little longer.)

om has a command called staged-config, which extracts the configuration of a staged product from Ops Manager. om requires an env.yml, which we already used in the upload-and-stage task.

Most products will contain the following top-level keys:

  • network-properties
  • product-properties
  • resource-config

The command can be run directly using Docker. We'll need to download the image to our local workstation, import it into Docker, and then run staged-config for the Tanzu Application Service product. For more information on Running Commands Locally, see the corresponding How-to Guide.

After the image has been downloaded from Tanzu Network, we'll need the product name recognized by Ops Manager. This can be found using om, but first we should import the image:

export ENV_FILE=env.yml
docker import ${PLATFORM_AUTOMATION_IMAGE_TGZ} platform-automation-image

Then, we can run om staged-products to find the name of the product in Ops Manager.

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
om --env ${ENV_FILE} staged-products

The result should be a table that looks like the following:

+---------------------------+-----------------+
|           NAME            |     VERSION     |
+---------------------------+-----------------+
| cf                        | <VERSION>       |
| p-bosh                    | <VERSION>       |
+---------------------------+-----------------+

p-bosh is the name of the director. As cf is the only other product on our Ops Manager, we can safely assume that this is the product name for TAS.

Using the product name cf, let's extract the current configuration from Ops Manager.

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
om --env ${ENV_FILE} staged-config --include-credentials --product-name cf > tas-config.yml

We have a configuration file for our tile ready to back up! Almost. There are a few more steps required before we're ready to commit.

Parameterizing the Config

Look through your tas-config.yml for any sensitive values. These values should be ((parameterized)) and saved off in a secrets store (in this example, we'll use Credhub).

You should still be logged in to Credhub. If not, log in. Be sure to note the space at the beginning of the line; this ensures your valuable secrets are not saved in terminal history.

# note the starting space
 credhub login --server example.com \
    --client-name your-client-id \
    --client-secret your-client-secret

Logging in to credhub

Depending on your credential type, you may need to pass client-id and client-secret, as we do above, or username and password. We use the client approach because that's the credential type that automation should usually be working with. Nominally, a username represents a person, and a client represents a system; this isn't always exactly how things are in practice. Use whichever type of credential you have in your case. Note that if you exclude either set of flags, Credhub will interactively prompt for username and password, and hide the characters of your password when you type them. This method of entry can be better in some situations.

An example list of some sensitive values from our tas-config.yml follows; note that this is intentionally incomplete.

product-properties:
  .properties.cloud_controller.encrypt_key:
    value:
      secret: my-super-secure-secret
  .properties.networking_poe_ssl_certs:
    value:
    - certificate:
        cert_pem: |-
          -----BEGIN CERTIFICATE-----
          my-cert
          -----END CERTIFICATE-----
        private_key_pem: |-
          -----BEGIN RSA PRIVATE KEY-----
          my-private-key
          -----END RSA PRIVATE KEY-----
      name: certificate
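To help locate candidate secrets in a large config, a rough grep sketch can flag likely key names; the pattern below covers only the keys from the example above, so extend it as needed:

```shell
# In a scratch directory, recreate the example snippet so the command below
# is self-contained; normally you would grep your real tas-config.yml.
cat > tas-config.yml <<'EOF'
product-properties:
  .properties.cloud_controller.encrypt_key:
    value:
      secret: my-super-secure-secret
EOF

# Flag lines whose keys commonly hold sensitive material.
grep -nE 'secret|password|_key|cert_pem|private_key_pem' tas-config.yml
```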

We'll start with the Cloud Controller encrypt key. As this is a value that you might wish to rotate at some point, we're going to store it off as a password type into Credhub.

# note the starting space
 credhub set \
   --name /concourse/your-team-name/cloud_controller_encrypt_key \
   --type password \
   --password my-super-secure-secret

To validate that we set this correctly, we should run:

# no need for an extra space
credhub get --name /concourse/your-team-name/cloud_controller_encrypt_key

and expect an output like

id: <guid>
name: /concourse/your-team-name/cloud_controller_encrypt_key
type: password
value: my-super-secure-secret
version_created_at: "<timestamp>"

We are then going to store the Networking POE certs as an rsa type in Credhub. But first, we're going to save the certificate and private key as plain-text files to simplify this process. We named these files poe-cert.txt and poe-private-key.txt. There should be no formatting or indentation in these files, only new lines.
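A hypothetical sketch of creating those two files with heredocs; the PEM bodies are the placeholders from the earlier example, so substitute your real certificate and key:

```shell
# Write the certificate exactly as it should appear: no indentation, only newlines.
cat > poe-cert.txt <<'EOF'
-----BEGIN CERTIFICATE-----
my-cert
-----END CERTIFICATE-----
EOF

# Same for the private key.
cat > poe-private-key.txt <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
my-private-key
-----END RSA PRIVATE KEY-----
EOF
```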

# note the starting space
 credhub set \
   --name /concourse/your-team-name/networking_poe_ssl_certs \
   --type rsa \
   --public poe-cert.txt \
   --private poe-private-key.txt

And again, we're going to validate that we set this correctly

# no need for an extra space
credhub get --name /concourse/your-team-name/networking_poe_ssl_certs

and expect an output like

id: <guid>
name: /concourse/your-team-name/networking_poe_ssl_certs
type: rsa
value:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    my-private-key
    -----END RSA PRIVATE KEY-----
  public_key: |
    -----BEGIN CERTIFICATE-----
    my-cert
    -----END CERTIFICATE-----
version_created_at: "<timestamp>"

Remove Credentials from Disk

Once we've validated that the certs are set correctly in Credhub, remember to delete poe-cert.txt and poe-private-key.txt from your working directory. This will prevent a potential security leak, or an accidental commit of those credentials.

Repeat this process for all sensitive values found in your tas-config.yml.

Once completed, we can remove those secrets from tas-config.yml and replace them with ((parameterized-values)). The parameterized value name should match the name in Credhub. For our example, we parameterized the config like:

product-properties:
  .properties.cloud_controller.encrypt_key:
    value:
      secret: ((cloud_controller_encrypt_key))
  .properties.networking_poe_ssl_certs:
    value:
    - certificate:
        cert_pem: ((networking_poe_ssl_certs.public_key))
        private_key_pem: ((networking_poe_ssl_certs.private_key))
      name: certificate

Once your tas-config.yml is parameterized to your liking, we can finally commit the config file.

git add tas-config.yml
git commit -m "Add tas-config file for foundation"
git push

Configure and Apply

With the hard part out of the way, we can now configure the product and apply changes.

First, we need to update the pipeline to have a configure-product step.

jobs:
- name: download-upload-and-stage-tas
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: download-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/download-product.yml
      input_mapping:
        config: configuration
      params:
        CONFIG_FILE: download-tas.yml
      output_mapping:
        downloaded-product: tas-product
        downloaded-stemcell: tas-stemcell
    - task: upload-tas-stemcell
      image: platform-automation-image
      file: platform-automation-tasks/tasks/upload-stemcell.yml
      input_mapping:
        env: configuration
        stemcell: tas-stemcell
      params:
        ENV_FILE: env.yml
    - task: upload-and-stage-tas
      image: platform-automation-image
  file: platform-automation-tasks/tasks/upload-and-stage-product.yml
      input_mapping:
        product: tas-product
        env: configuration
- name: configure-tas
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        passed: [download-upload-and-stage-tas]
        trigger: true
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
        passed: [download-upload-and-stage-tas]
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: configure-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/configure-product.yml
      input_mapping:
        config: configuration
        env: configuration
      params:
        CONFIG_FILE: tas-config.yml

This new job will configure the TAS product with the config file we previously created.

Next, we need to add an apply-changes job so that these changes will be applied by the Ops Manager.

- name: configure-tas
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        trigger: true
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
        passed: [download-upload-and-stage-tas]
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: configure-tas
      image: platform-automation-image
      file: platform-automation-tasks/tasks/configure-product.yml
      input_mapping:
        config: configuration
        env: configuration
      params:
        CONFIG_FILE: tas-config.yml
- name: apply-changes
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
        passed: [configure-tas]
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: apply-changes
      image: platform-automation-image
      file: platform-automation-tasks/tasks/apply-changes.yml
      input_mapping:
        env: configuration

Adding Multiple Products

When adding multiple products, you can add the configure jobs as passed constraints to the apply-changes job so that they all are applied at once. Ops Manager will handle any inter-product dependency ordering. This will speed up your apply changes when compared with running an apply changes for each product separately.

Example: passed: [configure-tas, configure-tas-windows, configure-healthwatch]
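As a sketch, the apply-changes job's configuration get step would then carry all of the configure jobs as passed constraints (configure-tas-windows and configure-healthwatch are hypothetical job names):

```yaml
- name: apply-changes
  serial: true
  plan:
    - aggregate:
      - get: platform-automation-image
        params:
          unpack: true
      - get: platform-automation-tasks
        params:
          unpack: true
      - get: configuration
        passed: [configure-tas, configure-tas-windows, configure-healthwatch]
        trigger: true
```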

Set the pipeline one final time, run the job, and see it pass.

fly -t control-plane set-pipeline -p foundation -c pipeline.yml

Commit the final changes to your repository.

git add pipeline.yml
git commit -m "configure-tas and apply-changes"
git push

You have now successfully added a product to your automation pipeline.

Advanced Concepts

Config Template

An alternative to the staged-config workflow outlined in the how-to guide is config-template.

config-template is an om command that creates a base config file with optional ops files from a given tile or pivnet slug.

This section will assume TAS, like the how-to guide above.

Experimental Feature

Please see the release notes for more information about the config-template command. While the command was marked as experimental, feature changes to it did not affect the semver of the Platform Automation Toolkit.

As of Platform Automation Toolkit v5.0.0, config-template is no longer experimental.

Generate the Config Template Directory

# note the leading space
 export PIVNET_API_TOKEN='your-vmware-tanzu-network-api-token'
docker run -it -v $HOME/configs:/configs platform-automation-image \
om config-template \
  --output-directory /configs/ \
  --pivnet-api-token "${PIVNET_API_TOKEN}" \
  --pivnet-product-slug elastic-runtime \
  --product-version '2.5.0' \
  --product-file-glob 'cf*.pivotal' # Only necessary if the product has multiple .pivotal files

This will create or update a directory at $HOME/configs/cf/2.5.0/.

cd into the directory to get started creating your config.
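Based on the files referenced later in this guide, the generated directory looks roughly like this (exact contents vary by product and version):

```
$HOME/configs/cf/2.5.0/
├── product.yml
├── product-default-vars.yml
├── resource-vars.yml
├── errand-vars.yml
├── features/
├── network/
├── optional/
└── resource/
```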

Interpolate a Config

The directory will contain a product.yml file. This is the template for the product configuration you're about to build. Open it in an editor of your choice and get familiar with what's in there. The values will be variables intended to be interpolated from other sources, designated with the (()) syntax.

You can find the value for any property with a default in the product-default-vars.yml file. This file serves as a good example of a variable source. You'll need to create a vars file of your own for variables without default values. For the base template, you can get a list of required variables by running

docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
  --config product.yml \
  -l product-default-vars.yml \
  -l resource-vars.yml \
  -l errand-vars.yml

Put all those vars in a file and give them the appropriate values. Once you've included all the variables, the output will be the finished template. For the rest of this guide, we will refer to these vars as required-vars.yml.

There may be situations that call for splitting your vars across multiple files. This can be useful if there are vars that need to be interpolated when you apply the configuration, rather than when you create the final template. You might consider creating a separate vars file for each of the following cases:

  • credentials (these vars can then be persisted separately/securely)
  • foundation-specific variables when using the same template for multiple foundations
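For example, the split might look like this; the variable names are taken from this guide's examples, and the values are placeholders:

```yaml
# credential-vars.yml - store securely (for example, in Credhub), never in git
cloud_controller_encrypt_key: a-secret-value

# foundation-one-vars.yml - safe to check in alongside the template
cloud_controller_apps_domain: apps.foundation-one.example.com
```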

You can use the --skip-missing flag when creating your final template using om interpolate to leave such vars to be rendered later.

If you're having trouble figuring out what the values should be, here are some approaches you can use:

  • Look in the template where the variable appears for some additional context of its value.
  • Look at the tile's online documentation
  • Upload the tile to an Ops Manager and visit the tile in the Ops Manager UI to see if that provides any hints.

    If you are still struggling, inspecting the html of the Ops Manager webpage can more accurately map the value names to the associated UI element.

When Using The Ops Manager Docs and UI

Be aware that the field names in the UI do not necessarily map directly to property names.

Optional Features

The above process will get you a default installation, with no optional features or variables, that is entirely deployed in a single Availability Zone (AZ).

In order to provide non-required variables, use multiple AZs, or make non-default selections for some options, you'll need to use some of the ops files in one of the following four directories:

  • features allows the enabling of selectors for a product. For example, enabling/disabling of an s3 bucket.
  • network contains options for enabling 2-3 availability zones for network configuration.
  • optional contains optional properties without defaults. For optional values that can be provided more than once, there's an ops file for each param count.
  • resource contains configuration that can be applied to resource configuration. For example, BOSH VM extensions.

For more information on BOSH VM Extensions, refer to the Creating a Director Config File How-to Guide.

To use an ops file, add -o with the path to the ops file you want to use to your interpolate command.

So, to enable TCP routing in Tanzu Application Service, you would add -o features/tcp_routing-enable.yml. For the rest of this guide, the vars for this feature are referred to as feature-vars.yml. If you run your complete command, you should again get a list of any newly-required variables.

docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
  --config product.yml \
  -l product-default-vars.yml \
  -l resource-vars.yml \
  -l required-vars.yml \
  -o features/tcp_routing-enable.yml \
  -l feature-vars.yml \
  -l errand-vars.yml

Finalize Your Configuration

Once you've selected your ops files and created your vars files, decide which vars you want in the template and which you want to have interpolated later.

Create a final template and write it to a file, using only the vars you want to in the template, and using --skip-missing to allow the rest to remain as variables.

docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
  --config product.yml \
  -l product-default-vars.yml \
  -l resource-vars.yml \
  -l required-vars.yml \
  -o features/tcp_routing-enable.yml \
  -l feature-vars.yml \
  -l errand-vars.yml \
  --skip-missing \
  > pas-config-template.yml

You can check the resulting configuration in to a git repo. For vars that do not include credentials, you can check those vars files in as well. Handle vars that are secret more carefully.

You can then dispose of the config template directory.

Using Ops Files for Multi-Foundation

There are two recommended ways to support multiple foundation workflows: using secrets management or ops files. This section will explain how to support multiple foundations using ops files.

Starting with an incomplete Tanzu Application Service config from vSphere as an example:

# base.yml
# An incomplete yaml response from om staged-config
product-name: cf

product-properties:
  .cloud_controller.apps_domain:
    value: ((cloud_controller_apps_domain))
  .cloud_controller.encrypt_key:
    value:
      secret: ((cloud_controller_encrypt_key.secret))
  .properties.security_acknowledgement:
    value: X
  .properties.cloud_controller_default_stack:
    value: default

network-properties:
  network:
    name: DEPLOYMENT
  other_availability_zones:
  - name: AZ01
  singleton_availability_zone:
    name: AZ01

resource-config:
  diego_cell:
    instances: 5
    instance_type:
      id: automatic
  uaa:
    instances: 1
    instance_type:
      id: automatic

For a single-foundation deploy, leaving values such as .cloud_controller.apps_domain as-is would work fine. For multiple foundations, this value will differ per deployed foundation. Other values, such as .cloud_controller.encrypt_key, have a secret that already has a placeholder from om. If different foundations have different load requirements, even the values in resource-config can be edited using ops files.

Using the example above, let's try filling in the existing placeholder for cloud_controller.apps_domain in our first foundation.

# replace-domain-ops-file.yml
- type: replace
  path: /product-properties/.cloud_controller.apps_domain/value?
  value: unique.foundation.one.domain

To test that the ops file will work with your base.yml, run bosh int locally:

bosh int base.yml -o replace-domain-ops-file.yml

This will output base.yml with the replaced (interpolated) values:

# interpolated-base.yml
network-properties:
  network:
    name: DEPLOYMENT
  other_availability_zones:
  - name: AZ01
  singleton_availability_zone:
    name: AZ01
product-name: cf
product-properties:
  .cloud_controller.apps_domain:
    value: unique.foundation.one.domain
  .cloud_controller.encrypt_key:
    value:
      secret: ((cloud_controller_encrypt_key.secret))
  .properties.cloud_controller_default_stack:
    value: default
  .properties.security_acknowledgement:
    value: X
resource-config:
  diego_cell:
    instance_type:
      id: automatic
    instances: 5
  uaa:
    instance_type:
      id: automatic
    instances: 1

Anything that needs to be different per deployment can be replaced via ops files as long as the path: is correct.
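For example, a hypothetical ops file scaling Diego cells differently on a second foundation, using the resource-config path from base.yml above:

```yaml
# scale-diego-cells-ops-file.yml (hypothetical)
- type: replace
  path: /resource-config/diego_cell/instances
  value: 10
```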

Upgrading products to new patch versions:

  • Configuration settings should not differ between successive patch versions within the same minor version line. Underlying properties or property names may change, but the tile's upgrade process automatically translates properties to the new fields and values.
  • VMware cannot guarantee the functionality of upgrade scripts in third-party products.

Replicating configuration settings from one product to the same product on a different foundation:

  • Because properties and property names can change between patch versions of a product, you can only safely apply configuration settings across products if their versions exactly match.