Writing a Pipeline to Upgrade an Existing Ops Manager

This how-to-guide shows you how to create a pipeline for upgrading an existing Ops Manager VM. If you don't have an Ops Manager VM, check out Installing Ops Manager.

Prerequisites

Over the course of this guide, we're going to use Platform Automation Toolkit to create a pipeline using Concourse.

Before we get started, you'll need a few things ready to go:

  1. A running Ops Manager VM that you would like to upgrade

  2. Credentials for an IaaS that Ops Manager is compatible with

    • It doesn't actually matter what IaaS you use for Ops Manager, as long as your Concourse can connect to it. Pipelines built with Platform Automation Toolkit can be platform-agnostic.
  3. A Concourse instance with access to a Credhub instance and to the Internet
  4. GitHub account
  5. Read/write credentials and bucket name for an S3 bucket
  6. An account on VMware Tanzu Network
  7. A macOS workstation
    • with Docker installed
    • a text editor you like
    • a terminal emulator you like
    • a browser that works with Concourse, like Firefox or Chrome
    • and git

It will be very helpful to have a basic familiarity with the tools used throughout this guide (git, bash, YAML, and Concourse). If you don't, that's okay: we'll explain some basics and link to resources where you can learn more.

A note on the prerequisites

While this guide uses GitHub to provide a git remote and an S3 bucket as a blobstore, Platform Automation Toolkit supports arbitrary git providers and S3-compatible blobstores.

If you need to use an alternate one, that's okay.

We picked specific examples so we could describe some steps in detail. Some details may be different if you follow along with different providers. If you're comfortable navigating those differences on your own, go for it!

Check out our reference for using an S3-specific blobstore

Similarly, this guide assumes macOS. Everything should work on Linux too, but some paths may differ, and you'll need to adjust them yourself.

Creating a Concourse Pipeline

Platform Automation Toolkit's tasks and image are meant to be used in a Concourse pipeline. So, let's make one.

Using your bash command-line client, create a directory to keep your pipeline files in, and cd into it.

mkdir your-repo-name
cd !$

This repo name should relate to your situation and be specific enough that it's easy to find from your local workstation.

"!$"

!$ is a bash shortcut. Pronounced "bang, dollar-sign," it means "use the last argument from the most recent command." In this case, that's the directory we just created! This is not a Platform Automation Toolkit thing, this is just a bash tip dearly beloved of at least one Platform Automator.

Before we get started with the pipeline itself, we'll gather some variables in a file we can use throughout our pipeline.

Open your text editor and create vars.yml. Here's what it should look like to start; we'll add to it as we go:

platform-automation-bucket: your-bucket-name
credhub-server: https://your-credhub.example.com
opsman-url: https://pcf.foundation.example.com

Using a DNS

This example assumes that you're using DNS and hostnames. You can use IP addresses for all these resources instead, but you still need to provide the information as a URL, for example: https://120.121.123.124

Now, create a file called pipeline.yml.

Naming

We'll use pipeline.yml in our examples throughout this guide. However, you may create multiple pipelines over time. If there's a more sensible name for the pipeline you're working on, feel free to use that instead.

Write this at the top, and save the file. This is YAML for "the start of the document". It's optional, but traditional:

---

Now you have a pipeline file! Nominally! Well, look. It's valid YAML, at least.

Getting fly

Let's try to set it as a pipeline with fly, the Concourse command-line interface (CLI).

First, check if we've got fly installed at all:

fly -v

If it gives you back a version number, great! Skip ahead to Setting The Pipeline.

If it says something like -bash: fly: command not found, we have a little work to do: we've got to get fly.

Navigate to the address for your Concourse instance in a web browser. At this point, you don't even need to be signed in! If there are no public pipelines, you should see something like this:

Get Fly

If there are public pipelines, or if you're signed in and there are pipelines you can see, you'll see something similar in the lower right-hand corner.

Click the icon for your OS to download fly. Then mv the downloaded file to somewhere in your $PATH, and use chmod to make it executable:

A note on command-line examples

Some of these, you can copy-paste directly into your terminal. Some of them won't work that way, or even if they did, would require you to edit them to replace our example values with your actual values. We recommend you type all of the bash examples in by hand, substituting values, if necessary, as you go. Don't forget that you can often hit the tab key to auto-complete the name of files that already exist; it makes all that typing just a little easier, and serves as a sort of command-line autocorrect.

mv ~/Downloads/fly /usr/local/bin/fly
chmod +x !$

Congrats! You got fly.

Okay but what did I just do?

FAIR QUESTION. You downloaded the fly binary, moved it into your PATH (which is where bash looks for things to execute when you type a command), and then added permissions so that it can be executed. Now the CLI is installed, and we won't have to do all that again, because fly can update itself, which we'll get into later.
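
To double-check, you can ask your shell where it found fly and confirm that it prints a version; a quick sanity check, not a required step:

which fly
fly -v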

Setting The Pipeline

Okay now let's try to set our pipeline with fly, the Concourse CLI.

fly keeps a list of Concourses it knows how to talk to. Let's see if the Concourse we want is already on the list:

fly targets

If you see the address of the Concourse you want to use in the list, note down its name, and use it in the login command:

fly -t control-plane login

Control-plane?

We're going to use the name control-plane for our Concourse in this guide. It's not a special name, it just happens to be the name of the Concourse we want to use in our target list.

If you don't see the Concourse you need, you can add it with the -c (--concourse-url) flag:

fly -t control-plane login -c https://your-concourse.example.com

You should see a login link you can click on to complete login from your browser.

Stay on target

The -t flag sets the name when used with login and -c. In the future, you can leave out the -c argument.

If you ever want to know what a short flag stands for, you can run the command with -h (--help) at the end.
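
For example, to see everything login accepts:

fly login -h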

Pipeline-setting time! We'll use the name "foundation" for this pipeline, but if your foundation has an actual name, use that instead.

fly -t control-plane set-pipeline -p foundation -c pipeline.yml

It should say no changes to apply, which is fair, since we gave it an empty YAML doc.

Version discrepancy

If fly says something about a "version discrepancy," "significant" or otherwise, just do as it says: run fly sync and try again. fly sync automatically updates the CLI with the version that matches the Concourse you're targeting. Useful!
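
For example, with the target name used in this guide:

fly -t control-plane sync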

Your First Job

Let's see Concourse actually do something, yeah?

Add this to your pipeline.yml, starting on the line after the ---:

wait: no nevermind let's get version control first

Good point. Don't actually add that to your pipeline config yet. Or if you have, delete it, so your whole pipeline looks like this again:

---

Reverting edits to our pipeline is something we'll probably want to do again. This is one of many reasons we want to keep our pipeline under version control.

So let's make this directory a git repo!

But First, git init

Git Repository Layout

The following describes a step-by-step approach for how to get set up with git.

For an example of the repository file structure for single and multiple foundation systems, please reference Git Repository Layout.

Initialize the repo and make an empty initial commit; git should come back with information about the commit you just created:

git init
git commit --allow-empty -m "Empty initial commit"

If it gives you a config error instead, you might need to configure git a bit. Here's a good guide to initial setup. Get that done, and try again.
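
The usual culprit is a missing identity. A minimal setup looks something like this (substitute your own name and email):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"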

Now we can add our pipeline.yml, so in the future it's easy to get back to that soothing --- state.

git add pipeline.yml vars.yml
git commit -m "Add pipeline and starter vars"

Let's just make sure we're all tidy:

git status

git should come back with nothing to commit, working tree clean.

Great. Now we can safely make changes.

Git commits

git commits are the basic unit of code history.

Making frequent, small, commits with good commit messages makes it much easier to figure out why things are the way they are, and to return to the way things were in simpler, better times. Writing short commit messages that capture the intent of the change (in an imperative style) can be tough, but it really does make the pipeline's history much more legible, both to future-you, and to current-and-future teammates and collaborators.

The Test Task

Platform Automation Toolkit comes with a test task meant to validate that it's been installed correctly. Let's use it to get set up.

Add this to your pipeline.yml, starting on the line after the ---:

jobs:
- name: test
  plan:
    - task: test
      image: platform-automation-image
      file: platform-automation-tasks/tasks/test.yml

If we try to set this now, Concourse will take it:

fly -t control-plane set-pipeline -p foundation -c pipeline.yml

Now we should be able to see our pipeline in the Concourse UI. It'll be paused, so click the "play" button to unpause it. Then, click in to the gray box for our test job, and hit the "plus" button to schedule a build.
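
If you'd rather stay in the terminal, fly has equivalents for these UI actions; for example, assuming the target and pipeline names used above:

fly -t control-plane unpause-pipeline -p foundation
fly -t control-plane trigger-job -j foundation/test --watch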

It should error immediately, with unknown artifact source: platform-automation-tasks. We didn't give it a source for our task file.

We've got a bit of pipeline code that Concourse accepts. Before we start doing the next part, this would be a good moment to make a commit:

git add pipeline.yml
git commit -m "Add (nonfunctional) test task"

With that done, we can try to get the inputs we need by adding get steps to the plan before the task, like so:

jobs:
- name: test
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - task: test
      image: platform-automation-image
      file: platform-automation-tasks/tasks/test.yml

When using vSphere

There is a smaller vSphere container image available. To use it instead of the general purpose image, you can use this glob to get the image:

- get: platform-automation-image
  resource: platform-automation
  params:
    globs: ["vsphere-platform-automation-image*.tar.gz"]
    unpack: true

If we try to fly set this, fly will complain about invalid resources.

To actually make the image and file we want to use available, we'll need some Resources.

Adding Resources

Resources are Concourse's main approach to managing artifacts. We need an image and the tasks directory, so we'll tell Concourse how to get these things by declaring Resources for them.

In this case, we'll be downloading the image and the tasks directory from Tanzu Network. Before we can declare the resources themselves, we have to teach Concourse to talk to Tanzu Network. (Many resource types are built in, but this one isn't.)

Add the following to your pipeline file. We'll put it above the jobs entry.

resource_types:
- name: pivnet
  type: docker-image
  source:
    repository: pivotalcf/pivnet-resource
    tag: latest-final
resources:
- name: platform-automation
  type: pivnet
  source:
    product_slug: platform-automation
    api_token: ((pivnet-refresh-token))

The API token is a credential, which we'll pass via the command-line when setting the pipeline, so we don't accidentally check it in.

Grab a refresh token from your Tanzu Network profile (when logged in, click your username, then Edit Profile, then "Request New Refresh Token"). Then use that token in the following command:

Keep it secret, keep it safe

Bash commands that start with a space character are not saved in your history. This can be very useful for cases like this, where you want to pass a secret, but don't want it saved. Commands in this guide that contain a secret start with a space, which can be easy to miss.
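
Whether bash actually skips these commands depends on your history settings; you can check that HISTCONTROL includes ignorespace or ignoreboth:

echo $HISTCONTROL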

# note the space before the command
 fly -t control-plane set-pipeline \
     -p foundation \
     -c pipeline.yml \
     -v pivnet-refresh-token=your-api-token

Warning

When you get your Tanzu Network token as described above, any previous Tanzu Network tokens you may have gotten will stop working. If you're using your Tanzu Network refresh token anywhere, retrieve it from your existing secret storage rather than getting a new one, or you'll end up needing to update it everywhere it's used.

Go back to the Concourse UI and trigger another build. This time, it should pass.

Commit time!

git add pipeline.yml
git commit -m "Add resources needed for test task"

We'd rather not pass our Tanzu Network token every time we need to set the pipeline. Fortunately, Concourse can integrate with secret storage services.

Let's put our API token in Credhub so Concourse can get it.

First we'll need to login:

Backslashes in bash examples

The following example has been broken across multiple lines by using backslash characters (\) to escape the newlines. We'll be doing this a lot to keep the examples readable. When you're typing these out, you can skip that and just put it all on one line.

Again, note the space at the start

# note the starting space
 credhub login --server example.com \
    --client-name your-client-id \
    --client-secret your-client-secret

Logging in to credhub

Depending on your credential type, you may need to pass client-id and client-secret, as we do above, or username and password. We use the client approach because that's the credential type that automation should usually be working with. Nominally, a username represents a person, and a client represents a system; this isn't always exactly how things are in practice. Use whichever type of credential you have in your case. Note that if you exclude either set of flags, Credhub will interactively prompt for username and password, and hide the characters of your password when you type them. This method of entry can be better in some situations.
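
For reference, a username/password login would look something like this (again, note the leading space, since these values are secrets):

# note the starting space
 credhub login --server example.com \
    --username your-username \
    --password your-password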

Then, we can set the credential name to the path where Concourse will look for it:

# note the starting space
 credhub set \
         --name /concourse/your-team-name/pivnet-refresh-token \
         --type value \
         --value your-pivnet-refresh-token

Now, let's set that pipeline again, without passing a secret this time.

fly -t control-plane set-pipeline \
    -p foundation \
    -c pipeline.yml

This should succeed, and the diff Concourse shows you should replace the literal credential with ((pivnet-refresh-token)).

Visit the UI again and re-run the test job; this should also succeed.


Exporting The Installation

We're finally in a position to do work!

While ultimately we want to upgrade Ops Manager, to do that safely we first need to download and persist an export of the current installation.

Export your installation routinely

We strongly recommend automatically exporting the Ops Manager installation and persisting it to your blobstore on a regular basis. This ensures that if you need to upgrade (or restore!) your Ops Manager for any reason, you'll have the latest installation info available. Later in this tutorial, we'll be adding a time trigger for exactly this reason.

Let's switch out the test job for one that exports our existing Ops Manager's installation state. We can switch the task out by changing:

  • the name of the job
  • the name of the task
  • the file of the task

export-installation has an additional required input. We need the env file used to talk to Ops Manager.

We'll write that file and make it available as a resource in a moment; for now, we'll just get it as if it's already there.

It also has an additional output (the exported installation). Again, for now, we'll just write that like we have somewhere to put it.

Finally, while it's fine for test to run in parallel, export-installation shouldn't. So, we'll add serial: true to the job, too.

jobs:
- name: export-installation
  serial: true
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - get: env
    - task: export-installation
      image: platform-automation-image
      file: platform-automation-tasks/tasks/export-installation.yml
    - put: installation
      params:
        file: installation/installation-*.zip

If we try to fly this up to Concourse, it will again complain about resources that don't exist.

So, let's make them.

The first new resource we need is the env file. We'll push our git repo to a remote on GitHub to make this (and later, other) configuration available to the pipelines.

GitHub has good instructions you can follow to create a new repository on GitHub. You can skip the part about using git init to set up your repo, since we already did that.

Go ahead and set up your remote and use git push to make what we have available. We'll use this repository to hold our single foundation's configuration, following the "Single Repository for Each Foundation" pattern.
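
If you haven't added a remote before, the steps look roughly like this (substitute the SSH URL of the repository you just created):

git remote add origin git@github.com:username/your-repo-name.git
git branch -M main
git push -u origin main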

You will also need to add the repository URL to vars.yml so we can reference it later, when we declare the corresponding resource.

pipeline-repo: git@github.com:username/your-repo-name

Now let's write an env.yml for your Ops Manager.

env.yml holds authentication and target information for a particular Ops Manager.

An example env.yml for username/password authentication is shown below with the required properties. Please reference Configuring Env for the entire list of properties that can be used with env.yml as well as an example of an env.yml that can be used with UAA (SAML, LDAP, etc.) authentication.

The property decryption-passphrase is required for import-installation, and therefore required for upgrade-opsman.

If your foundation uses authentication other than basic auth, please reference Inputs and Outputs for more detail on UAA-based authentication.

target: ((opsman-url))
username: ((opsman-username))
password: ((opsman-password))
decryption-passphrase: ((opsman-decryption-passphrase))

Add and commit the new env.yml file:

git add env.yml
git commit -m "Add environment file for foundation"
git push

Now that the env file we need is in our git remote, we need to add a resource to tell Concourse how to get it as env.

Since this is (probably) a private repo, we'll need to create a deploy key Concourse can use to access it. Follow GitHub's instructions for creating a deploy key.
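
One way to generate a key pair for this purpose (the file names here are just an example; the .pub file is what you paste into GitHub's deploy key settings):

ssh-keygen -t rsa -b 4096 -C "concourse-deploy-key" -N '' -f plat-auto-pipes-deploy-key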

Then, put the private key in Credhub so we can use it in our pipeline:

# note the starting space
 credhub set \
   --name /concourse/your-team-name/plat-auto-pipes-deploy-key \
   --type ssh \
   --private the/filepath/of/the/key-id_rsa \
   --public the/filepath/of/the/key-id_rsa.pub

Then, add this to the resources section of your pipeline file:

- name: env
  type: git
  source:
    uri: ((pipeline-repo))
    private_key: ((plat-auto-pipes-deploy-key.private_key))
    branch: main

We'll put the credentials we need in Credhub:

# note the starting space throughout
 credhub set \
   -n /concourse/your-team-name/foundation/opsman-username \
   -t value -v your-opsman-username
 credhub set \
   -n /concourse/your-team-name/foundation/opsman-password \
   -t value -v your-opsman-password
 credhub set \
   -n /concourse/your-team-name/foundation/opsman-decryption-passphrase \
   -t value -v your-opsman-decryption-passphrase

Credhub paths and pipeline names

Notice that we've added an element to the cred paths; now we're using the foundation name.

If you look at Concourse's lookup rules, you'll see that it searches the pipeline-specific path before the team path. Since our pipeline is named for the foundation it's used to manage, we can use this to scope access to our foundation-specific information to just this pipeline.

By contrast, the Tanzu Network token may be valuable across several pipelines (and associated foundations), so we scoped that to our team.
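
Concretely, for a variable like ((opsman-password)) used by the foundation pipeline in team your-team-name, Concourse checks the pipeline-scoped path before the team-scoped one:

/concourse/your-team-name/foundation/opsman-password   # visible only to the foundation pipeline
/concourse/your-team-name/opsman-password              # visible to every pipeline in the team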

In order to perform interpolation in one of our input files, we'll need the credhub-interpolate task. Earlier, we relied on Concourse's native integration with Credhub for interpolation; that worked because we needed the variable in the pipeline itself, not in one of our inputs.

We can add it to our job after we've retrieved our env input, but before the export-installation task:

jobs:
- name: export-installation
  serial: true
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - get: env
    - task: credhub-interpolate
      image: platform-automation-image
      file: platform-automation-tasks/tasks/credhub-interpolate.yml
      params:
        CREDHUB_CLIENT: ((credhub-client))
        CREDHUB_SECRET: ((credhub-secret))
        CREDHUB_SERVER: https://your-credhub.example.com
        PREFIX: /concourse/your-team-name/foundation
      input_mapping:
        files: env
      output_mapping:
        interpolated-files: interpolated-env
    - task: export-installation
      image: platform-automation-image
      file: platform-automation-tasks/tasks/export-installation.yml
      input_mapping:
        env: interpolated-env
    - put: installation
      params:
        file: installation/installation-*.zip

output_mapping

The credhub-interpolate task for this job maps the output from the task (interpolated-files) to interpolated-env.

This can be used by the next task in the job to more explicitly define the inputs and outputs of each task. It's also fine to leave the output as interpolated-files, as long as it's referenced consistently in the next task.

Notice the input mappings of the credhub-interpolate and export-installation tasks. This allows us to use the output of one task as an input to another.

An alternative to input_mappings is discussed in Configuration Management Strategies.

We now need to put our credhub-client and credhub-secret into Credhub, so Concourse's native integration can retrieve them and pass them as configuration to the credhub-interpolate task.

# note the starting space throughout
 credhub set \
        -n /concourse/your-team-name/credhub-client \
        -t value -v your-credhub-client
 credhub set \
        -n /concourse/your-team-name/credhub-secret \
        -t value -v your-credhub-secret

Now the credhub-interpolate task will interpolate our env input and pass it to export-installation as env.

The other new resource we need is a blobstore, so we can persist the exported installation.

We'll add an S3 resource to the resources section:

- name: installation
  type: s3
  source:
    access_key_id: ((s3-access-key-id))
    secret_access_key: ((s3-secret-key))
    bucket: ((platform-automation-bucket))
    regexp: installation-(.*).zip

Again, we'll need to save the credentials in Credhub:

# note the starting space throughout
 credhub set \
        -n /concourse/your-team-name/s3-access-key-id \
        -t value -v your-bucket-s3-access-key-id
 credhub set \
        -n /concourse/your-team-name/s3-secret-key \
        -t value -v your-s3-secret-key

This time (and in the future), when we set the pipeline with fly, we'll need to load vars from vars.yml.

# note the space before the command
 fly -t control-plane set-pipeline \
     -p foundation \
     -c pipeline.yml \
     -l vars.yml

Now you can manually trigger a build, and see it pass.

Bash command history

You'll be using this final form of the fly set-pipeline command for the rest of the tutorial.

You can save yourself some typing by using your bash history (if you did not prepend your command with a space). You can cycle through previous commands with the up and down arrows. Alternatively, Ctrl-r will search your bash history. Just hit Ctrl-r, type fly, and it'll show you the last fly command you ran. Run it with enter. Instead of running it, you can hit Ctrl-r again to see the matching command before that.

This is also a good commit point:

git add pipeline.yml vars.yml
git commit -m "Export foundation installation in CI"
git push

Performing The Upgrade

Now that we have an exported installation, we'll create another Concourse job to do the upgrade itself. We want the export and the upgrade in separate jobs so they can be triggered (and re-run) independently.

We know this new job is going to center on the upgrade-opsman task. Click through to the task description, and write a new job that has get steps for our platform-automation resources and all the inputs we already know how to get:

- name: upgrade-opsman
  serial: true
  plan:
  - get: platform-automation-image
    resource: platform-automation
    params:
      globs: ["*image*.tgz"]
      unpack: true
  - get: platform-automation-tasks
    resource: platform-automation
    params:
      globs: ["*tasks*.zip"]
      unpack: true
  - get: env
  - get: installation

We should be able to set this with fly and see it pass, but it doesn't do anything other than download the resources. Still, we can make a commit here:

git add pipeline.yml
git commit -m "Setup initial gets for upgrade job"
git push

Is this really a commit point though?

We like frequent, small commits that can be fly set and, ideally, go green.

This one doesn't actually do anything though, right? Fair, but: setting and running the job gives you feedback on your syntax and variable usage. It can catch typos and resources you forgot to add or misnamed, among other things. Committing when you get to a working point helps keep the diffs small and the history tractable. Also, down the line, if you've got more than one pair working on a foundation, small commits help you keep off one another's toes.

We don't demonstrate this workflow here, but it can even be useful to make a commit, use fly to see if it works, and then push it if and only if it works. If it doesn't, you can use git commit --amend once you've figured out why and fixed it. This workflow makes it easy to keep what is set on Concourse and what is pushed to your source control remote in sync.
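
For example, after fixing the problem (and assuming the commit hasn't been pushed yet):

git add pipeline.yml
git commit --amend --no-edit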

Looking over the list of inputs for upgrade-opsman, we still need three required inputs:

  1. state
  2. config
  3. image

The optional inputs are vars used with the config, so we'll get to those when we do config.

Let's start with the state file. We need to record the IaaS we're on and the ID of the currently deployed Ops Manager VM. Each IaaS identifies VMs differently; here are examples of what this file should look like, depending on your IaaS:

iaas: aws
# Instance ID of the AWS VM
vm_id: i-12345678987654321

iaas: azure
# Computer Name of the Azure VM
vm_id: vm_name

iaas: gcp
# Name of the VM in GCP
vm_id: vm_name

iaas: openstack
# Instance ID from the OpenStack Overview
vm_id: 12345678-9876-5432-1abc-defghijklmno

iaas: vsphere
# Path to the VM in vCenter
vm_id: /datacenter/vm/folder/vm_name

Find what you need for your IaaS, write it in your repo as state.yml, commit it, and push it:

git add state.yml
git commit -m "Add state file for foundation Ops Manager"
git push

We can map the env resource to upgrade-opsman's state input once we add the task.

But first, we've got two more inputs to arrange for.

We'll write an Ops Manager VM Configuration file to opsman.yml. The properties available vary by IaaS; regardless, you can often inspect your existing Ops Manager in your IaaS's console (or, if your Ops Manager was created with Terraform, look at your terraform outputs) to find the necessary values.

---
opsman-configuration:
  aws:
    region: us-west-2
    vpc_subnet_id: subnet-0292bc845215c2cbf
    security_group_ids: [ sg-0354f804ba7c4bc41 ]
    key_pair_name: ops-manager-key  # used to ssh to VM
    iam_instance_profile_name: env_ops_manager

    # At least one IP address (public or private) needs to be assigned to the
    # VM. It is also permissible to assign both.
    public_ip: 1.2.3.4      # Reserved Elastic IP
    private_ip: 10.0.0.2

    # Optional
    # vm_name: ops-manager-vm    # default - ops-manager-vm
    # boot_disk_size: 100        # default - 200 (GB)
    # instance_type: m5.large    # default - m5.large
                                 # NOTE - not all regions support m5.large
    # assume_role: "arn:aws:iam::..." # necessary if a role is needed to authorize
                                      # the OpsMan VM instance profile
    # tags: {key: value}              # key-value pair of tags assigned to the
    #                                 # Ops Manager VM
    # Omit if using instance profiles
    # And instance profile OR access_key/secret_access_key is required
    # access_key_id: ((access-key-id))
    # secret_access_key: ((secret-access-key))

    # security_group_id: sg-123  # DEPRECATED - use security_group_ids
    # use_instance_profile: true # DEPRECATED - will use instance profile for
                                 # execution VM if access_key_id and
                                 # secret_access_key are not set

  # Optional Ops Manager UI Settings for upgrade-opsman
  # ssl-certificate: ...
  # pivotal-network-settings: ...
  # banner-settings: ...
  # syslog-settings: ...
  # rbac-settings: ...
---
opsman-configuration:
  azure:
    tenant_id: 3e52862f-a01e-4b97-98d5-f31a409df682
    subscription_id: 90f35f10-ea9e-4e80-aac4-d6778b995532
    client_id: 5782deb6-9195-4827-83ae-a13fda90aa0d
    client_secret: ((opsman-client-secret))
    location: westus
    resource_group: res-group
    storage_account: opsman                       # account name of container
    ssh_public_key: ssh-rsa AAAAB3NzaC1yc2EAZ...  # ssh key to access VM

    # Note that there are several environment-specific details in this path
    # This path can reach out to other resource groups if necessary
    subnet_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Network/virtualNetworks/<VNET>/subnets/<SUBNET>

    # At least one IP address (public or private) needs to be assigned
    # to the VM. It is also permissible to assign both.
    private_ip: 10.0.0.3
    public_ip: 1.2.3.4

    # Optional
    # cloud_name: AzureCloud          # default - AzureCloud
    # storage_key: ((storage-key))    # only required if your client does not
                                      # have the needed storage permissions
    # container: opsmanagerimage      # storage account container name
                                      # default - opsmanagerimage
    # network_security_group: ops-manager-security-group
    # vm_name: ops-manager-vm         # default - ops-manager-vm
    # boot_disk_size: 200             # default - 200 (GB)
    # use_managed_disk: true          # this flag is only respected by the
                                      # create-vm and upgrade-opsman commands.
                                      # set to false if you want to create
                                      # the new opsman VM with an unmanaged
                                      # disk (not recommended). default - true
    # storage_sku: Premium_LRS        # this sets the SKU of the storage account
                                      # for the disk
                                      # Allowed values: Standard_LRS, Premium_LRS,
                                      # StandardSSD_LRS, UltraSSD_LRS
    # vm_size: Standard_DS1_v2        # the size of the Ops Manager VM
                                      # default - Standard_DS2_v2
                                      # Allowed values: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general
    # tags: Project=ECommerce         # Space-separated tags: key[=value] [key[=value] ...]. Use '' to
                                      # clear existing tags.
    # vpc_subnet: /subscriptions/...  # DEPRECATED - use subnet_id
    # use_unmanaged_disk: false       # DEPRECATED - use use_managed_disk

  # Optional Ops Manager UI Settings for upgrade-opsman
  # ssl-certificate: ...
  # pivotal-network-settings: ...
  # banner-settings: ...
  # syslog-settings: ...
  # rbac-settings: ...
---
opsman-configuration:
  gcp:
    # Either gcp_service_account_name or gcp_service_account json is required
    # You must remove whichever you don't use
    gcp_service_account_name: user@project-id.iam.gserviceaccount.com
    gcp_service_account: ((gcp-service-account-key-json))

    project: project-id
    region: us-central1
    zone: us-central1-b
    vpc_subnet: infrastructure-subnet

    # At least one IP address (public or private) needs to be assigned to the
    # VM. It is also permissible to assign both.
    public_ip: 1.2.3.4
    private_ip: 10.0.0.2

    ssh_public_key: ssh-rsa some-public-key... # RECOMMENDED, but not required
    tags: ops-manager                          # RECOMMENDED, but not required

    # Optional
    # vm_name: ops-manager-vm  # default - ops-manager-vm
    # custom_cpu: 2            # default - 2
    # custom_memory: 8         # default - 8
    # boot_disk_size: 100      # default - 100
    # scopes: ["my-scope"]
    # hostname: custom.hostname # info: https://cloud.google.com/compute/docs/instances/custom-hostname-vm

  # Optional Ops Manager UI Settings for upgrade-opsman
  # ssl-certificate: ...
  # pivotal-network-settings: ...
  # banner-settings: ...
  # syslog-settings: ...
  # rbac-settings: ...
---
opsman-configuration:
  openstack:
    project_name: project
    auth_url: http://os.example.com:5000/v2.0
    username: ((opsman-openstack-username))
    password: ((opsman-openstack-password))
    net_id: 26a13112-b6c2-11e8-96f8-529269fb1459
    security_group_name: opsman-sec-group
    key_pair_name: opsman-keypair

    # At least one IP address (public or private) needs to be assigned to the VM.
    public_ip: 1.2.3.4 # must be an already allocated floating IP
    private_ip: 10.0.0.3

    # Optional
    # availability_zone: zone-01
    # project_domain_name: default
    # user_domain_name: default
    # vm_name: ops-manager-vm       # default - ops-manager-vm
    # flavor: m1.xlarge             # default - m1.xlarge
    # identity_api_version: 2       # default - 3
    # insecure: true                # default - false

  # Optional Ops Manager UI Settings for upgrade-opsman
  # ssl-certificate: ...
  # pivotal-network-settings: ...
  # banner-settings: ...
  # syslog-settings: ...
  # rbac-settings: ...
---
opsman-configuration:
  vsphere:
    vcenter:
      ca_cert: cert                 # REQUIRED if insecure = 0 (secure)
      datacenter: example-dc
      datastore: example-ds-1
      folder: /example-dc/vm/Folder # RECOMMENDED, but not required
      url: vcenter.example.com
      username: ((vcenter-username))
      password: ((vcenter-password))
      resource_pool: /example-dc/host/example-cluster/Resources/example-pool
      # resource_pool can use a cluster - /example-dc/host/example-cluster

      # Optional
      # host: host      # DEPRECATED - Platform Automation cannot guarantee
                        # the location of the VM, given the nature of vSphere
      # insecure: 0     # default - 0 (secure) | 1 (insecure)

    disk_type: thin     # thin|thick
    dns: 8.8.8.8
    gateway: 192.168.10.1
    hostname: ops-manager.example.com
    netmask: 255.255.255.192
    network: example-virtual-network
    ntp: ntp.ubuntu.com
    private_ip: 10.0.0.10
    ssh_public_key: ssh-rsa ......   # REQUIRED Ops Manager >= 2.6

    # Optional
    # cpu: 1                         # default - 1
    # memory: 8                      # default - 8 (GB)
    # ssh_password: ((ssh-password)) # REQUIRED if ssh_public_key not defined
                                     # (Ops Manager < 2.6 ONLY)
    # vm_name: ops-manager-vm        # default - ops-manager-vm
    # disk_size: 200                 # default - 160 (GB), only larger values allowed

  # Optional Ops Manager UI Settings for upgrade-opsman
  # ssl-certificate: ...
  # pivotal-network-settings: ...
  # banner-settings: ...
  # syslog-settings: ...
  # rbac-settings: ...

Alternatively, you can auto-generate your opsman.yml with a p-automator command, which writes the file to the directory it's called from:

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
  p-automator export-opsman-config \
  --state-file generated-state/state.yml \
  --config-file opsman.yml \
  --aws-region "$AWS_REGION" \
  --aws-secret-access-key "$AWS_SECRET_ACCESS_KEY" \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID"

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
  p-automator export-opsman-config \
  --state-file generated-state/state.yml \
  --config-file opsman.yml \
  --azure-subscription-id "$AZURE_SUBSCRIPTION_ID" \
  --azure-tenant-id "$AZURE_TENANT_ID" \
  --azure-client-id "$AZURE_CLIENT_ID" \
  --azure-client-secret "$AZURE_CLIENT_SECRET" \
  --azure-resource-group "$AZURE_RESOURCE_GROUP"

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
  p-automator export-opsman-config \
  --state-file generated-state/state.yml \
  --config-file opsman.yml \
  --gcp-zone "$GCP_ZONE" \
  --gcp-service-account-json <(echo "$GCP_SERVICE_ACCOUNT_JSON") \
  --gcp-project-id "$GCP_PROJECT_ID"

docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
  p-automator export-opsman-config \
  --state-file generated-state/state.yml \
  --config-file opsman.yml \
  --vsphere-url "$VCENTER_URL" \
  --vsphere-username "$VCENTER_USERNAME" \
  --vsphere-password "$VCENTER_PASSWORD"

Once you have your config file, commit and push it:

git add opsman.yml
git commit -m "Add opsman config"
git push

Finally, we need the image for the new Ops Manager version.

We'll use the download-product task. It requires a config file to specify which Ops Manager to get, and to provide Tanzu Network credentials. Name this file download-opsman.yml:

---
pivnet-api-token: ((pivnet-refresh-token)) # interpolated from Credhub
pivnet-file-glob: "ops-manager*.ova"
pivnet-product-slug: ops-manager
product-version-regex: ^2\.5\.0.*$

You know the drill.

git add download-opsman.yml
git commit -m "Add download opsman config"
git push

Now, we can put it all together:

- name: upgrade-opsman
  serial: true
  plan:
  - get: platform-automation-image
    resource: platform-automation
    params:
      globs: ["*image*.tgz"]
      unpack: true
  - get: platform-automation-tasks
    resource: platform-automation
    params:
      globs: ["*tasks*.zip"]
      unpack: true
  - get: env
  - get: installation
  - task: credhub-interpolate
    image: platform-automation-image
    file: platform-automation-tasks/tasks/credhub-interpolate.yml
    params:
      CREDHUB_CLIENT: ((credhub-client))
      CREDHUB_SECRET: ((credhub-secret))
      CREDHUB_SERVER: ((credhub-server))
      PREFIX: /concourse/your-team-name/foundation
    input_mapping:
      files: env
    output_mapping:
      interpolated-files: interpolated-configs
  - task: download-opsman-image
    image: platform-automation-image
    file: platform-automation-tasks/tasks/download-product.yml
    params:
      CONFIG_FILE: download-opsman.yml
    input_mapping:
      config: interpolated-configs
  - task: upgrade-opsman
    image: platform-automation-image
    file: platform-automation-tasks/tasks/upgrade-opsman.yml
    input_mapping:
      config: interpolated-configs
      image: downloaded-product
      secrets: interpolated-configs
      state: env

Defaults for tasks

We do not explicitly set the default parameters for upgrade-opsman in this example. Because opsman.yml is the default input to OPSMAN_CONFIG_FILE, env.yml is the default input to ENV_FILE, and state.yml is the default input to STATE_FILE, it is redundant to set this param in the pipeline. Refer to the task definitions for a full range of the available and default parameters.

Set the pipeline.

Before we run the job, we should ensure that state.yml is always persisted, regardless of whether the upgrade-opsman job fails or passes. To do this, we can add the following section to the job:

- name: upgrade-opsman
  serial: true
  plan:
  - get: platform-automation-image
    resource: platform-automation
    params:
      globs: ["*image*.tgz"]
      unpack: true
  - get: platform-automation-tasks
    resource: platform-automation
    params:
      globs: ["*tasks*.zip"]
      unpack: true
  - get: env
  - get: installation
  - task: credhub-interpolate
    image: platform-automation-image
    file: platform-automation-tasks/tasks/credhub-interpolate.yml
    params:
      CREDHUB_CLIENT: ((credhub-client))
      CREDHUB_SECRET: ((credhub-secret))
      CREDHUB_SERVER: ((credhub-server))
      PREFIX: /concourse/your-team-name/foundation
    input_mapping:
      files: env
    output_mapping:
      interpolated-files: interpolated-configs
  - task: download-opsman-image
    image: platform-automation-image
    file: platform-automation-tasks/tasks/download-product.yml
    params:
      CONFIG_FILE: download-opsman.yml
    input_mapping:
      config: interpolated-configs
  - task: upgrade-opsman
    image: platform-automation-image
    file: platform-automation-tasks/tasks/upgrade-opsman.yml
    input_mapping:
      config: interpolated-configs
      image: downloaded-product
      secrets: interpolated-configs
      state: env
  ensure:
    do:
    - task: make-commit
      image: platform-automation-image
      file: platform-automation-tasks/tasks/make-git-commit.yml
      input_mapping:
        repository: env
        file-source: generated-state
      output_mapping:
        repository-commit: env-commit
      params:
        FILE_SOURCE_PATH: state.yml
        FILE_DESTINATION_PATH: state.yml
        GIT_AUTHOR_EMAIL: "ci-user@example.com"
        GIT_AUTHOR_NAME: "CI User"
        COMMIT_MESSAGE: 'Update state file'
    - put: env
      params:
        repository: env-commit
        merge: true

Set the pipeline one final time, run the job, and see it pass.

git add pipeline.yml
git commit -m "Upgrade Ops Manager in CI"
git push

Your upgrade pipeline is now complete. You are now free to move on to the next steps of your automation journey.
