Writing a Pipeline to Install Ops Manager
This how-to guide shows you how to write a pipeline for installing a new Ops Manager. If you already have an Ops Manager VM, check out Upgrading an Existing Ops Manager.
Prerequisites
Over the course of this guide, we're going to use Platform Automation Toolkit to create a pipeline using Concourse.
Before we get started, you'll need a few things ready to go:
- Credentials for an IaaS that Ops Manager is compatible with
- It doesn't actually matter what IaaS you use for Ops Manager, as long as your Concourse can connect to it. Pipelines built with Platform Automation Toolkit can be platform-agnostic.
- A Concourse instance with access to a Credhub instance and to the Internet
- GitHub account
- Read/write credentials and bucket name for an S3 bucket
- An account on VMware Tanzu Network
- A macOS workstation with:
  - Docker installed
  - a text editor you like
  - a terminal emulator you like
  - a browser that works with Concourse, like Firefox or Chrome
  - git
It will be very helpful to have a basic familiarity with the tools we'll be using. If you don't have basic familiarity with all of them, that's okay. We'll explain some basics, and link to resources to learn more.
A note on the prerequisites
While this guide uses Github to provide a git remote, and an S3 bucket as a blobstore, Platform Automation Toolkit supports arbitrary git providers and S3-compatible blobstores.
If you need to use an alternate one, that's okay.
We picked specific examples so we could describe some steps in detail. Some details may be different if you follow along with different providers. If you're comfortable navigating those differences on your own, go for it!
Check out our reference for using an S3-specific blobstore.
Similarly, in this guide, we assume the macOS operating system. This should all work fine on Linux, too, but there might be differences in the paths you'll need to figure out.
Creating a Concourse Pipeline
Platform Automation Toolkit's tasks and image are meant to be used in a Concourse pipeline. So, let's make one.
Using your bash command-line client, create a directory to keep your pipeline files in, and cd into it.
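Something like this, with your-repo-name standing in for a name of your choosing:

```bash
mkdir your-repo-name
cd !$
```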
This repo name should relate to your situation and be specific enough to be navigable from your local workstation.
"!$"
!$ is a bash shortcut.
Pronounced "bang, dollar-sign,"
it means "use the last argument from the most recent command."
In this case, that's the directory we just created!
This is not a Platform Automation Toolkit thing,
this is just a bash tip dearly beloved
of at least one Platform Automator.
Now, create a file called pipeline.yml.
Naming
We'll use pipeline.yml in our examples throughout this guide.
However, you may create multiple pipelines over time.
If there's a more sensible name for the pipeline you're working on,
feel free to use that instead.
Write this at the top, and save the file. This is YAML for "the start of the document". It's optional, but traditional:
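Just the YAML document-start marker:

```yaml
---
```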
Now you have a pipeline file! Nominally! Well, look. It's valid YAML, at least.
Getting fly
Let's try to set it as a pipeline with fly, the Concourse command-line interface (CLI).
First, check if we've got fly installed at all:
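A quick version check:

```bash
fly --version
```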
If it gives you back a version number, great! Skip ahead to Setting The Pipeline.
If it says something like -bash: fly: command not found, we have a little work to do: we've got to get fly.
Navigate to the address for your Concourse instance in a web browser. At this point, you don't even need to be signed in! If there are no public pipelines, you should see a mostly empty page with download icons for fly in the lower-right hand corner.
If there are public pipelines, or if you're signed in and there are pipelines you can see, you'll see something similar in the lower-right hand corner.
Click the icon for your OS and save the file, mv the resulting file to somewhere in your $PATH, and use chmod to make it executable:
A note on command-line examples
Some of these, you can copy-paste directly into your terminal.
Some of them won't work that way,
or even if they did, would require you to edit them to replace our example values
with your actual values.
We recommend you type all of the bash examples in by hand,
substituting values, if necessary, as you go.
Don't forget that you can often hit the tab
key
to auto-complete the name of files that already exist;
it makes all that typing just a little easier,
and serves as a sort of command-line autocorrect.
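A sketch of those commands, assuming the download landed in ~/Downloads and that /usr/local/bin is on your $PATH; adjust both paths to match your machine:

```bash
mv ~/Downloads/fly /usr/local/bin/fly
chmod +x /usr/local/bin/fly
```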
Congrats! You got fly.
Okay but what did I just do?
FAIR QUESTION. You downloaded the fly binary, moved it into bash's PATH, which is where bash looks for things to execute when you type a command, and then added permissions that allow it to be executed.
Now, the CLI is installed -
and we won't have to do all that again,
because fly
has the ability to update itself,
which we'll get into later.
Setting The Pipeline
Okay now let's try to set our pipeline with fly
, the Concourse CLI.
fly
keeps a list of Concourses it knows how to talk to.
Let's see if the Concourse we want is already on the list:
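One command lists them:

```bash
fly targets
```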
If you see the address of the Concourse you want to use in the list, note down its name, and use it in the login command:
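A sketch of the login, using the control-plane name from the note below; substitute the name you noted down:

```bash
fly -t control-plane login
```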
Control-plane?
We're going to use the name control-plane
for our Concourse in this guide.
It's not a special name,
it just happens to be the name
of the Concourse we want to use in our target list.
If you don't see the Concourse you need, you can add it with the -c (--concourse-url) flag:
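Something like this, with your Concourse's actual URL in place of the example address:

```bash
fly -t control-plane login -c https://your-concourse.example.com
```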
You should see a login link you can click on to complete login from your browser.
Stay on target
The -t flag sets the name when used with login and -c.
In the future, you can leave out the -c argument.
If you ever want to know what a short flag stands for, you can run the command with -h (--help) at the end.
Pipeline-setting time! We'll use the name "foundation" for this pipeline, but if your foundation has an actual name, use that instead.
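A sketch of the command; foundation is the pipeline name and pipeline.yml is the file we just made:

```bash
fly -t control-plane set-pipeline -p foundation -c pipeline.yml
```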
It should say no changes to apply, which is fair, since we gave it an empty YAML doc.
Version discrepancy
If fly
says something about a "version discrepancy,"
"significant" or otherwise, just do as it says:
run fly sync
and try again.
fly sync
automatically updates the CLI
with the version that matches the Concourse you're targeting.
Useful!
Your First Job
Let's see Concourse actually do something, yeah?
It's tempting to throw a quick job into pipeline.yml just to see what happens - but don't actually add anything to your pipeline config yet. Or if you have, delete it, so your whole pipeline looks like this again:
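That is, nothing but the document-start marker:

```yaml
---
```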
Reverting edits to our pipeline is something we'll probably want to do again. This is one of many reasons we want to keep our pipeline under version control.
So let's make this directory a git repo!
But First, git init
Git Repository Layout
The following describes a step-by-step approach for how to get set up with git.
For an example of the repository file structure for single and multiple foundation systems, please reference Git Repository Layout.
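A sketch of the two commands; the empty initial commit just gives us a clean baseline, and the message wording is up to you:

```bash
git init
git commit --allow-empty -m "Empty initial commit"
```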
git
should come back with information about the commit you just created:
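Something along these lines, though your branch name and commit hash will differ:

```
[master (root-commit) 1a2b3c4] Empty initial commit
```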
If it gives you a config error instead,
you might need to configure git
a bit.
Here's a good guide
to initial setup.
Get that done, and try again.
Now we can add our pipeline.yml, so in the future it's easy to get back to that soothing --- state.
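Something like this; word the commit message however you like:

```bash
git add pipeline.yml
git commit -m "Add pipeline"
```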
Let's just make sure we're all tidy:
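A quick status check:

```bash
git status
```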
git should come back with nothing to commit, working tree clean.
Great. Now we can safely make changes.
Git commits
git
commits are the basic unit of code history.
Making frequent, small, commits with good commit messages makes it much easier to figure out why things are the way they are, and to return to the way things were in simpler, better times. Writing short commit messages that capture the intent of the change (in an imperative style) can be tough, but it really does make the pipeline's history much more legible, both to future-you, and to current-and-future teammates and collaborators.
The Test Task
Platform Automation Toolkit comes with a test task meant to validate that it's been installed correctly. Let's use it to get set up.
Add this to your pipeline.yml, starting on the line after the ---:
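A sketch of the job; the artifact names platform-automation-image and platform-automation-tasks and the tasks/test.yml path follow the toolkit's conventional layout:

```yaml
jobs:
- name: test
  plan:
    - task: test
      image: platform-automation-image
      file: platform-automation-tasks/tasks/test.yml
```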
If we set this now, with the same fly set-pipeline command as before, Concourse will take it.
Now we should be able to see our pipeline
in the Concourse UI.
It'll be paused, so click the "play" button to unpause it.
Then, click into the gray box for our test job, and hit the "plus" button to schedule a build.
It should error immediately, with unknown artifact source: platform-automation-tasks. We didn't give it a source for our task file.
We've got a bit of pipeline code that Concourse accepts. Before we start doing the next part, this would be a good moment to make a commit:
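Something like this, with a commit message of your choosing:

```bash
git add pipeline.yml
git commit -m "Add test job"
```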
With that done,
we can try to get the inputs we need
by adding get
steps to the plan
before the task, like so:
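A sketch of the job with the two get steps added; the globs assume the general-purpose image and tasks archives, so treat the exact patterns as placeholders:

```yaml
jobs:
- name: test
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - task: test
      image: platform-automation-image
      file: platform-automation-tasks/tasks/test.yml
```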
When using vSphere
There is a smaller vSphere container image available. To use it instead of the general purpose image, you can use this glob to get the image:
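A sketch of that get step; the glob below is deliberately loose and is an assumption, so tighten it to match the actual vSphere image file name on Tanzu Network:

```yaml
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*vsphere*"]
        unpack: true
```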
If we try to fly set
this,
fly
will complain about invalid resources.
To actually make the image
and file
we want to use available,
we'll need some Resources.
Adding Resources
Resources are Concourse's main approach to managing artifacts. We need an image, and the tasks directory - so we'll tell Concourse how to get these things by declaring Resources for them.
In this case, we'll be downloading the image and the tasks directory from Tanzu Network. Before we can declare the resources themselves, we have to teach Concourse to talk to Tanzu Network. (Many resource types are built in, but this one isn't.)
Add the following to your pipeline file.
We'll put it above the jobs
entry.
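A sketch of the resource type and resource; pivotalcf/pivnet-resource is the Concourse resource for Tanzu Network, and ((pivnet-refresh-token)) is the credential we'll supply next:

```yaml
resource_types:
- name: pivnet
  type: docker-image
  source:
    repository: pivotalcf/pivnet-resource
    tag: latest-final

resources:
- name: platform-automation
  type: pivnet
  source:
    api_token: ((pivnet-refresh-token))
    product_slug: platform-automation
```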
The API token is a credential, which we'll pass via the command-line when setting the pipeline, so we don't accidentally check it in.
Grab a refresh token from your Tanzu Network profile (when logged in, click your username, then Edit Profile) by clicking "Request New Refresh Token."
Then use that token in the following command:
Keep it secret, keep it safe
Bash commands that start with a space character are not saved in your history. This can be very useful for cases like this, where you want to pass a secret, but don't want it saved. Commands in this guide that contain a secret start with a space, which can be easy to miss.
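A sketch of the command; note the leading space, and swap in your actual token:

```bash
 fly -t control-plane set-pipeline \
   -p foundation \
   -c pipeline.yml \
   -v pivnet-refresh-token=your-tanzu-network-refresh-token
```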
Warning
When you get your Tanzu Network token as described above, any previous Tanzu Network tokens you may have gotten will stop working. If you're using your Tanzu Network refresh token anywhere, retrieve it from your existing secret storage rather than getting a new one, or you'll end up needing to update it everywhere it's used.
Go back to the Concourse UI and trigger another build. This time, it should pass.
Commit time!
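Something like:

```bash
git add pipeline.yml
git commit -m "Add platform-automation resources"
```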
We'd rather not pass our Tanzu Network token every time we need to set the pipeline. Fortunately, Concourse can integrate with secret storage services.
Let's put our API token in Credhub so Concourse can get it.
First we'll need to login:
Backslashes in bash examples
The following example has been broken across multiple lines
by using backslash characters (\
) to escape the newlines.
We'll be doing this a lot to keep the examples readable.
When you're typing these out,
you can skip that and just put it all on one line.
Again, note the space at the start
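A sketch of the login, with placeholder server and client credentials, and a leading space to keep the secret out of your history:

```bash
 credhub login \
   --server https://your-credhub.example.com \
   --client-name your-client-id \
   --client-secret your-client-secret
```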
Logging in to credhub
Depending on your credential type,
you may need to pass client-id
and client-secret
,
as we do above,
or username
and password
.
We use the client
approach because that's the credential type
that automation should usually be working with.
Nominally, a username represents a person,
and a client represents a system;
this isn't always exactly how things are in practice.
Use whichever type of credential you have in your case.
Note that if you exclude either set of flags,
Credhub will interactively prompt for username
and password
,
and hide the characters of your password when you type them.
This method of entry can be better in some situations.
Then, we can set the credential name to the path where Concourse will look for it:
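A sketch of the command; your-team-name stands in for your actual Concourse team, and note the leading space:

```bash
 credhub set \
   --name /concourse/your-team-name/pivnet-refresh-token \
   --type value \
   --value your-tanzu-network-refresh-token
```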
Now, let's set that pipeline again, without passing a secret this time.
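The same command as before, minus the token:

```bash
fly -t control-plane set-pipeline \
  -p foundation \
  -c pipeline.yml
```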
This should succeed,
and the diff Concourse shows you should replace the literal credential
with ((pivnet-refresh-token))
.
Visit the UI again and re-run the test job; this should also succeed.
Downloading Ops Manager
We're finally in a position to do work!
Let's switch out the test job for one that downloads and installs Ops Manager. We can do this by changing:
- the name of the job
- the name of the task
- the file of the task
Our first task within the job should be download-product.
It has an additional required input;
we need the config
file download-product
uses to talk to Tanzu Network.
We'll write that file and make it available as a resource in a moment,
for now, we'll just get
it
(and reference it in our params)
as if it's there.
It also has an additional output (the downloaded image).
We're just going to use it in a subsequent step,
so we don't have to put
it anywhere.
Finally, while it's fine for test
to run in parallel,
the install process shouldn't.
So, we'll add serial: true
to the job, too.
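A sketch of the reworked job; the get steps are unchanged, and the get: config step and CONFIG_FILE value assume the download config we're about to write sits at the root of the repo as download-ops-manager.yml:

```yaml
jobs:
- name: install-opsman
  serial: true
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - get: config
    - task: download-product
      image: platform-automation-image
      file: platform-automation-tasks/tasks/download-product.yml
      params:
        CONFIG_FILE: download-ops-manager.yml
```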
If we try to fly
this up to Concourse,
it will again complain about resources that don't exist.
So, let's make them.
The first new resource we need is the config file. We'll push our git repo to a remote on GitHub to make this (and later, other) configuration available to the pipelines.
GitHub has good instructions you can follow to create a new repository on GitHub. You can skip over the part about using git init to set up your repo, since we already did that. Go ahead and set up your remote and use git push to make what we have available.
We will use this repository to hold our single foundation specific configuration.
We are using the "Single Repository for Each Foundation"
pattern to structure our configurations.
You will also need to add the repository URL to Credhub so we can reference it later when we declare the corresponding resource.
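A sketch of the command; the credential path (including the foundation element explained in the note further down) and the example SSH URL are placeholders:

```bash
 credhub set \
   --name /concourse/your-team-name/foundation/pipeline-repo \
   --type value \
   --value git@github.com:your-org/your-repo-name.git
```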
download-ops-manager.yml
holds creds for communicating with Tanzu Network,
and uniquely identifies an Ops Manager image to download.
An example download-ops-manager.yml
is shown below.
Create a download-ops-manager.yml
for the IaaS you are using.
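A sketch of the file; the product slug and token reference are standard, but the file glob and version regex are AWS-flavored examples, so swap in the artifact pattern for your IaaS (a *.yml for AWS, Azure, or GCP, the *.ova for vSphere) and the version range you actually want:

```yaml
---
pivnet-api-token: ((pivnet-refresh-token))
pivnet-file-glob: "ops-manager-aws-*.yml"
pivnet-product-slug: ops-manager
product-version-regex: ^2\.10\..*$
```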
Add and commit the new file:
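Something like:

```bash
git add download-ops-manager.yml
git commit -m "Add download-ops-manager config"
git push
```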
Now that the download-ops-manager file we need is in git, we need to add a resource to tell Concourse how to get it as config.
Since this is (probably) a private repo, we'll need to create a deploy key Concourse can use to access it. Follow GitHub's instructions for creating a deploy key.
Then, put the private key in Credhub so we can use it in our pipeline:
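A sketch of the command; the credential name and the key file paths are placeholders:

```bash
 credhub set \
   --name /concourse/your-team-name/foundation/plat-auto-pipes-deploy-key \
   --type ssh \
   --private "$(cat ~/.ssh/your-deploy-key)" \
   --public "$(cat ~/.ssh/your-deploy-key.pub)"
```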
Then, add this to the resources section of your pipeline file:
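A sketch of the resource, wired to the two credentials we just stored; the branch name is an assumption, so match whatever your repo uses:

```yaml
- name: config
  type: git
  source:
    uri: ((pipeline-repo))
    private_key: ((plat-auto-pipes-deploy-key.private_key))
    branch: main
```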
We'll need to put the Tanzu Network token in Credhub:
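The download config above references ((pivnet-refresh-token)), which we already stored at the team level; if you skipped that step, the same command applies:

```bash
 credhub set \
   --name /concourse/your-team-name/pivnet-refresh-token \
   --type value \
   --value your-tanzu-network-refresh-token
```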
Credhub paths and pipeline names
Notice that we've added an element to the cred paths; now we're using the foundation name.
If you look at Concourse's lookup rules, you'll see that it searches the pipeline-specific path before the team path. Since our pipeline is named for the foundation it's used to manage, we can use this to scope access to our foundation-specific information to just this pipeline.
By contrast, the Tanzu Network token may be valuable across several pipelines (and associated foundations), so we scoped that to our team.
In order to perform interpolation in one of our input files, we'll need the prepare-tasks-with-secrets task.
Earlier, we relied on Concourse's native integration with Credhub for interpolation. That worked because we needed to use the variable in the pipeline itself, not in one of our inputs.
We can add it to our job
after we've retrieved our download-ops-manager.yml
input,
but before the download-product
task:
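A sketch of the job with the new task slotted between the get steps and download-product; the input and output mappings are the typical wiring, and CONFIG_PATHS assumes your config files live at the top of the repo:

```yaml
jobs:
- name: install-opsman
  serial: true
  plan:
    - get: platform-automation-image
      resource: platform-automation
      params:
        globs: ["*image*.tgz"]
        unpack: true
    - get: platform-automation-tasks
      resource: platform-automation
      params:
        globs: ["*tasks*.zip"]
        unpack: true
    - get: config
    - task: prepare-tasks-with-secrets
      image: platform-automation-image
      file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
      input_mapping:
        tasks: platform-automation-tasks
        config: config
      output_mapping:
        tasks: platform-automation-tasks
      params:
        CONFIG_PATHS: config
    - task: download-product
      image: platform-automation-image
      file: platform-automation-tasks/tasks/download-product.yml
      params:
        CONFIG_FILE: download-ops-manager.yml
```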
Notice the input mappings of the prepare-tasks-with-secrets task. This allows us to use the output of one task as an input of another.
An alternative to input_mappings is discussed in Configuration Management Strategies.
Now, the prepare-tasks-with-secrets task will find required credentials in the config files, and modify the tasks so they pull those values from Concourse's integration with Credhub.
The job will download the product now. This is a good commit point.
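Something like:

```bash
git add pipeline.yml
git commit -m "Add download-product task"
git push
```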
Creating Resources for Your Ops Manager
Before Platform Automation Toolkit can create a VM for your Ops Manager installation, there are certain resources required by the VM creation and Ops Manager director installation processes. These resources are created directly on the IaaS of your choice, and read in as configuration for your Ops Manager.
There are two main ways of creating these resources, and you should use whichever method is right for you and your setup.
Terraform:
These are open source terraforming files
we recommend for use, as they are maintained by VMware.
These files are found in the open source paving
repo on GitHub.
This is the recommended way to get these resources set up as the output can directly be used in subsequent steps as property configuration.
The paving repo provides instructions for use in the README.
Any manual variables that you need to fill out
will be in a terraform.tfvars file
in the folder for the IaaS you are using
(for more specific instruction, please consult the README
for that IaaS).
If there are specific aspects of the paving repo that do not work for you, you can override some properties using an override.tf file.
Follow these steps to use the paving
repository:
- Clone the repo on the command line:

  ```bash
  cd ../
  git clone https://github.com/pivotal/paving.git
  ```

- In the checked out repository there are directories for each IaaS. Copy the terraform templates for the infrastructure of your choice to a new directory outside of the paving repo, so you can modify it:

  ```bash
  # cp -Ra paving/${IAAS} paving-${IAAS}
  mkdir paving-${IAAS}
  cp -a paving/$IAAS/. paving-$IAAS
  cd paving-${IAAS}
  ```

  IAAS must be set to match one of the infrastructure directories at the top level of the paving repo - for example, aws, azure, gcp, or nsxt.

- Within the new directory, the terraform.tfvars.example file shows what values are required for that IaaS. Remove the .example from the filename, and replace the examples with real values.

- Initialize Terraform which will download the required IaaS providers.

  ```bash
  terraform init
  ```

- Run terraform refresh to update the state with what currently exists on the IaaS.

  ```bash
  terraform refresh \
    -var-file=terraform.tfvars
  ```

- Next, you can run terraform plan to see what changes will be made to the infrastructure on the IaaS.

  ```bash
  terraform plan \
    -out=terraform.tfplan \
    -var-file=terraform.tfvars
  ```

- Finally, you can run terraform apply to create the required infrastructure on the IaaS.

  ```bash
  terraform apply \
    -parallelism=5 \
    terraform.tfplan
  ```

- Save off the output from terraform output stable_config into a vars.yml file in your-repo-name for future use:

  ```bash
  terraform output stable_config > ../your-repo-name/vars.yml
  ```

- Return to your working directory for the post-terraform steps:

  ```bash
  cd ../your-repo-name
  ```

- Commit and push the updated vars.yml file:

  ```bash
  git add vars.yml
  git commit -m "Update vars.yml with terraform output"
  git push
  ```
Manual Installation:
VMware has extensive documentation to manually create the resources needed if you are unable or do not wish to use Terraform. As with the Terraform solution, however, there are different docs depending on the IaaS you are installing Ops Manager onto.
When going through the documentation required for your IaaS, be sure to stop before deploying the Ops Manager image. Platform Automation Toolkit will do this for you.
NOTE: if you need to install an earlier version of Ops Manager, select your desired version from the dropdown at the top of the page.
Creating the Ops Manager VM
Now that we have an Ops Manager image and the resources required to deploy a VM,
let's add the new task to the install-opsman
job.
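A sketch of the addition; the earlier steps stay as they are, and create-vm is appended to the plan:

```yaml
    # ...the get steps, prepare-tasks-with-secrets, and download-product as before...
    - task: create-vm
      image: platform-automation-image
      file: platform-automation-tasks/tasks/create-vm.yml
```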
If we try to fly
this up to Concourse, it will again complain about resources that don't exist.
So, let's make them.
Looking over the list of inputs for create-vm, we still need two required inputs:
- config
- state
The optional inputs are vars used with the config, so we'll get to those when we do config.
Let's start with the config file.
We'll write an Ops Manager VM Configuration file to opsman.yml.
The properties available vary by IaaS, for example:
- IaaS credentials
- networking setup (IP address, subnet, security group, etc)
- ssh key
- datacenter/availability zone/region
Terraform Outputs
If you used the paving
repository from the Creating Resources for Your Ops Manager section,
the following steps will result in a filled out opsman.yml
.
- Ops Manager needs to be deployed with IaaS specific configuration. Platform Automation Toolkit provides a configuration file format that looks like this:

  Copy and paste the YAML below for your IaaS and save as opsman.yml.

  AWS:

  ```yaml
  ---
  opsman-configuration:
    aws:
      access_key_id: ((access_key))
      boot_disk_size: 100
      iam_instance_profile_name: ((ops_manager_iam_instance_profile_name))
      instance_type: m5.large
      key_pair_name: ((ops_manager_key_pair_name))
      public_ip: ((ops_manager_public_ip))
      region: ((region))
      secret_access_key: ((secret_key))
      security_group_ids: [((ops_manager_security_group_id))]
      vm_name: ((environment_name))-ops-manager-vm
      vpc_subnet_id: ((ops_manager_subnet_id))
  ```

  Azure:

  ```yaml
  ---
  opsman-configuration:
    azure:
      boot_disk_size: "100"
      client_id: ((client_id))
      client_secret: ((client_secret))
      cloud_name: ((iaas_configuration_environment_azurecloud))
      container: ((ops_manager_container_name))
      location: ((location))
      network_security_group: ((ops_manager_security_group_name))
      private_ip: ((ops_manager_private_ip))
      public_ip: ((ops_manager_public_ip))
      resource_group: ((resource_group_name))
      ssh_public_key: ((ops_manager_ssh_public_key))
      storage_account: ((ops_manager_storage_account_name))
      storage_sku: "Premium_LRS"
      subnet_id: ((management_subnet_id))
      subscription_id: ((subscription_id))
      tenant_id: ((tenant_id))
      use_managed_disk: "true"
      vm_name: "((resource_group_name))-ops-manager"
      vm_size: "Standard_DS2_v2"
  ```

  GCP:

  ```yaml
  ---
  opsman-configuration:
    gcp:
      boot_disk_size: 100
      custom_cpu: 4
      custom_memory: 16
      gcp_service_account: ((service_account_key))
      project: ((project))
      public_ip: ((ops_manager_public_ip))
      region: ((region))
      ssh_public_key: ((ops_manager_ssh_public_key))
      tags: ((ops_manager_tags))
      vm_name: ((environment_name))-ops-manager-vm
      vpc_subnet: ((management_subnet_name))
      zone: ((availability_zones.0))
  ```

  vSphere:

  ```yaml
  ---
  opsman-configuration:
    vsphere:
      vcenter:
        datacenter: ((vcenter_datacenter))
        datastore: ((vcenter_datastore))
        folder: ((ops_manager_folder))
        url: ((vcenter_host))
        username: ((vcenter_username))
        password: ((vcenter_password))
        resource_pool: /((vcenter_datacenter))/host/((vcenter_cluster))/Resources/((vcenter_resource_pool))
        insecure: ((allow_unverified_ssl))
      disk_type: thin
      dns: ((ops_manager_dns_servers))
      gateway: ((management_subnet_gateway))
      hostname: ((ops_manager_dns))
      netmask: ((ops_manager_netmask))
      network: ((management_subnet_name))
      ntp: ((ops_manager_ntp))
      private_ip: ((ops_manager_private_ip))
      ssh_public_key: ((ops_manager_ssh_public_key))
  ```

  Where:

  - The ((parameters)) in these examples map to outputs from the terraform-outputs.yml, which can be provided via vars file for YAML interpolation in a subsequent step.

  opsman.yml for an unlisted IaaS
  For a supported IaaS not listed above, reference the Platform Automation Toolkit docs.
Manual Configuration
If you created your infrastructure manually, or would like additional configuration options, consult the Platform Automation Toolkit docs for the full set of acceptable keys in the opsman.yml file for each IaaS.
Using the Ops Manager Config file
Once you have your config file, commit and push it:
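Something like:

```bash
git add opsman.yml
git commit -m "Add opsman config"
git push
```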
The state
input is a placeholder
which will be filled in by the create-vm
task output.
This will be used later to keep track of the VM so it can be upgraded,
which you can learn about in the upgrade-how-to.
Add the following to your resources
section of your pipeline.yml
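A sketch of one way to wire this up: a second git resource pointing at the same foundation repo so the job can pull vars.yml. The resource name variable and the branch are assumptions, so adjust to your setup:

```yaml
- name: variable
  type: git
  source:
    uri: ((pipeline-repo))
    private_key: ((plat-auto-pipes-deploy-key.private_key))
    branch: main
```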
This resource definition will allow create-vm
to use the variables from vars.yml
in the opsman.yml
file.
The create-vm
task in the install-opsman
will need to be updated to
use the download-product
image,
Ops Manager configuration file,
variables file,
and the placeholder state file.
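A sketch of the updated plan; the mappings and params assume vars.yml and state.yml sit at the root of the repo and that the variable resource from above is fetched alongside config, so line them up with your actual layout:

```yaml
    # ...gets for platform-automation-image and platform-automation-tasks as before...
    - get: config
    - get: variable
    - task: prepare-tasks-with-secrets
      # ...unchanged...
    - task: download-product
      # ...unchanged...
    - task: create-vm
      image: platform-automation-image
      file: platform-automation-tasks/tasks/create-vm.yml
      input_mapping:
        image: downloaded-product
        config: config
        state: config
        vars: variable
      params:
        STATE_FILE: state.yml
        VARS_FILES: vars/vars.yml
```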
Defaults for tasks
We do not explicitly set the default parameters
for create-vm
in this example.
Because opsman.yml
is the default input to
OPSMAN_CONFIG_FILE
, it is redundant
to set this param in the pipeline.
Refer to the task definitions for a full range of the
available and default parameters.
Set the pipeline.
Before we run the job,
we should ensure
that state.yml
is always persisted
regardless of whether the install-opsman
job failed or passed.
To do this, we can add the following section to the job:
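A sketch of that section, using the toolkit's make-git-commit task to push the generated state back to the repo; the input names, author details, and the final put are assumptions to adapt to your setup:

```yaml
  ensure:
    do:
      - task: make-commit
        image: platform-automation-image
        file: platform-automation-tasks/tasks/make-git-commit.yml
        input_mapping:
          repository: config
          file-source: generated-state
        output_mapping:
          repository: config-commit
        params:
          FILE_SOURCE_PATH: state.yml
          FILE_DESTINATION_PATH: state.yml
          GIT_AUTHOR_EMAIL: "ci-bot@example.com"
          GIT_AUTHOR_NAME: "Platform Automation Bot"
          COMMIT_MESSAGE: "Add or update state file"
      - put: config
        params:
          repository: config-commit
          merge: true
```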
Set the pipeline one final time, run the job, and see it pass.
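The same set-pipeline command as before:

```bash
fly -t control-plane set-pipeline \
  -p foundation \
  -c pipeline.yml
```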
Commit the final changes to your repository.
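Something like:

```bash
git add pipeline.yml
git commit -m "Install Ops Manager"
git push
```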
Your install pipeline is now complete. You are now free to move on to the next steps of your automation journey.