
Setting up an External Worker on BOSH

This guide details the steps for setting up Concourse to take advantage of an external worker. To learn more about these concepts, view the Architecture section.

This guide is targeted toward:

  • Platform engineers who operate a Concourse and need to deploy an external worker for the benefit of their users
  • Concourse users who need to deploy their own external worker and join an existing Concourse cluster

There are a number of reasons why you might need an external worker:

  • Different platform/stemcell (different clouds, windows vs linux, etc.)
  • Network topology constraints (e.g. staging and prod are on isolated networks but I need to run Concourse jobs on both)
  • Unique workload constraints (untrusted code, heavy CPU/GPU, location-based for security/performance)
  • Organizational/process requirements (e.g. "every team must have its own worker")

Getting Started

For the common case of setting up an External Worker, you will need three pieces of information:

  • Web Public Key: to be given to the external worker
  • Worker Private Key: to be given to the external worker
  • Worker Public Key: to be given to the Concourse web VM

Let's start collecting these bits of information.


Find the Web Public Key of the Concourse web VM

The Web Public Key will start with ssh-rsa and appear in this format:

ssh-rsa AAAAB3NzaC1yc2E................

There are many ways to find this key. Whoever you are, the following method is usually the easiest.

In order for a worker to register with a web node, the web node must be accessible on its configured worker gateway port (port 2222 by default).

  1. Put yourself on the network where you plan to deploy your external worker.

  2. Run a command like the following:

ssh-keyscan -p TSA-PORT WEB-ADDRESS

Where:

  • WEB-ADDRESS is a domain name or IP address where the web nodes can be reached
  • TSA-PORT is the TSA port configured on the web node of your Concourse. This is 2222 by default.

    This command may print multiple lines of output, but one should look like a public key. In the sample output below, the public key is on the line beginning with [ci.concourse-ci.org]:2222 ssh-rsa:

    $ ssh-keyscan -p 2222 ci.concourse-ci.org
    # ci.concourse-ci.org:2222 SSH-2.0-Go
    [ci.concourse-ci.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDiMRfkctT6v/KWRAQGZtICcWp6IToTSZ60siycdLHlBHAJtqGloj+C/rhFikEXmITfOi14lfTqVfbgjXoP1QURbtpXDgdmMxYznztj5t5nNPjwWtlbGqwHibmAigEIMwICYHY/LUqKYXD1DVAP/AYFqb7QCF+s5t4jTjnlYWldncFjNoiR3f4XB3/Bz/BVVL8WSLNkCSb8gUN3H6+tCCGLz91dcNXT8t1H39h+/PskqrNaU+BKF1NJrNHii7DNXTUoagiXDGVH1/TMn211jksJ0TjVgY2dmGO7VRAo4AE03xxFaNg6gcD0jwwPjTFGDAt8p+J/1a8JzdBRCc9ogiSEOe/AxJHMDyi5twP4s8P8f1KjnZzJJ145SKGy3Rv7/JofKSLr2tAID5963HEUZeXIA4tLmXoiQc7w8zV9wWEf7h5Mrf2bLOfDfodghXq+olpFsrkGZGlMLszBggp86ZaECR6AzDlhl9v9PMARbZmNdaH10cI5wiMExH/O4lakXC6Z+CVOaYUBp80oh/kqlADbN4lYYVyvWodGpLAbHWzQpPNooyu2GHERyNGLWssFaByDPV0G3qnRfFAvExwHjL53U0uztOFE7w9KXYWuQB0B7ZCLCeC1QQ1iUSafnS5dJYu0fivAzezopSA/UqezKKA58Mud8SraJXEQ1C5//oR4sQ==
    # ci.concourse-ci.org:2222 SSH-2.0-Go
    # ci.concourse-ci.org:2222 SSH-2.0-Go
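
For later steps you only need the key type and key material; the [host]:port prefix is ssh-keyscan bookkeeping, not part of the key. A small sketch of trimming it off (the sample line below is made up):

```shell
# Simulated ssh-keyscan output line; a real run prints the actual host key.
line='[ci.concourse-ci.org]:2222 ssh-rsa AAAAB3NzaC1yc2EexampleKeyData'

# Keep only the key type and key material (fields 2 and 3).
web_public_key=$(printf '%s\n' "$line" | awk '{print $2, $3}')

printf '%s\n' "$web_public_key"   # -> ssh-rsa AAAAB3NzaC1yc2EexampleKeyData
```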
    

If you run the Concourse cluster that the worker will be added to:

Assuming your Concourse web nodes are deployed via BOSH and you have access to the BOSH Director that manages them, you should be able to retrieve the web public key using conventional BOSH methods.

  1. View the manifest for the Concourse web VM:

    bosh -d DEPLOYMENT manifest > concourse.yml
    

    Where:

    • DEPLOYMENT is the deployment name where the Concourse web nodes are running
  2. Open the manifest using any text editor. Take note of the worker_gateway.host_key property on the web job in your Concourse deployment. Usually this value is not explicitly visible in a BOSH manifest. You'll likely see a BOSH variable:

    instance_groups:
    ...
    - name: web
      ...
      jobs:
      - release: concourse
        name: web
        properties:
          worker_gateway:
            host_key: ((tsa_host_key))
    

A BOSH variable means that the key you need is stored in the credential store you are using. This could be a file (if you deploy using the --vars-file or --vars-store flag) or CredHub. At this point we assume you know how to access the value of any BOSH variables, whether through CredHub or a vars file.
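
If your credential store is CredHub, for example, a lookup along these lines should surface the key. This is a sketch: it assumes a logged-in credhub CLI, and the /DIRECTOR-NAME/DEPLOYMENT-NAME/... path is only CredHub's usual namespacing for BOSH variables, so substitute your own names:

```shell
# Search for the variable by name to discover its full path:
credhub find -n tsa_host_key

# Then fetch just the public_key field of the ssh-type credential:
credhub get -n /DIRECTOR-NAME/DEPLOYMENT-NAME/tsa_host_key -k public_key
```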

By this point you should have a copy of the Web Public Key. We will need it when creating the external worker.

Create an SSH key pair for the Concourse external worker

There are many ways that you can create the SSH key pair. Regardless of how you generate the key, you will need to share the resulting Worker Public Key with the Concourse web node(s). Generate the Worker SSH key pair with one of the following methods:

Using the Concourse binary

If you have a copy of the Concourse binary you can have it generate an SSH key pair with the following command:

$ concourse generate-key -t ssh -f worker_key
wrote private key to worker_key
wrote ssh public key to worker_key.pub

Getting the Concourse Binary (Linux)

Download the Concourse BOSH release from VMware Tanzu Network. In the same directory as the tarball, run the following commands:

$ tar xf concourse-bosh-release-*.tgz
$ tar xf packages/concourse.tgz
$ tar xf concourse/concourse-*-linux-amd64.tgz
$ cd concourse/bin/

You will now have a copy of the Concourse Linux binary that you can use to generate the worker key pair.

Success

Upon success you will be left with two new files in your current directory:

  • worker_key: the Worker Private Key
  • worker_key.pub: the Worker Public Key

Using ssh-keygen

On Linux or macOS you can run the following command in your terminal to create a new set of keys:

ssh-keygen -t rsa -m PEM -f worker_key

Passphrases

You will be prompted to enter a passphrase when creating the key pair. DO NOT enter a passphrase for your worker keys. Press the ENTER key twice to pass through both passphrase prompts.

Success

Upon success, the command will output a SHA256 fingerprint and randomart image for your reference. You will be left with two new files in your current directory:

  • worker_key: the Worker Private Key
  • worker_key.pub: the Worker Public Key
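
If you'd rather script this step than answer the prompts, ssh-keygen's -N flag can supply the empty passphrase, and -y can confirm afterwards that the private key really is passphrase-free. A sketch:

```shell
# Generate a passphrase-less RSA key pair non-interactively:
# -N "" supplies an empty passphrase, so no prompts appear; -q suppresses output.
ssh-keygen -t rsa -m PEM -N "" -f worker_key -q

# Re-derive the public key from the private key; this step would prompt for a
# passphrase if one had accidentally been set.
derived=$(ssh-keygen -y -f worker_key)

# worker_key.pub may carry a trailing comment, so compare only the first two fields.
[ "$derived" = "$(awk '{print $1, $2}' worker_key.pub)" ] && echo "key pair OK"
```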

Using BOSH's CredHub

This section assumes you understand how BOSH generates variables and how to set and retrieve them from CredHub. When deploying the external worker, you can have BOSH generate the key pair for you by applying the following ops-file to your external worker deployment:

- type: replace
  path: /variables?/-
  value:
    name: worker_key
    type: ssh

This ops-file is also available in the concourse-bosh-deployment repository if you're using that repository to deploy your external worker.

If you're following this method, you'll need to retrieve the value of the public_key field from the worker_key variable and make it available for the Concourse web VM later on, when we re-deploy it.
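
If the worker's variables ended up in CredHub, retrieving that field might look like this one-liner; the path is again just CredHub's usual /DIRECTOR-NAME/DEPLOYMENT-NAME/... namespacing for BOSH variables, so adjust it to your environment:

```shell
credhub get -n /DIRECTOR-NAME/WORKER-DEPLOYMENT-NAME/worker_key -k public_key
```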

Now we have all three pieces of information needed to create an external Concourse worker. Next steps will update the Concourse Web VM and create the external worker.


Update Concourse Web VM Manifest

Add The Worker Public Key

Concourse workers have two optional pieces of metadata that you can use to classify a worker:

  • Tags: zero or more tags can be specified. Tags are usually used to route certain workloads to specific workers (e.g. a worker that has access to a private network)
  • Team: a worker can be assigned to one team. This is useful for isolating the workloads of individual teams.

If your worker is going to be a team worker, jump down to the Team Workers section. Otherwise follow the steps in the Tagged/Untagged Workers section.

Tagged/Untagged Workers

Add the Worker Public Key, the contents of the worker_key.pub file, to the authorized_keys property of the web job in your Concourse web deployment manifest.

instance_groups:
...
- name: web
  ...
  jobs:
  - release: concourse
    name: web
    properties:
      worker_gateway:
        host_key: ((tsa_host_key))
        authorized_keys:
          - ((worker_key.public_key))
          - EXTERNAL_WORKER_KEY

Where:

  • EXTERNAL_WORKER_KEY is the contents of the Worker Public Key

With the manifest updated, redeploy the Concourse web VMs. Then move to the next section, Download & Upload Concourse Release.

Team Workers

Add the Worker Public Key, the contents of the worker_key.pub file, to the team_authorized_keys property of the web job in your Concourse web deployment manifest.

instance_groups:
...
- name: web
  ...
  jobs:
  - release: concourse
    name: web
    properties:
      worker_gateway:
        host_key: ((tsa_host_key))
        authorized_keys:
          - ((worker_key.public_key))
        team_authorized_keys:
          TEAM_NAME:
            - EXTERNAL_WORKER_KEY

Where:

  • TEAM_NAME is the name of the team in Concourse, as shown when running fly teams
  • EXTERNAL_WORKER_KEY is the contents of the Worker Public Key

With the manifest updated, redeploy the Concourse web VMs.
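
The redeploy itself is the usual BOSH deploy of your web deployment. The exact flags depend on how you normally deploy Concourse (ops files, vars files, and so on), so treat this as a sketch:

```shell
bosh -e BOSH-ENVIRONMENT -d WEB-DEPLOYMENT-NAME deploy concourse.yml
```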


Download & Upload Concourse Release

  1. If you haven't already, download the appropriate Concourse release from VMware Tanzu Network. For example, if you were to use the Concourse v6.3.0 release, you should end up with a file called concourse-bosh-release-6.3.0.tgz in your Downloads directory.

  2. Use the bosh upload-release command to upload the Concourse tarball that you downloaded from VMware Tanzu Network.

    For example, with the latest release, Concourse v6.3.0:

    bosh -e BOSH-ENVIRONMENT upload-release ~/Downloads/concourse-bosh-release-6.3.0.tgz
    

    See the BOSH documentation for more information about uploading releases.


Setup concourse-bosh-deployment directory on your local machine

The concourse-bosh-deployment repository has a sample BOSH manifest, versions.yml file, and a selection of deployment-modifying operations files. Using these sample files makes it much faster and easier to get started.

  1. Clone the concourse-bosh-deployment repo by running the following snippet on the command line:

    git clone https://github.com/concourse/concourse-bosh-deployment.git
    
  2. Move to the concourse-bosh-deployment directory:

    cd concourse-bosh-deployment
    

    All the paths used in this tutorial are relative to this directory.

  3. Check out the release that corresponds to the version of Concourse you want to install. For example, if you're installing the latest release, Concourse v6.3.0:

    git checkout v6.3.0
    

    For a list of all Concourse releases, see concourse-bosh-deployment in GitHub.

    Success

    Checking out a release rather than a branch means that git produces the following output:

    Note: checking out 'RELEASE'.
    
    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.
    
    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:
    
        git checkout -b <new-branch-name>
    
    HEAD is now at HASH... COMMIT-MESSAGE
    

Create The External Worker

Create The Manifest & Populate Variables

Using the concourse-bosh-deployment repository we can quickly deploy a worker using the cluster/external-worker.yml manifest.

If your worker is going to be a team worker, jump down to the Team Workers section. Otherwise follow the steps in the Tagged/Untagged Workers section.

Tagged/Untagged Workers

  1. Store the Web Public Key and the Worker Private Key in a secrets.yml file or BOSH's credential manager. If you store it in a secrets.yml file, create the file with the following YAML content:

    tsa_host_key:
      public_key: <WEB_PUBLIC_KEY>
    
    worker_key:
      private_key: <WORKER_PRIVATE_KEY>
    

    Where:

    • WEB_PUBLIC_KEY is the Web Public Key from the Concourse Web VM
    • WORKER_PRIVATE_KEY is the Worker Private Key previously generated
  2. Create a vars file called worker-vars.yml with the following content:

    deployment_name: DEPLOYMENT-NAME
    external_worker_network_name: NETWORK-NAME
    worker_vm_type: VM-TYPE
    instances: INSTANCES
    azs: [AZS]
    tsa_host: WEB-HOSTNAME
    worker_tags: [WORKER-TAGS]
    

    Where:

    • DEPLOYMENT-NAME is the name of your choice for your Concourse deployment
    • NETWORK-NAME is the name of one of the networks defined in your cloud-config.yml
    • VM-TYPE is the name of one of the VM types in your cloud-config.yml file
    • INSTANCES is the number of external workers you want
    • AZS is a list of availability zones, defined in cloud-config.yml, in which to place the external worker(s)
    • WEB-HOSTNAME is the domain or IP address that the external worker can use to connect to the Concourse web VMs
    • WORKER-TAGS is an array of zero or more tags to apply to the external worker
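
As an illustration, a filled-in secrets.yml and worker-vars.yml pair might look like the following. Every value is made up (substitute names from your own cloud config), and note that the multi-line private key needs a YAML block scalar (the |):

```yaml
# secrets.yml (illustrative values)
tsa_host_key:
  public_key: ssh-rsa AAAAB3NzaC1yc2E...

worker_key:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----

# worker-vars.yml (illustrative values)
deployment_name: concourse-external-worker
external_worker_network_name: private
worker_vm_type: default
instances: 2
azs: [z1]
tsa_host: ci.example.com
worker_tags: [staging]
```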

Move on to the next section, Deploy The Worker.

Team Workers

  1. Store the Web Public Key and the Worker Private Key in a secrets.yml file or BOSH's credential manager. If you store it in a secrets.yml file, create the file with the following YAML content:

    tsa_host_key:
      public_key: <WEB_PUBLIC_KEY>
    
    worker_key:
      private_key: <WORKER_PRIVATE_KEY>
    

    Where:

    • WEB_PUBLIC_KEY is the Web Public Key from the Concourse Web VM
    • WORKER_PRIVATE_KEY is the Worker Private Key previously generated
  2. Create a vars file called worker-vars.yml with the following content:

    deployment_name: DEPLOYMENT-NAME
    external_worker_network_name: NETWORK-NAME
    worker_vm_type: VM-TYPE
    instances: INSTANCES
    azs: [AZS]
    tsa_host: WEB-HOSTNAME
    worker_tags: [WORKER-TAGS]
    team_name: TEAM_NAME
    

    Where:

    • DEPLOYMENT-NAME is the name of your choice for your Concourse deployment
    • NETWORK-NAME is the name of one of the networks defined in your cloud-config.yml
    • VM-TYPE is the name of one of the VM types in your cloud-config.yml file
    • INSTANCES is the number of external workers you want
    • AZS is a list of availability zones, defined in cloud-config.yml, in which to place the external worker(s)
    • WEB-HOSTNAME is the domain or IP address that the external worker can use to connect to the Concourse web VMs
    • WORKER-TAGS is an array of zero or more tags to apply to the external worker
    • TEAM_NAME is the name of the team this worker will be assigned to; it should match one of the team names listed by fly teams

Deploy The Worker

If you're using BOSH variables to pass in the Worker Private Key and Web Public Key, run one of the following BOSH deploy commands:

# Tagged/Untagged worker
bosh -e $BOSH_ENVIRONMENT deploy -d DEPLOYMENT-NAME external-worker.yml \
  -l ../versions.yml \
  -l worker-vars.yml \
  -l secrets.yml

# Team worker
bosh -e $BOSH_ENVIRONMENT deploy -d DEPLOYMENT-NAME external-worker.yml \
  -l ../versions.yml \
  -l worker-vars.yml \
  -o ./operations/worker-team-name.yml \
  -l secrets.yml

If you're using CredHub to pass in the Worker Private Key and Web Public Key, your deploy command will look like this:

# Tagged/Untagged worker
bosh -e $BOSH_ENVIRONMENT deploy -d DEPLOYMENT-NAME external-worker.yml \
  -l ../versions.yml \
  -l worker-vars.yml

# Team worker
bosh -e $BOSH_ENVIRONMENT deploy -d DEPLOYMENT-NAME external-worker.yml \
  -l ../versions.yml \
  -o ./operations/worker-team-name.yml \
  -l worker-vars.yml

Where:

  • DEPLOYMENT-NAME is the name of your choice for your Concourse deployment

Make sure all your file paths are correct. For more information, see Deploying in the BOSH documentation.

Different Concourse deployments require different environment variables and operations files. If you get an error, check the error message for clues about additional variables that need to be set. Check out the open-source Concourse documentation for additional information.

Upon a successful deploy, BOSH displays a success message and the Concourse external worker has been created.


Verify The External Worker Connected To The Concourse Cluster

  1. If you tagged your workers, you can run fly -t TARGET workers and check for your tagged worker.

    $ fly -t TARGET workers
    name                                 containers  platform  tags  team  state    version  age
    00411b6d-dddf-46f0-a57f-b1ce56c2420e 19          linux     TAG   none  running  2.2      1h
    2108cbfe-0970-46be-884e-c288f0122674 20          linux     none  none  running  2.2      1d
    36a53431-a5e2-44c9-a60a-48b55b28cf5b 21          linux     none  none  running  2.2      1d
    40e8b592-41ec-476a-add8-246b6d0545ea 17          linux     none  none  running  2.2      1d
    
  2. If you didn't tag your worker, you can verify that it connected to the cluster by checking for the name of the VM in the output of fly workers.

  3. Find the name of the external worker VM using bosh vms.

    $ bosh -e $BOSH_ENVIRONMENT -d DEPLOYMENT-NAME vms
    Deployment 'DEPLOYMENT-NAME'
    
    Instance                                    Process State  AZ  IPs         VM CID                                   VM Type   Active
    worker/00411b6d-4de7-4bd3-b1e2-eb04d9308ab8     running        z1  10.1.0.88   vm-45d59267-cb76-423a-5e97-07d68e4a7013  database  true
    

    Take note of the first part of the VM's GUID; for the above output that would be 00411b6d. Then search the output of fly workers for a worker whose name starts with that prefix.

    $ fly -t TARGET workers
    name                                 containers  platform  tags  team  state    version  age
    00411b6d-dddf-46f0-a57f-b1ce56c2420e 19          linux     none  none  running  2.2      1h
    2108cbfe-0970-46be-884e-c288f0122674 20          linux     none  none  running  2.2      1d
    36a53431-a5e2-44c9-a60a-48b55b28cf5b 21          linux     none  none  running  2.2      1d
    40e8b592-41ec-476a-add8-246b6d0545ea 17          linux     none  none  running  2.2      1d
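
The prefix matching in the last two steps can be scripted. The sketch below uses canned output taken from the samples above; in practice you would substitute real bosh vms and fly workers invocations:

```shell
# Instance name as reported by `bosh vms` (canned sample value):
bosh_instance='worker/00411b6d-4de7-4bd3-b1e2-eb04d9308ab8'

# Take the GUID after the slash, then its first dash-separated segment.
guid_prefix=$(printf '%s\n' "$bosh_instance" | cut -d/ -f2 | cut -d- -f1)

# Worker names as reported by `fly workers` (canned sample, one per line):
fly_workers='00411b6d-dddf-46f0-a57f-b1ce56c2420e
2108cbfe-0970-46be-884e-c288f0122674'

# A name sharing the prefix means the external worker has registered.
printf '%s\n' "$fly_workers" | grep -q "^$guid_prefix" \
  && echo "worker $guid_prefix is registered"
```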