Install Concourse with Helm

Concourse for VMware Tanzu has traditionally been distributed as a BOSH Release aimed at allowing an operator to deploy Concourse directly from a BOSH director to virtual machines (VMs). Concourse now also provides a Helm Chart release, which instead targets Kubernetes clusters using the templating and release management of Helm.

Introduction

Helm is the package manager for Kubernetes, a tool that streamlines installing and managing Kubernetes applications. It renders Kubernetes objects that can be submitted to a Kubernetes cluster and materialized into a Concourse deployment using Kubernetes constructs (Deployments, StatefulSets, PersistentVolumeClaims, and so on).

A Helm Chart is a versioned package of pre-configured Kubernetes resources. Deploying Concourse via Helm Chart makes it less complicated to deploy, scale, maintain, or upgrade your deployment in the future. This guide aims to walk an operator through the step by step process of deploying with Helm.

If you have not read the Prerequisites and Background Information page, please do so before continuing. It contains important information about required tools and settings to make this process work with VMware Tanzu Kubernetes Grid Integrated Edition (TKGi).

Privileged containers

Because Concourse manages its own Linux containers, the worker processes must have superuser privileges, and your cluster must be configured to allow this.

The presence of privileged pods can be a security concern for a Kubernetes cluster. VMware recommends running a Helm-installed Concourse in its own dedicated cluster to prevent the worker pods from interfering with other Kubernetes workloads.

Managing Linux containers without superuser privileges is a subject of active discussion in the Kubernetes community. Research on this topic is scheduled on the Concourse roadmap, so it is possible this requirement may be dropped in a future release.

Cluster Creation Guide

If you have not already created your cluster, you can begin the process now so that it completes in the background while you proceed with the other steps below.

While the process of creating a cluster can vary depending on your needs, with TKGi you can get started by running the following commands:

tkgi login -k -u USERNAME -p PASSWORD -a API-HOSTNAME
tkgi create-cluster CLUSTER-NAME -e CLUSTER-DOMAIN-NAME -p PLAN-NAME

Where:

  • USERNAME is your TKGi username
  • PASSWORD is your TKGi password
  • API-HOSTNAME is your TKGi API host name
  • CLUSTER-NAME is a name you choose for your cluster
  • CLUSTER-DOMAIN-NAME is a domain name you choose for your cluster
  • PLAN-NAME is one of the plans returned if you run tkgi plans

For more information, see the Creating Clusters guide for the version of TKGi that you are working with.
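
Cluster creation runs in the background and can take some time. As a minimal sketch (assuming your version of the tkgi CLI provides the cluster status subcommand), you can check on progress with:

tkgi cluster CLUSTER-NAME

Once the output shows that the last action (CREATE) has succeeded, the cluster is ready for the steps below.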


Prerequisites for Deploying on Kubernetes with Helm

The Pivotal Concourse team has tested deploying with Helm using the prerequisites described on the Prerequisites and Background Information page.

Enabling privileged containers on TKGi

To enable privileged containers on TKGi, you must change the plan used by the cluster.

In Ops Manager, go to the plan configuration and select the Allow Privileged checkbox near the end of the form.

You can verify that it worked by inspecting the pod security policy, which should indicate that privileged mode is enabled:

$ kubectl describe psp pks-privileged
Name:         pks-privileged
...
Spec:
  Allow Privilege Escalation:  true

Download, Tag, and Push Images to Internal Registry

Download Concourse Helm Chart and load images into Docker

  1. If you have not already done so, visit VMware Tanzu Network and download the Concourse Helm Chart.

  2. Unarchive the Helm Chart tarball to a local directory. For example, with version v6.3.0, the tarball will be called concourse-6.3.0.tgz.

    mkdir concourse-helm
    
    tar xvzf ./concourse-6.3.0.tgz -C ./concourse-helm
    
    cd ./concourse-helm
    
  3. Load the container images into a local Docker client by running the following docker load commands one at a time:

    docker load -i ./images/concourse.tar
    docker load -i ./images/postgres.tar
    docker load -i ./images/helm.tar
    

    These images are quite large, and there will be no output until Docker is done loading.

    Success

    Once the loading finishes, you'll see:

    Loaded image: IMAGE-NAME
    

Tag and push the loaded images to internal registry

Registry Authentication

This step assumes that the current docker client has already authenticated against the internal registry through a regular docker login.

In addition to logging in, if you are using a registry with self-signed certificates, make sure your registry has been added to the Insecure Registries section of the Daemon tab in the Docker settings UI on your workstation.

For more information about certificates and secure registry concerns, see this article: Test an insecure registry
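
As a minimal sketch, a typical login against an internal registry looks like the following, where INTERNAL-REGISTRY is the same registry domain used in the steps below and you are prompted for the registry username and password:

docker login INTERNAL-REGISTRY

For Docker Hub, run docker login with no registry argument.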

  1. Begin by exporting variables to your shell to be reused throughout this process. If you are pushing to an internal registry, run the following commands in your terminal:

    export INTERNAL_REGISTRY=INTERNAL-REGISTRY
    export PROJECT=PROJECT-NAME
    

    Where:

    • INTERNAL-REGISTRY is the domain of your internal registry; if you are using Harbor, this must correspond to the URL (without the scheme)
    • PROJECT-NAME is the name of the project in your registry. If the project does not already exist, you must create it.
    If you are pushing to Docker Hub instead, export your Docker Hub username:

    export USERNAME=DOCKERHUB-USERNAME
    

    Where:

    • DOCKERHUB-USERNAME is your Docker Hub (hub.docker.com) username
  2. The Helm Chart tarball you extracted contains a directory called images. You need to extract the tag of each image so that you can tag the images with the registry and project details from the previous step.

    To do this, run the following commands:

    export CONCOURSE_IMAGE_TAG=$(cat ./images/concourse.tar.name | cut -d ':' -f 2)
    export POSTGRES_IMAGE_TAG=$(cat ./images/postgres.tar.name | cut -d ':' -f 2)
    export HELM_IMAGE_TAG=$(cat ./images/helm.tar.name | cut -d ':' -f 2)
    
  3. Tag the images so their names include your registry details. If you are pushing to an internal registry:

    docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
    docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
    docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
    
    If you are pushing to Docker Hub instead, tag the images with your Docker Hub username:

    docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
    docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $USERNAME/postgres:$POSTGRES_IMAGE_TAG
    docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $USERNAME/helm:$HELM_IMAGE_TAG
    
  4. Push the images to your registry by running the following commands in your terminal. For an internal registry:

    docker push $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
    docker push $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
    docker push $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
    
    If you are pushing to Docker Hub, push with your Docker Hub username instead:

    docker push $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
    docker push $USERNAME/postgres:$POSTGRES_IMAGE_TAG
    docker push $USERNAME/helm:$HELM_IMAGE_TAG
    

    You must have the necessary credentials (and authorization) to push to the targeted project.


Prepare the Kubernetes Cluster

  1. Log in to TKGi and get the cluster credentials:

    tkgi login -k -u USER -p PASSWORD -a API-HOSTNAME
    
    tkgi get-credentials CLUSTER-NAME
    
    kubectl config use-context CLUSTER-NAME
    

    Next, you need to create a default StorageClass if one does not already exist.

    Tip

    You can check if you already have a StorageClass by running kubectl get sc.

  2. Create a file called storage-class.yml. For example, with vim, run:

    vim storage-class.yml
    
  3. Populate the file with the following:

    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: concourse-storage-class
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      datastore: DATASTORE-FROM-VSPHERE
    

    Where:

    • DATASTORE-FROM-VSPHERE is a valid vSphere datastore
  4. Use the following kubectl command to create the storage class on your cluster:

    kubectl create -f storage-class.yml
    

Success

This command should return a response like:

storageclass.storage.k8s.io/concourse-storage-class created
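
To confirm that the new class was picked up as the default, you can list the storage classes again; as a sketch:

kubectl get sc

The concourse-storage-class entry should appear in the output and, with the annotation above, is typically marked (default).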

Configure Tiller and Pull Container Images

Helm is composed of both a client-side CLI (helm) and a server-side component (tiller), and is included in the tarball you downloaded under the ./helm directory. This section explains how to upload the Tiller container image to a private or public registry and configure Tiller appropriately.

This process differs slightly for private and public registries. Follow either Configure Tiller for Private Registry or Configure Tiller for Public Registry below, depending on your deployment and registry strategy.

After configuring Tiller for your registry, continue to Install Tiller using Helm. The instructions there are the same whether the registry is private or public.

Configure Tiller for Private Registry

  1. If the registry requires authentication, you must first create a service account configured with credentials to access that registry. Create a file called tiller-config.yml. For example, from your command line with vim, run the following command:

    vim tiller-config.yml
    
  2. Copy and paste the following snippet into the file.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    imagePullSecrets:
      - name: "regcred"
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    

    This configuration defines a service account for Tiller that pulls images using registry credentials (defined in the next step).

  3. Save and close tiller-config.yml.

  4. You must generate a secret in both the kube-system namespace (for Tiller) and the default namespace (for Concourse). In the commands below, $USERNAME and $password must be set to credentials for the registry you are using. If you are using an internal registry, run:

    kubectl create secret docker-registry regcred \
        --docker-server=$INTERNAL_REGISTRY \
        --docker-username=$USERNAME \
        --docker-password=$password \
        --namespace=kube-system
    
    kubectl create secret docker-registry regcred \
        --docker-server=$INTERNAL_REGISTRY \
        --docker-username=$USERNAME \
        --docker-password=$password \
        --namespace=default
    
    If you are using Docker Hub, create the secrets with docker.io as the server instead:

    kubectl create secret docker-registry regcred \
        --docker-server=docker.io \
        --docker-username=$USERNAME \
        --docker-password=$password \
        --namespace=kube-system
    
    kubectl create secret docker-registry regcred \
        --docker-server=docker.io \
        --docker-username=$USERNAME \
        --docker-password=$password \
        --namespace=default
    

    These secrets contain the credentials used to pull images from the registry to which you pushed images previously.
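
    As a quick sanity check, you can confirm that both secrets exist; a sketch using standard kubectl commands:

    kubectl get secret regcred --namespace=kube-system
    kubectl get secret regcred --namespace=default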

Configure Tiller for Public Registry

Alternatively, if your registry is public and requires no authentication, perform the following steps instead:

  1. Create a file called tiller-config.yml. For example, from your command line with vim, run the following command:

    vim tiller-config.yml
    
  2. Copy and paste the following snippet into the file.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    

    This configuration defines a service account for Tiller that pulls images without any registry credentials.

  3. Save and close tiller-config.yml.

Install Tiller using Helm

After creating tiller-config.yml, perform the following steps:

  1. Log in to TKGi and get the cluster credentials:

    tkgi login -k -u USER -p PASSWORD -a API-HOSTNAME
    
    tkgi get-credentials CLUSTER-NAME
    
    kubectl config use-context CLUSTER-NAME
    

  2. Run kubectl to apply tiller-config.yml:

    kubectl create -f tiller-config.yml
    

    Success

    When successful, you should see the following response:

    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
    
  3. Install Tiller using helm. If you are using an internal registry, run:

    helm init \
        --tiller-image $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG \
        --service-account tiller
    

    Where:

    • INTERNAL_REGISTRY is the name of your internal registry
    • PROJECT is the name of the project containing your images
    If you are using Docker Hub, run instead:

    helm init \
        --tiller-image $USERNAME/helm:$HELM_IMAGE_TAG \
        --service-account tiller
    

    Where:

    • USERNAME is your Docker Hub (hub.docker.com) username

Success

Verify that Tiller is running by executing helm version. If everything is running properly, this command displays the Client and Server versions:

Client: &version.Version{SemVer:"v2.16.7", GitCommit:"5f2584fd3d35552c4af26036f0c464191287986b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.7", GitCommit:"5f2584fd3d35552c4af26036f0c464191287986b", GitTreeState:"clean"}
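
If helm version hangs or reports only the Client version, you can check the Tiller pod directly. A sketch, assuming Tiller's usual name=tiller label in the kube-system namespace:

kubectl get pods --namespace kube-system -l name=tiller

The tiller-deploy pod should be Running before you continue.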

Installing Concourse via Helm Chart

  1. Create a deployment configuration file named deployment-values.yml. For example, with vim, run:

    vim deployment-values.yml
    

    Insert the following snippet. If you are using an internal registry:

    ---
    image: INTERNAL_REGISTRY/PROJECT/concourse
    imageTag: CONCOURSE_IMAGE_TAG
    imagePullSecrets: ["regcred"] # Remove if registry is public
    postgresql:
      image:
        registry: INTERNAL_REGISTRY
        repository: PROJECT/postgres
        tag: POSTGRES_IMAGE_TAG
        pullSecrets: ["regcred"] # Remove if registry is public
    

    Where:

    • INTERNAL_REGISTRY/PROJECT is your registry address and project.

    • CONCOURSE_IMAGE_TAG is the output of

      cat ./images/concourse.tar.name | cut -d ':' -f 2
      

    • POSTGRES_IMAGE_TAG is the output of

      cat ./images/postgres.tar.name | cut -d ':' -f 2
      

    If you are using Docker Hub, use the following snippet instead:

    ---
    image: USERNAME/concourse
    imageTag: CONCOURSE_IMAGE_TAG
    imagePullSecrets: ["regcred"] # Remove if registry is public
    postgresql:
      image:
        registry: docker.io
        repository: USERNAME/postgres
        tag: POSTGRES_IMAGE_TAG
        pullSecrets: ["regcred"] # Remove if registry is public
    

    Where:

    • USERNAME is your Docker Hub username.

    • CONCOURSE_IMAGE_TAG is the output of

      cat ./images/concourse.tar.name | cut -d ':' -f 2
      

    • POSTGRES_IMAGE_TAG is the output of

      cat ./images/postgres.tar.name | cut -d ':' -f 2
      

  2. Save and close deployment-values.yml.

  3. Deploy Concourse with helm:

    helm install \
        --name DEPLOYMENT-NAME \
        --values ./deployment-values.yml \
        ./charts
    

    Where:

    • DEPLOYMENT-NAME is the name of your choosing for your Concourse Deployment

    Successful Deployment

    If the helm install command is successful, you will see the following response followed by more information about your cluster:

    NAME: DEPLOYMENT-NAME
    LAST DEPLOYED: DEPLOYMENT-DATE
    NAMESPACE: default
    STATUS: DEPLOYED
    ...
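
    To watch the deployment come up, a sketch using standard helm and kubectl commands (pod names are derived from DEPLOYMENT-NAME):

    helm status DEPLOYMENT-NAME
    kubectl get pods --namespace default -w

    Once the web, worker, and postgresql pods report Running, Concourse is up.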
    

Recommendations

Aside from the typical recommendations for any Concourse installation (see Running a web node and Running a worker node), Pivotal recommends a few tweaks specific to running Concourse on Kubernetes, given the peculiarities of that platform.

For Concourse's workers:

  • Give each Concourse worker pod an entire node to itself
    • Each worker is responsible for its own set of Garden containers, so running multiple Concourse workers on the same Kubernetes node would lead to too many Linux containers on one machine and degrade overall performance
    • This can be achieved in Kubernetes with a combination of affinity rules (by configuring the worker.affinity key and/or the worker.hardAntiAffinity helper key in values.yaml) and taints; see the values.yaml sketch at the end of this section

For Concourse's web instances:

  • Have Concourse web pods scheduled on different nodes
    • Concourse web instances at times have to stream volumes between workers, which can require a lot of dedicated bandwidth depending on the workloads
    • This can be achieved with Kubernetes affinity rules, configurable through the web.affinity key in values.yaml, as shown in the sketch below
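
As a minimal values.yaml sketch of both recommendations, assuming the worker.hardAntiAffinity and web.affinity keys mentioned above (the web pod label shown is an assumption and may differ in your chart version):

---
worker:
  # Schedule at most one worker pod per Kubernetes node
  hardAntiAffinity: true
web:
  affinity:
    podAntiAffinity:
      # Prefer spreading web pods across nodes; the label selector below is an
      # assumption about how the chart labels web pods and may need adjusting
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: DEPLOYMENT-NAME-web
            topologyKey: kubernetes.io/hostname

You can merge these keys into deployment-values.yml or pass them to helm as an additional --values file.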