Install Concourse with Helm

Concourse for VMware Tanzu has traditionally been distributed as a BOSH Release aimed at allowing an operator to deploy Concourse directly from a BOSH director to virtual machines (VMs). Concourse now also provides a Helm Chart release, which instead targets Kubernetes clusters using the templating and release management of Helm.

Introduction

Helm is the package manager for Kubernetes, a tool that streamlines installing and managing Kubernetes applications. Helm renders Kubernetes objects that can be submitted to a Kubernetes cluster and materialized into a Concourse deployment using Kubernetes constructs (Deployments, StatefulSets, PersistentVolumeClaims, and so on).

A Helm Chart is a versioned package of pre-configured Kubernetes resources. Deploying Concourse via Helm Chart makes it simpler to deploy, scale, maintain, or upgrade your deployment in the future. This guide walks an operator through the step-by-step process of deploying with Helm.

If you have not read the Prerequisites and Background Information page, please do so before continuing. It contains important information about required tools and settings to make this process work with Pivotal Container Service (PKS).

Privileged containers

Because Concourse manages its own Linux containers, the worker processes must have superuser privileges, and your cluster must be configured to allow this.

The presence of privileged pods can be a security concern for a Kubernetes cluster. VMware's recommendation is to run a Helm-installed Concourse in its own dedicated cluster to prevent the worker pods from interfering with other Kubernetes workloads.

Managing Linux containers without superuser privileges is a subject of active discussion in the Kubernetes community. Research on this topic is scheduled on the Concourse roadmap, so it is possible this requirement may be dropped in a future release.

Cluster Creation Guide

If you have not already created your cluster, you can begin the process now so that it completes in the background while you proceed with the other steps below.

While the process of creating a cluster can vary depending on your needs, with PKS you can get started by running the following commands:

pks login -k -u USERNAME -p PASSWORD -a API-HOSTNAME

pks create-cluster CLUSTER-NAME -e CLUSTER-DOMAIN-NAME -p PLAN-NAME

Where:

  • USERNAME is your PKS username
  • PASSWORD is your PKS password
  • API-HOSTNAME is your PKS API host name
  • CLUSTER-NAME is a name you choose for your cluster
  • CLUSTER-DOMAIN-NAME is a domain name you choose for your cluster
  • PLAN-NAME is one of the plans returned when you run pks plans

See the Creating Clusters guide for the version of PKS that you're working with for more information.
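
Cluster creation can take some time. You can check on its progress at any point with the PKS CLI; a minimal example, reusing the CLUSTER-NAME placeholder from above:

pks cluster CLUSTER-NAME

Once the last-action state in the output reports that provisioning succeeded, the cluster is ready to use.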


Prerequisites for Deploying on Kubernetes with Helm

The Pivotal Concourse team has tested deploying with Helm using the prerequisites listed on the Prerequisites and Background Information page.

Enabling privileged containers on PKS

To enable privileged containers on PKS, you must update the plan used by your cluster.

In Ops Manager, go to the plan configuration and select the Allow Privileged checkbox near the end of the page.

You can verify that it worked by inspecting the pod security policy, which should show that privileged mode is allowed:

$ kubectl describe psp pks-privileged
Name:  pks-privileged

Settings:
  Allow Privileged:        true

Download, Tag, and Push Images to Internal Registry

Download Concourse Helm Chart and load images into Docker

  1. If you have not already done so, visit VMware Tanzu Network and download the Concourse Helm Chart.

  2. Unarchive the Helm Chart tarball to a local directory. For example, with version v5.5.8, the tarball will be called concourse-5.5.8-helm.tgz.

    mkdir concourse-helm

    tar xvzf ./concourse-5.5.8-helm.tgz -C ./concourse-helm

    cd ./concourse-helm
    

  3. Load the container images into a local Docker client by running the following docker load commands one at a time:

    docker load -i ./images/concourse.tar

    docker load -i ./images/postgres.tar

    docker load -i ./images/helm.tar
    

    These images are quite large, and there will be no output until Docker is done loading.

    Success

    Once the loading finishes, you'll see:

    Loaded image: IMAGE-NAME
    

Tag and push the loaded images to internal registry

Registry Authentication

This step assumes that the current docker client has already authenticated against the internal registry through a regular docker login.

In addition to logging in, if you are using a registry with self-signed certificates, make sure the registry has been added to the Insecure Registries section of the Daemon tab in the Docker settings UI on your workstation.

For more information about certificates and secure registry concerns, see this article: Test an insecure registry
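
If you manage the Docker daemon configuration directly instead of through the settings UI, the same setting lives in the daemon's daemon.json file. A minimal sketch, using registry.example.com as a placeholder for your registry's hostname (restart the Docker daemon after editing it):

{
  "insecure-registries": ["registry.example.com"]
}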

  1. Begin by exporting a pair of variables to your shell to be reused throughout this process. In your terminal, run the following two commands:

    export INTERNAL_REGISTRY=INTERNAL-REGISTRY

    export PROJECT=PROJECT-NAME
    

    Where:

    • INTERNAL-REGISTRY is the domain of your internal registry. If you are using Harbor, this must correspond to the URL without the scheme.
    • PROJECT-NAME is the name of the project in your registry. If the project does not already exist, you must create it.
  2. The Helm Chart tarball you extracted contains a directory called images. You need to extract the tag of each image so that you can tag the images with the internal registry and project details from the previous step.

    To do this, run the following command for each of the images in the /images/ directory:

    cat ./images/IMAGE.tar.name | cut -d ':' -f 2
    

    Where:

    • IMAGE is one of the *.tar.name files inside the images folder

    Take note of the result for each image; this is the corresponding IMAGE-TAG to use with that image in the next steps. If you would rather script the tagging and pushing, see the sketch after this procedure.

  3. Tag the images so their names include the internal registry address:

    If you are using an internal registry:

    docker tag concourse/concourse:IMAGE-TAG $INTERNAL_REGISTRY/$PROJECT/concourse:IMAGE-TAG
    docker tag concourse/postgres:IMAGE-TAG $INTERNAL_REGISTRY/$PROJECT/postgres:IMAGE-TAG
    docker tag concourse/helm:IMAGE-TAG $INTERNAL_REGISTRY/$PROJECT/helm:IMAGE-TAG

    Or, if you are using Docker Hub:

    docker tag concourse/concourse:IMAGE-TAG USERNAME/concourse:IMAGE-TAG
    docker tag concourse/postgres:IMAGE-TAG USERNAME/postgres:IMAGE-TAG
    docker tag concourse/helm:IMAGE-TAG USERNAME/helm:IMAGE-TAG

    Where:

    • IMAGE-TAG is the corresponding result from step 2.
    • (in the Docker Hub example) USERNAME is your Docker Hub username
  4. Push the images to the internal registry by running the following commands in your terminal:

    To an internal registry:

    docker push $INTERNAL_REGISTRY/$PROJECT/concourse:IMAGE-TAG
    docker push $INTERNAL_REGISTRY/$PROJECT/postgres:IMAGE-TAG
    docker push $INTERNAL_REGISTRY/$PROJECT/helm:IMAGE-TAG

    Or, to Docker Hub:

    docker push USERNAME/concourse:IMAGE-TAG
    docker push USERNAME/postgres:IMAGE-TAG
    docker push USERNAME/helm:IMAGE-TAG
    

    Where:

    • IMAGE-TAG is the corresponding result from step 2.
    • (in the Docker Hub example) USERNAME is your Docker Hub username

    You must have the necessary credentials (and authorization) to push to the targeted project.
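
If you prefer to script steps 2 through 4 for an internal registry rather than running them one image at a time, the following shell loop is a minimal sketch of the same commands. It assumes the INTERNAL_REGISTRY and PROJECT variables exported in step 1 and the three *.tar.name files shipped in the ./images directory:

for name_file in ./images/*.tar.name; do
    # Short image name (concourse, postgres, or helm), taken from the file name
    image=$(basename "$name_file" .tar.name)
    # Image tag, extracted the same way as in step 2
    tag=$(cut -d ':' -f 2 "$name_file")
    docker tag "concourse/${image}:${tag}" "$INTERNAL_REGISTRY/$PROJECT/${image}:${tag}"
    docker push "$INTERNAL_REGISTRY/$PROJECT/${image}:${tag}"
done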


Prepare the Kubernetes Cluster

  1. Create a file called tiller-config.yml. For example, from your command line with Vim, run the following command:

    vim tiller-config.yml
    
  2. Copy and paste the following snippet into the file.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    imagePullSecrets:
      - name: "regcred"
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    

    This configuration creates a service account for Tiller and binds it to the cluster-admin cluster role.

  3. Save and close tiller-config.yml.

  4. Log in to PKS and get the cluster credentials:

    pks login -k -u USER -p PASSWORD -a API-HOSTNAME

    pks get-credentials CLUSTER-NAME

    kubectl config use-context CLUSTER-NAME
    

  5. Run kubectl to apply tiller-config.yml:

    kubectl create -f tiller-config.yml
    

    Success

    When successful, you should see the following response:

    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
    

    Next, you need to create a default StorageClass.

    Tip

    You can check if you already have a StorageClass by running kubectl get sc.

  6. Create a file called storage-class.yml. For example, with Vim, run:

    vim storage-class.yml
    
  7. Populate the file with the following:

    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: concourse-storage-class
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      datastore: DATASTORE-FROM-VSPHERE
    

    Where:

    • DATASTORE-FROM-VSPHERE is a valid vSphere datastore
  8. Use the following kubectl command to create the storage class on your cluster:

    kubectl create -f storage-class.yml
    

Success

This command should return a response like:

storageclass.storage.k8s.io/concourse-storage-class created
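
To confirm that the new class is registered and marked as the default, list the storage classes again. The exact columns vary by kubectl version, but the output should look roughly like this:

$ kubectl get sc
NAME                                PROVISIONER                    AGE
concourse-storage-class (default)   kubernetes.io/vsphere-volume   10s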

Pull Container Images and Configure Tiller

Helm is composed of both a client-side CLI (helm) and a server-side component (Tiller). This section explains how to install and configure Tiller so that its container image is pulled from your private or public registry.

This process differs slightly for private and public registries. Follow whichever of the two subsections below matches your deployment and registry strategy:

Pull Images from Private Registry

  1. If the internal registry requires authentication, you must generate a secret.

    kubectl create secret docker-registry regcred \
        --docker-server=$INTERNAL_REGISTRY \
        --docker-username=$username \
        --docker-password=$password \
        --namespace=kube-system
    

    This secret contains the credentials that can be used to fetch images from the internal registry that you pushed images to previously.

  2. Modify the default Tiller deployment to include imagePullSecrets so that Kubernetes nodes can fetch Tiller from a private registry that requires authentication.

    1. Generate Tiller's deployment manifest and save it to tiller.yml:

      helm init --tiller-image $INTERNAL_REGISTRY/$PROJECT/helm:IMAGE-TAG --service-account tiller --dry-run --debug > tiller.yml
      
    2. In the section with kind: Deployment, find the pod spec at spec.template.spec and add the imagePullSecrets key under it, as in this abbreviated excerpt:

      ---
      apiVersion: extensions/v1beta1
      kind: Deployment
      spec:
        template:
          spec:
            imagePullSecrets:
              - name: "regcred"
            automountServiceAccountToken: true
      
    3. Submit the deployment configuration to Kubernetes.

      kubectl apply -f ./tiller.yml
      

Success

Verify that Tiller is running by executing helm version. If everything is running properly, this command displays the Client and Server versions:

Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Pull Images from Public Registry

Alternatively, if your internal registry is public and requires no authentication, perform the following steps instead:

  1. Install tiller using Helm:

    helm init \
    --tiller-image $INTERNAL_REGISTRY/$PROJECT/helm:IMAGE-TAG \
    --service-account tiller
    

    Where:

    • INTERNAL_REGISTRY is the name of your internal registry
    • PROJECT is the name of the project containing your images
    • IMAGE-TAG is the result of cat ./images/helm.tar.name | cut -d ':' -f 2

    Success

    To verify that Tiller is installed correctly, run the following command from your terminal:

    kubectl get pods -n kube-system
    

    In the response, look for a pod NAME starting with tiller-deploy with a READY state of 1/1:

    NAME                                    READY   STATUS    RESTARTS   AGE
    ...
    tiller-deploy-796d466d66-cmqhc          1/1     Running   0          16s
    
  2. Create a docker-registry secret (private registry only)

    If the registry that holds the Concourse and PostgreSQL images requires authentication, create a secret with its credentials in the default namespace, where Concourse will be deployed:

    kubectl create secret docker-registry regcred \
        --docker-server=$INTERNAL_REGISTRY \
        --docker-username=$username \
        --docker-password=$password \
        --namespace=default
    

Installing Concourse via Helm Chart

  1. Create a deployment configuration file named deployment-values.yml

    vim deployment-values.yml
    

    Insert the following snippet:

    ---
    image: INTERNAL_REGISTRY/PROJECT/concourse
    imageTag: CONCOURSE-IMAGE-TAG
    imagePullSecrets: ["regcred"] #remove this line for public image registry
    postgresql:
      image:
        registry: INTERNAL_REGISTRY
        repository: PROJECT/postgres
        tag: POSTGRES-IMAGE-TAG
        pullSecrets: ["regcred"] # remove for public image registry
    

    Where:

    • CONCOURSE-IMAGE-TAG is the output of

      cat ./images/concourse.tar.name | cut -d ':' -f 2
      
    • INTERNAL_REGISTRY/PROJECT is your registry address and project, matching the values used when pushing the images.

    • imagePullSecrets is the name of the docker-registry pull secret (regcred) created earlier in the default namespace. Remove this line if you are using a public registry.
    • POSTGRES-IMAGE-TAG is the output of

      cat ./images/postgres.tar.name | cut -d ':' -f 2
      
    • pullSecrets is the name of the docker-registry pull secret (regcred) created earlier in the default namespace. Remove this line if you are using a public registry.

  2. Save and close deployment-values.yml

  3. Deploy with Helm:

    helm install \
    --name DEPLOYMENT-NAME \
    --values ./deployment-values.yml \
    ./charts
    

    Where:

    • DEPLOYMENT-NAME is the name of your choosing for your Concourse Deployment

    Successful Deployment

    If the helm install command is successful, you will see the following response followed by more information about your cluster:

    NAME: DEPLOYMENT-NAME
    LAST DEPLOYED: DEPLOYMENT-DATE
    NAMESPACE: default
    STATUS: DEPLOYED
    ...
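
As a quick post-install check, you can watch the Concourse pods come up and locate the web endpoint. The resource names are typically derived from your deployment name (for example DEPLOYMENT-NAME-web and DEPLOYMENT-NAME-worker-0):

kubectl get pods --namespace default

kubectl get svc --namespace default

Once all pods report Running, point a browser (or the fly CLI) at the address of the web service.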
    

Recommendations

Aside from the typical recommendations for any Concourse installation (see Running a web node and Running a worker node), Pivotal recommends a few Kubernetes-specific tweaks to your Concourse deployment.

For Concourse's workers:

  • Give each Concourse worker pod an entire machine to itself
    • Each worker is responsible for its own set of Garden containers, so placing multiple Concourse workers on the same Kubernetes node would inevitably lead to too many Linux containers on one machine, hurting overall performance
    • This can be achieved in Kubernetes with a combination of affinity rules (by configuring the worker.affinity key and/or the worker.hardAntiAffinity helper key in values.yaml) and taints

For Concourse's web instances:

  • Have Concourse web pods allocated on different nodes
    • Concourse web instances at times have to stream volumes between workers, and can therefore need a lot of dedicated bandwidth depending on the workloads
    • This can be achieved with Kubernetes affinity rules, configurable through the web.affinity key in values.yaml; see the sketch below
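
As a starting point, the snippet below sketches how these recommendations might look as additions to deployment-values.yml. It assumes the worker.hardAntiAffinity and web.affinity keys mentioned above; the label selector is a placeholder, so check your chart's values.yaml and the labels on your web pods before using it:

---
worker:
  # Schedule at most one Concourse worker pod per Kubernetes node.
  hardAntiAffinity: true
web:
  # Prefer spreading web pods across nodes. The app label below is an assumed
  # placeholder; adjust it to match the labels on your web pods.
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: DEPLOYMENT-NAME-web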