Install Concourse with Helm
Concourse for VMware Tanzu has traditionally been distributed as a BOSH release that allows an operator to deploy Concourse directly from a BOSH Director to virtual machines (VMs). Concourse now also provides a Helm Chart release, which instead targets Kubernetes clusters using the templating and release management of Helm. As of v6.7.2, the chart uses Helm v3; v6.3 can still be deployed with Helm v2, but v6.7.2 and later require Helm v3.
Introduction
Helm is the package manager for Kubernetes: a tool that streamlines installing and managing Kubernetes applications. It creates Kubernetes objects that can be submitted to a Kubernetes cluster and materialized into a Concourse deployment using Kubernetes constructs (Deployments, StatefulSets, PersistentVolumeClaims, and so on).
A Helm Chart is a versioned package of pre-configured Kubernetes resources. Deploying Concourse via Helm Chart makes it simpler to deploy, scale, maintain, or upgrade your deployment in the future. This guide walks an operator through the step-by-step process of deploying with Helm.
If you have not read the Prerequisites and Background Information page, please do so before continuing. It contains important information about required tools and settings to make this process work with VMware Tanzu Kubernetes Grid Integrated Edition (TKGi).
Privileged containers
Because Concourse manages its own Linux containers, the worker processes must have superuser privileges, and your cluster must be configured to allow this.
The presence of privileged pods can be a security concern for a Kubernetes cluster. VMware recommends running a Helm-installed Concourse in its own dedicated cluster to prevent the worker pods from interfering with other Kubernetes workloads.
Managing Linux containers without superuser privileges is a subject of active discussion in the Kubernetes community. Research on this topic is scheduled on the Concourse roadmap, so it is possible this requirement may be dropped in a future release.
Cluster Creation Guide
If you have not already created your cluster, you can begin the process now so that it completes in the background while you proceed with the other steps below.
While the process of creating a cluster can vary depending on your needs, with TKGi you can get started by following these commands:
tkgi login -k -u USERNAME -p PASSWORD -a API-HOSTNAME
tkgi create-cluster CLUSTER-NAME --external-hostname CLUSTER-DOMAIN-NAME --plan PLAN-NAME
Where:
- USERNAME is your TKGi username
- PASSWORD is your TKGi password
- API-HOSTNAME is your TKGi API host name
- CLUSTER-NAME is a name you choose for your cluster
- CLUSTER-DOMAIN-NAME is a domain name you choose for your cluster.
- PLAN-NAME is one of the plans returned when you run tkgi plans
See the Creating Clusters guide for the version of TKGi that you're working with for more information.
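Cluster creation can take a while. As a quick check while you work through the rest of this guide, you can inspect the provisioning status with the TKGi CLI (the exact output fields may vary by TKGi version):
tkgi cluster CLUSTER-NAME
# Look for a "Last Action State" of "succeeded" before pointing kubectl at the cluster.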
Prerequisites for Deploying on Kubernetes with Helm
The VMware Concourse team has tested deploying with Helm using the following prerequisites:
- Kubernetes cluster (1.11+)
- VMware recommends VMware Tanzu Kubernetes Grid Integrated Edition (TKGi)
- This process has been tested with TKGi 1.7
- TKGi must have support for privileged containers activated. See enabling privileged containers on TKGi for more information
- kubectl v1.15
- For help, read the Install and Set Up kubectl guide.
- Concourse Helm Chart. Download this from VMware Tanzu Network.
- Private container registry (optional)
- VMware recommends Harbor
- Docker CLI
- Helm CLI
- Installing Helm: https://helm.sh/docs/intro/install/
- Check Kubernetes version compatibility here
Enabling privileged containers on TKGi
To allow privileged containers on TKGi, you must update the plan used by your cluster.
In Ops Manager, go to the plan configuration and select the Allow Privileged checkbox near the end of the form.
You can verify that the change took effect by inspecting the pod security policy, which should indicate that privileged mode is enabled.
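For example, assuming TKGi created a pod security policy named pks-privileged (the policy name and the columns shown are illustrative and may differ in your environment), the check could look like this:
kubectl get psp
# NAME             PRIV   CAPS   SELINUX    RUNASUSER   ...
# pks-privileged   true   *      RunAsAny   RunAsAny    ...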
Download, Tag, and Push Images to Internal Registry
Download Concourse Helm Chart and load images into Docker
- If you have not already done so, visit VMware Tanzu Network and download the Concourse Helm Chart.
- Unarchive the Helm Chart tarball to a local directory. For example, with version v6.7.3, the tarball will be called concourse-6.7.3.tgz.
mkdir concourse-helm
tar xvzf ./concourse-6.7.3.tgz -C ./concourse-helm
cd ./concourse-helm
- Load the container images into a local Docker client by running the following docker load commands one at a time:
docker load -i ./images/concourse.tar
docker load -i ./images/postgres.tar
If you are using Helm v2 (i.e. Concourse versions < v6.7.x), you'll also need to load the helm image:
docker load -i ./images/helm.tar
These images are quite large, and there will be no output until Docker is done loading.
Success
Once the loading finishes, you'll see:
Loaded image: IMAGE-NAME
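If you want to confirm the images were loaded, you can list them with the Docker CLI; the repository names below come from the tagging commands later in this guide, and the tags are placeholders:
docker images | grep -E 'concourse|postgres'
# concourse/concourse                          CONCOURSE-TAG   ...
# dev.registry.pivotal.io/concourse/postgres   POSTGRES-TAG    ...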
Tag and push the loaded images to internal registry
Registry Authentication
This step assumes that the current docker client has already authenticated against the internal registry through a regular docker login.
In addition to logging in, if you're using a registry with self-signed certificates, you should also make sure your registry has been added to the 'Insecure Registries' section of the Daemon tab in the Docker settings UI for your current workstation.
For more information about certificates and secure registry concerns, see this article: Test an insecure registry
- Begin by exporting a pair of variables to your shell to be reused throughout this process. In your terminal, run the following commands:
export INTERNAL_REGISTRY=INTERNAL-REGISTRY
export PROJECT=PROJECT-NAME
Where:
- INTERNAL-REGISTRY is the domain of your internal registry. If you are using Harbor, this must correspond to the URL (without the scheme).
- PROJECT-NAME is the name of the project in your registry. If the project does not already exist, you will need to create it.
If you are pushing to Docker Hub instead, export your Docker Hub username:
export USERNAME=DOCKERHUB-USERNAME
Where:
- DOCKERHUB-USERNAME is your username on hub.docker.io
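For illustration only, with a hypothetical Harbor registry at harbor.example.com and a project named concourse, the exports might look like this:
# Example values only -- substitute your own registry, project, or Docker Hub username.
export INTERNAL_REGISTRY=harbor.example.com
export PROJECT=concourse
# Or, if you are pushing to Docker Hub:
export USERNAME=my-dockerhub-user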
- The tarball you downloaded contains a directory called images. You need to extract the tag of each image so that you can appropriately tag the images with the internal registry and project name details from the last step. To do this, run the following commands:
export CONCOURSE_IMAGE_TAG=$(cat ./images/concourse.tar.name | cut -d ':' -f 2)
export POSTGRES_IMAGE_TAG=$(cat ./images/postgres.tar.name | cut -d ':' -f 2)
If you are using Helm v2 (i.e. Concourse versions < v6.7.x), you'll also need to extract the helm image tag:
export HELM_IMAGE_TAG=$(cat ./images/helm.tar.name | cut -d ':' -f 2)
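Before moving on, you can confirm the variables were populated; a quick sanity check might be:
echo "concourse tag: $CONCOURSE_IMAGE_TAG"
echo "postgres tag:  $POSTGRES_IMAGE_TAG"
# Each command should print a non-empty tag matching the chart version you downloaded.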
- Tag the images so their names include the internal registry address:
docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
If you are pushing to Docker Hub instead:
docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $USERNAME/postgres:$POSTGRES_IMAGE_TAG
If you are using Helm v2 (i.e. Concourse versions < v6.7.x), you'll also need to tag the helm image:
docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
Or, for Docker Hub:
docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $USERNAME/helm:$HELM_IMAGE_TAG
- Push the images to the internal registry by running the following commands in your terminal:
docker push $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
docker push $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
If you are pushing to Docker Hub instead:
docker push $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
docker push $USERNAME/postgres:$POSTGRES_IMAGE_TAG
If you are using Helm v2 (i.e. Concourse versions < v6.7.x), you'll also need to push the helm image:
docker push $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
Or, for Docker Hub:
docker push $USERNAME/helm:$HELM_IMAGE_TAG
You must have the necessary credentials (and authorization) to push to the targeted project.
Prepare the Kubernetes Cluster
- Log in to TKGi and get the cluster credentials:
tkgi login -k -u USER -p PASSWORD -a API-HOSTNAME
tkgi get-credentials CLUSTER-NAME
kubectl config use-context CLUSTER-NAME
Next, you need to create a default StorageClass if one does not already exist.
Tip
You can check if you already have a StorageClass by running kubectl get sc.
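For example, on a cluster that already has a default StorageClass, the check might return something like the following (class names and provisioners will differ in your environment):
kubectl get sc
# NAME                        PROVISIONER                    AGE
# existing-class (default)    kubernetes.io/vsphere-volume   1d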
- Create a file called storage-class.yml. For example, with vim, run:
vim storage-class.yml
- Populate the file with the following:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: concourse-storage-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: DATASTORE-FROM-VSPHERE
Where:
- DATASTORE-FROM-VSPHERE is a valid vSphere datastore
- Use the following kubectl command to create the storage class on your cluster:
kubectl create -f storage-class.yml
Success
This command should return a response like:
storageclass.storage.k8s.io/concourse-storage-class created
Configure Tiller and Pull Container Images
This section only applies to Concourse v6.3 installed using Helm v2
Skip to installing the Helm Chart for Concourse v6.7.x and Helm v3
Helm is composed of both a client-side CLI (helm) and a server-side component (tiller), and is included in the .tar file you downloaded under the ./helm directory. This section explains how to upload the container image to a private or public registry and configure Tiller appropriately.
This process differs slightly for private registries and public registries. Choose one of the following links to jump to the instructions that match your deployment and registry strategy:
- Option A: Configuration Instructions for Private Registries
- Option B: Configuration Instructions for Public Registries
After configuring Tiller for your registry, navigate to Install Tiller using Helm. The instructions are the same whether the registry is private or public.
Configure Tiller for Private Registry
- If the registry requires authentication, you must first create a service account configured with credentials to access that registry. Create a file called tiller-config.yml. For example, from your command line with vim, run the following command:
vim tiller-config.yml
- Copy and paste the following snippet into the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
imagePullSecrets:
  - name: "regcred"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
This configuration defines a service account for tiller to use that will pull images using registry credentials (to be defined below).
- Save and close tiller-config.yml.
- You must generate a secret in both the kube-system namespace (for Tiller) and the default namespace (for Concourse):
kubectl create secret docker-registry regcred \
  --docker-server=$INTERNAL_REGISTRY \
  --docker-username=$USERNAME \
  --docker-password=$password \
  --namespace=kube-system
kubectl create secret docker-registry regcred \
  --docker-server=$INTERNAL_REGISTRY \
  --docker-username=$USERNAME \
  --docker-password=$password \
  --namespace=default
If your images are on Docker Hub instead:
kubectl create secret docker-registry regcred \
  --docker-server=docker.io \
  --docker-username=$USERNAME \
  --docker-password=$password \
  --namespace=kube-system
kubectl create secret docker-registry regcred \
  --docker-server=docker.io \
  --docker-username=$USERNAME \
  --docker-password=$password \
  --namespace=default
This secret contains the credentials that can be used to fetch images from the internal registry that you pushed images to previously.
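A quick way to confirm that both secrets exist before continuing:
kubectl get secret regcred --namespace=kube-system
kubectl get secret regcred --namespace=default
# Each command should list a secret of type kubernetes.io/dockerconfigjson.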
Configure Tiller for Public Registry
Alternatively, if your registry is public and requires no authentication, perform the following steps instead:
- Create a file called tiller-config.yml. For example, from your command line with vim, run the following command:
vim tiller-config.yml
- Copy and paste the following snippet into the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
This configuration defines a service account for tiller to use to pull images without any registry credentials.
- Save and close tiller-config.yml.
Install Tiller using Helm
After creating tiller-config.yml, perform the following steps:
- Log in to TKGi and get the cluster credentials:
tkgi login -k -u USER -p PASSWORD -a API-HOSTNAME
tkgi get-credentials CLUSTER-NAME
kubectl config use-context CLUSTER-NAME
- Run kubectl to apply tiller-config.yml:
kubectl create -f tiller-config.yml
Success
When successful, you should see the following response:
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
- Install tiller using helm:
helm init \
  --stable-repo-url https://charts.helm.sh/stable \
  --tiller-image $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG \
  --service-account tiller
Where:
- INTERNAL_REGISTRY is the name of your internal registry
- PROJECT is the name of the project containing your images
If you pushed the images to Docker Hub instead:
helm init \
  --stable-repo-url https://charts.helm.sh/stable \
  --tiller-image $USERNAME/helm:$HELM_IMAGE_TAG \
  --service-account tiller
Where:
- USERNAME is your hub.docker.io username
Success
Verify that tiller is running by executing helm version. If everything is running properly, this command displays the Client and Server versions.
Installing Concourse via Helm Chart
- Create a deployment configuration file named deployment-values.yml:
vim deployment-values.yml
Insert the following snippet:
---
image: INTERNAL_REGISTRY/PROJECT/concourse
imageTag: CONCOURSE_IMAGE_TAG
imagePullSecrets: ["regcred"] # Remove if registry is public
postgresql:
  image:
    registry: INTERNAL_REGISTRY
    repository: PROJECT/postgres
    tag: POSTGRES_IMAGE_TAG
    pullSecrets: ["regcred"] # Remove if registry is public
Where:
- INTERNAL_REGISTRY/PROJECT is your registry address and project.
- CONCOURSE_IMAGE_TAG is the output of cat ./images/concourse.tar.name | cut -d ':' -f 2
- POSTGRES_IMAGE_TAG is the output of cat ./images/postgres.tar.name | cut -d ':' -f 2
If you pushed your images to Docker Hub instead, use this snippet:
---
image: USERNAME/concourse
imageTag: CONCOURSE_IMAGE_TAG
imagePullSecrets: ["regcred"] # Remove if registry is public
postgresql:
  image:
    registry: docker.io
    repository: USERNAME/postgres
    tag: POSTGRES_IMAGE_TAG
    pullSecrets: ["regcred"] # Remove if registry is public
Where:
- USERNAME is your Docker Hub username.
- CONCOURSE_IMAGE_TAG is the output of cat ./images/concourse.tar.name | cut -d ':' -f 2
- POSTGRES_IMAGE_TAG is the output of cat ./images/postgres.tar.name | cut -d ':' -f 2
- Save and close deployment-values.yml.
- Deploy with helm. For Helm v3 (Concourse v6.7.x and later):
helm install \
  DEPLOYMENT-NAME \
  --create-namespace \
  --values ./deployment-values.yml \
  ./charts
For Helm v2 (Concourse v6.3):
helm install \
  --name DEPLOYMENT-NAME \
  --values ./deployment-values.yml \
  ./charts
Where:
- DEPLOYMENT-NAME is a name of your choosing for your Concourse deployment
Successful Deployment
If the helm install command is successful, you will see the following response, followed by more information about your cluster:
NAME: DEPLOYMENT-NAME
LAST DEPLOYED: DEPLOYMENT-DATE
NAMESPACE: default
STATUS: DEPLOYED
...
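To watch the deployment come up, you can list the pods in the release namespace; the pod names below are illustrative and assume a release named DEPLOYMENT-NAME:
kubectl get pods --namespace=default
# NAME                                  READY   STATUS    RESTARTS   AGE
# DEPLOYMENT-NAME-web-xxxxxxxxx-xxxxx   1/1     Running   0          2m
# DEPLOYMENT-NAME-worker-0              1/1     Running   0          2m
# DEPLOYMENT-NAME-postgresql-0          1/1     Running   0          2m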
Recommendations
Aside from the typical recommendations for any Concourse installation (see Running a web node and Running a worker node), and given the peculiarities of Kubernetes, VMware recommends a few tweaks to Concourse deployments on Kubernetes.
For Concourse's workers:
- Give each Concourse worker pod an entire machine for itself
- As each worker is responsible for its own set of Garden containers, running multiple Concourse workers on the same Kubernetes node would inevitably lead to too many Linux containers on the same machine, impacting overall performance
- This can be achieved with Kubernetes through a combination of affinity rules (by configuring the worker.affinity key and/or the worker.hardAntiAffinity helper key in the values.yaml file) and taints, as sketched below
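As a sketch, the relevant part of your deployment-values.yml could look like the following. The worker.hardAntiAffinity key is the helper mentioned above; the tolerations block assumes the chart also passes worker.tolerations through to the pods and that you have applied a matching taint (the key, value, and effect here are examples) to the nodes you want to dedicate to workers.
worker:
  # Schedule at most one Concourse worker pod per Kubernetes node.
  hardAntiAffinity: true
  # Tolerate an example taint used to reserve nodes for Concourse workers.
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "concourse-worker"
      effect: "NoSchedule"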
For Concourse's web instances:
- Have Concourse web pods allocated on different nodes
- Concourse web instances at times have to stream volumes between workers, and therefore may need a lot of dedicated bandwidth, depending on the workloads
- This can be achieved with Kubernetes affinity rules, configurable through the web.affinity key in values.yaml (see the sketch below)
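For example, a minimal sketch of spreading web pods across nodes with a pod anti-affinity rule via the web.affinity key might look like the following; the label selector is an assumption, so match it to the labels the chart actually applies to your web pods.
web:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                # Assumed label; check the labels on your deployed web pods.
                app: DEPLOYMENT-NAME-web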