Installing Tanzu Build Service

This topic describes how to install and configure Tanzu Build Service.

Overview

Build Service can be installed on any Kubernetes cluster (v1.14 or later), including TKG, GKE, AKS, and PKS clusters. The installation instructions are divided between installing on PKS (which uses OIDC for authentication) and installing on other hosted Kubernetes clusters.

Prerequisites

Before you install Build Service, you must:

  • Ensure your Kubernetes cluster is configured with PersistentVolumes, and configure the cache size per image to 2 GB. Build Service uses PersistentVolumeClaims to cache build artifacts, which reduces the time of subsequent builds. For more information, see Persistent Volumes in the Kubernetes documentation. For a quick way to check that your cluster can provision PersistentVolumes, see the example after this list.

  • Download the Duffle executable for your operating system from the Tanzu Build Service page on Tanzu Network.

  • Download the Build Service Bundle from the Tanzu Build Service page on Tanzu Network.

  • Download the Build Service Dependencies from the Tanzu Build Service Dependencies page on Tanzu Network.
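
As a quick check for the first prerequisite, you can confirm that the cluster can dynamically provision PersistentVolumes by listing its storage classes. The class name below is only illustrative; your cluster's default class may differ:

kubectl get storageclass
kubectl describe storageclass standard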

Installing on TKG/GKE/AKS

Create a Kubernetes cluster on which to install Build Service, and target the cluster as follows:

kubectl config use-context <CLUSTER-NAME>
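
For example, if your cluster is named build-cluster (a placeholder name), target it and confirm connectivity:

kubectl config use-context build-cluster
kubectl cluster-info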

Create a Credentials File

Create a credentials file that provides duffle with the locations of your local kubeconfig and the registry CA certificate during the Build Service installation.

To create a credentials file:

  1. Navigate to the /tmp folder and create a file named credentials.yml.

  2. Add the properties shown in the example below to the credentials.yml file:

    name: build-service-credentials
    credentials:
     - name: kube_config
       source:
         path: "PATH-TO-KUBECONFIG"
       destination:
         path: "/root/.kube/config"
     - name: ca_cert
       source:
         path: "PATH-TO-CA"
       destination:
         path: "/cnab/app/cert/ca.crt"
    

    Where:

    • PATH-TO-KUBECONFIG is the path to the kubeconfig configuration file on your local machine. This file is required to enable Build Service to interact with the target cluster.
    • PATH-TO-CA is the path to the registry CA certificate. This certificate is required to enable Build Service to interact with internally deployed registries, and is the same CA certificate that was used when deploying the registry.

      Note: All local paths in the credentials file must be absolute.
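
    For example, a completed credentials.yml might look like the following. The source paths are hypothetical; replace them with the absolute paths on your machine:

    name: build-service-credentials
    credentials:
     - name: kube_config
       source:
         path: "/home/alana/.kube/config"
       destination:
         path: "/root/.kube/config"
     - name: ca_cert
       source:
         path: "/home/alana/certs/registry-ca.crt"
       destination:
         path: "/cnab/app/cert/ca.crt"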

Relocate Images to a Registry

This procedure relocates images from the Build Service bundle that you downloaded from Tanzu Network to an internal image registry.

To move the images from the Build Service bundle to an internal image registry:

  1. Log in to the image registry where you want to store the images by running:

    docker login IMAGE-REGISTRY
    

    Where IMAGE-REGISTRY is the name of the image registry where you want to store the images.

  2. Push the images to the image registry by running:

    duffle relocate -f /tmp/build-service-${version}.tgz -m /tmp/relocated.json -p IMAGE-REGISTRY
    

    Note: When relocating to a Harbor or GCR registry, append the destination folder for the images to IMAGE-REGISTRY. For example: IMAGE-REGISTRY/build-service-imgs.

    For example:

    • Docker Hub: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p my-dockerhub-repo
    • GCR: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p gcr.io/my-project/build-service
    • Artifactory: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p artifactory.com/my-project/build-service
    • Harbor: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p harbor.io/my-project/build-service
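
    If your registry authenticates with a JSON key file (for example, GCR), the docker login step uses _json_key as the username. The key file path and project below are placeholders:

    cat gcr-service-account.json | docker login -u _json_key --password-stdin gcr.io
    duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p gcr.io/my-project/build-service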

Run Duffle Install

Use Duffle to install Build Service and define the required Build Service parameters by running:

duffle install BUILD-SERVICE-INSTALLATION-NAME -c /tmp/credentials.yml  \
    --set kubernetes_env=CLUSTER-NAME \
    --set docker_registry=DOCKER-REGISTRY \
    --set docker_repository=DOCKER-REPOSITORY \
    --set registry_username="REGISTRY-USERNAME" \
    --set registry_password="REGISTRY-PASSWORD" \
    --set custom_builder_image="BUILDER-IMAGE-TAG" \
    -f /tmp/build-service-${version}.tgz \
    -m /tmp/relocated.json

Where:

  • BUILD-SERVICE-INSTALLATION-NAME is the name you give your Build Service installation.
  • CLUSTER-NAME is the name of the Kubernetes cluster where Build Service is installed.
  • DOCKER-REGISTRY is the domain of the image registry that you configured.

    Note: For Docker Hub, the domain must be index.docker.io. Do not include subpaths in the registry. gcr.io and acr.io are examples of valid values for this field.

  • DOCKER-REPOSITORY is the image repository to which the stack images and store buildpackages are relocated.

    Note: This is identical to the IMAGE-REGISTRY argument provided during duffle relocation.

  • REGISTRY-USERNAME is the username you use to access the registry. gcr.io expects _json_key as the username when using JSON key file authentication.
  • REGISTRY-PASSWORD is the password you use to access the registry.

    Note: See Secrets in Tanzu Build Service for more information about how the registry username and password are used in Tanzu Build Service.

  • BUILDER-IMAGE-TAG is the fully-qualified tag where Build Service will create a default cluster-wide builder.

    Note: Provide the fully-qualified image tag. For example, if installing using Docker Hub, provide my-dockerhub-org/default-builder. If installing using GCR, provide gcr.io/my-project/build-service/default-builder.

Other optional parameters you can add using the --set flag:

  • admin_users: A comma-separated list of users who are granted admin privileges on Build Service.
  • admin_groups: A comma-separated list of groups that are granted admin privileges on Build Service.
  • http_proxy: The HTTP proxy to use for network traffic.
  • https_proxy: The HTTPS proxy to use for network traffic.
  • no_proxy: A comma-separated list of hostnames that should not use a proxy.
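
Putting it together, a hypothetical installation that relocated images to GCR might look like the following. The installation name, cluster name, project, and key file are placeholders:

duffle install build-service -c /tmp/credentials.yml \
    --set kubernetes_env=build-cluster \
    --set docker_registry=gcr.io \
    --set docker_repository=gcr.io/my-project/build-service \
    --set registry_username="_json_key" \
    --set registry_password="$(cat gcr-service-account.json)" \
    --set custom_builder_image="gcr.io/my-project/build-service/default-builder" \
    -f /tmp/build-service-0.1.0.tgz \
    -m /tmp/relocated.json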

Installing on PKS

  • Install PKS v1.6 or later. For more information, see Installing Enterprise PKS in the PKS documentation.

  • To install Build Service, you must configure User Account and Authentication (UAA) client credentials.

  • Optional: Configure your PKS tile with OIDC as described in the PKS documentation.

Retrieve PKS Cluster Credentials

This procedure retrieves the credentials that authenticate communication between kubectl and the PKS cluster where Build Service runs.

To retrieve the PKS cluster credentials:

  1. Log in to PKS and get the latest kubeconfig by running:

    pks get-kubeconfig CLUSTER-NAME -a API-URI -u USERNAME -p PASSWORD --ca-cert PATH-TO-CERTIFICATE
    

    Where:

    • API-URI is the PKS API server URI.
    • USERNAME is the name of your cluster admin.
    • PASSWORD is the cluster admin password.
    • PATH-TO-CERTIFICATE is the path to your root CA certificate.
    • CLUSTER-NAME is the name of the PKS cluster where Build Service runs.

    This command sets the context to the CLUSTER-NAME provided.

    For example:

    $ pks get-kubeconfig build-cluster -a api.pks.example.com -u alana -p P4ssW0rd --ca-cert /var/tempest/workspaces/default/root_ca_certificate

Create a Credentials File

Create a credentials file that provides duffle with the locations of your local kubeconfig and the registry CA certificate during the Build Service installation.

To create a credentials file:

  1. Navigate to the /tmp folder and create a file named credentials.yml.

  2. Add the properties shown in the example below to the credentials.yml file:

    name: build-service-credentials
    credentials:
     - name: kube_config
       source:
         path: "PATH-TO-KUBECONFIG"
       destination:
         path: "/root/.kube/config"
     - name: ca_cert
       source:
         path: "PATH-TO-CA"
       destination:
         path: "/cnab/app/cert/ca.crt"
    

    Where:

    • PATH-TO-KUBECONFIG is the path to the kubeconfig configuration file on your local machine. This file is required to enable Build Service to interact with the target cluster.
    • PATH-TO-CA is the path to the registry CA certificate. This certificate is required to enable Build Service to interact with internally deployed registries, and is the same CA certificate that was used when deploying the registry.

      Note: All local paths in the credentials file must be absolute.

Relocate Images to a Registry

This procedure relocates images from the Build Service bundle that you downloaded from Tanzu Network to an internal image registry.

To move the images from the Build Service bundle to an internal image registry:

  1. Log in to the image registry where you want to store the images by running:

    docker login IMAGE-REGISTRY
    

    Where IMAGE-REGISTRY is the name of the image registry where you want to store the images.

  2. Push the images to the image registry by running:

    duffle relocate -f /tmp/build-service-${version}.tgz -m /tmp/relocated.json -p IMAGE-REGISTRY
    

    Note: When relocating to a Harbor or GCR registry, append the destination folder for the images to IMAGE-REGISTRY. For example: IMAGE-REGISTRY/build-service-imgs.

    For example:

    • Docker Hub: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p my-dockerhub-repo
    • GCR: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p gcr.io/my-project/build-service
    • Artifactory: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p artifactory.com/my-project/build-service
    • Harbor: duffle relocate -f /tmp/build-service-0.1.0.tgz -m /tmp/relocated.json -p harbor.io/my-project/build-service

Run Duffle Install

Use Duffle to install Build Service and define the required Build Service parameters by running:

duffle install BUILD-SERVICE-INSTALLATION-NAME -c /tmp/credentials.yml  \
    --set kubernetes_env=CLUSTER-NAME \
    --set docker_registry=DOCKER-REGISTRY \
    --set docker_repository=DOCKER-REPOSITORY \
    --set registry_username="REGISTRY-USERNAME" \
    --set registry_password="REGISTRY-PASSWORD" \
    --set custom_builder_image="BUILDER-IMAGE-TAG" \
    -f /tmp/build-service-${version}.tgz \
    -m /tmp/relocated.json

Where:

  • BUILD-SERVICE-INSTALLATION-NAME is the name you give your Build Service installation.
  • CLUSTER-NAME is the name of the Kubernetes cluster where Build Service is installed.
  • DOCKER-REGISTRY is the domain of the image registry that you configured.

    Note: For Docker Hub, the domain must be index.docker.io. Do not include subpaths in the registry. gcr.io and acr.io are examples of valid values for this field.

  • DOCKER-REPOSITORY is the image repository to which the stack images and store buildpackages are relocated.

    Note: This is identical to the IMAGE-REGISTRY argument provided during duffle relocation.

  • REGISTRY-USERNAME is the username you use to access the registry. gcr.io expects _json_key as the username when using JSON key file authentication.
  • REGISTRY-PASSWORD is the password you use to access the registry.

    Note: See Secrets in Tanzu Build Service for more information about how the registry username and password are used in Tanzu Build Service.

  • BUILDER-IMAGE-TAG is the fully-qualified tag where Build Service will create a default cluster-wide builder.

    Note: Provide the fully-qualified image tag. For example, if installing using Docker Hub, provide my-dockerhub-org/default-builder. If installing using GCR, provide gcr.io/my-project/build-service/default-builder.

Other optional parameters you can add using the --set flag:

  • oidc_username_prefix: The UAA OIDC username prefix (required if OIDC was configured for PKS; see the example after this list).
  • oidc_group_prefix: The UAA OIDC groups prefix (required if OIDC was configured for PKS).
  • admin_users: A comma-separated list of users who are granted admin privileges on Build Service.
  • admin_groups: A comma-separated list of groups that are granted admin privileges on Build Service.
  • http_proxy: The HTTP proxy to use for network traffic.
  • https_proxy: The HTTPS proxy to use for network traffic.
  • no_proxy: A comma-separated list of hostnames that should not use a proxy.
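
For example, if your PKS tile was configured with OIDC, you might append flags such as the following to the duffle install command above. The prefixes and names are placeholders; use the values configured in your PKS tile:

    --set oidc_username_prefix="oidc:" \
    --set oidc_group_prefix="oidc:" \
    --set admin_users="alana,kevin" \
    --set admin_groups="build-service-admins"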

Verify Installation

Before verifying your Build Service installation, target the cluster on which Build Service is installed.

To verify your Build Service installation:

  1. Download the pb binary from the Tanzu Build Service page on Tanzu Network.

  2. List the builders available in your installation by running:

    pb builder list

You should see output similar to the following:

Cluster Builders
----------------
default

Updating Build Service Dependencies

Go to the Build Service Dependencies page on Tanzu Network. You can update Build Service with these artifacts either directly from the Tanzu Network registry or by using downloaded versions of the images.

Online Update of Dependencies

If the pb CLI can pull images from the Tanzu Network registry, you can update the stack images and buildpacks used by Build Service with the following commands.

Stack Update

Update the stack:

pb stack update --build-image registry.pivotal.io/tbs-dependencies/build@sha256:<image-sha> --run-image registry.pivotal.io/tbs-dependencies/run@sha256:<image-sha>

Note: Both the build and run images must be provided to update the stack.

The updated stack can be viewed with the following command:

pb stack status

Store Update

Update the store:

pb store add registry.pivotal.io/tbs-dependencies/<buildpack-name>:<buildpack-tag>

Additionally, multiple buildpacks can be added to Build Service by passing multiple image references:

pb store add registry.pivotal.io/tbs-dependencies/<buildpack1>:<buildpack1-tag> registry.pivotal.io/tbs-dependencies/<buildpack2>:<buildpack2-tag> registry.pivotal.io/tbs-dependencies/<buildpack3>:<buildpack3-tag>

To list the buildpacks now available to Build Service:

pb store list

Offline Update of Dependencies

If the pb CLI cannot access the images in the Tanzu Network registry, you can update the stack images and buildpacks used by Build Service by first downloading those images and saving them as .tar files. These files can then be provided to the pb CLI to upload to Build Service.

Stack Update

Fetch the stack images into the docker daemon:

docker pull registry.pivotal.io/tbs-dependencies/build@sha256:<image-sha>
docker pull registry.pivotal.io/tbs-dependencies/run@sha256:<image-sha>

Save those images to disk:

docker save registry.pivotal.io/tbs-dependencies/build@sha256:<image-sha> > build.tar
docker save registry.pivotal.io/tbs-dependencies/run@sha256:<image-sha> > run.tar

Update the stack with the saved image:

pb stack update --build-image build.tar --run-image run.tar --local

Note: Both the build and run images must be provided to update the stack.

The updated stack can be viewed with the following command:

pb stack status

Store Update

Fetch the buildpack images into the docker daemon:

docker pull registry.pivotal.io/tbs-dependencies/<buildpack1>:<buildpack1-tag>
docker pull registry.pivotal.io/tbs-dependencies/<buildpack2>:<buildpack2-tag>

Save those images to disk:

docker save registry.pivotal.io/tbs-dependencies/<buildpack1>:<buildpack1-tag> > buildpack1.tar
docker save registry.pivotal.io/tbs-dependencies/<buildpack2>:<buildpack2-tag> > buildpack2.tar

Update the store with the previously saved images:

pb store add buildpack1.tar --local

Additionally, multiple buildpacks can be added to Build Service by passing multiple saved images:

pb store add buildpack1.tar buildpack2.tar --local

To list the buildpacks now available to Build Service:

pb store list
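
The offline pull, save, and upload steps can also be scripted. The sketch below assumes hypothetical buildpackage names and tags; replace them with the images listed on the Tanzu Build Service Dependencies page:

#!/bin/bash
set -e

# Placeholder buildpackage references; replace with the buildpackages you need.
BUILDPACKS="java:1.0.0 nodejs:1.0.0"

TARS=""
for bp in ${BUILDPACKS}; do
  image="registry.pivotal.io/tbs-dependencies/${bp}"
  tar_file="$(echo "${bp}" | tr ':' '-').tar"
  docker pull "${image}"                  # fetch the buildpackage into the local Docker daemon
  docker save "${image}" > "${tar_file}"  # save it to disk for offline transfer
  TARS="${TARS} ${tar_file}"
done

# Upload all saved buildpackages to Build Service in a single command.
pb store add ${TARS} --local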

Ensuring the Run Image is Readable

Build Service relies on the run image being publicly readable, or readable with the registry credentials configured in a project/namespace, for builds to execute successfully.

The location of the run image can be identified by running the following command:

pb stack status

To update the location of the run image:

  1. Using kubectl, update the buildservice.pivotal.io/defaultRepository annotation on the build-service-stack Stack resource to a location that can be accessed publicly (see the example after this list).
  2. Re-run pb stack update with the most recent build image and run image from the Build Service Dependencies page.
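
For example, assuming the Stack custom resource is exposed to kubectl under the resource type stack (an assumption; the repository value below is also a placeholder), the annotation could be updated as follows:

kubectl annotate stack build-service-stack \
    buildservice.pivotal.io/defaultRepository=gcr.io/my-project/build-service/run \
    --overwrite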