Installing CloudBees Core for VMware Tanzu (PKS)

This topic describes how to install CloudBees Core for VMware Tanzu (PKS).

Ensure that you have read the Requirements before proceeding any further on this page.

Obtain Cluster Access

A Pivotal Container Service (PKS) cluster is required to install CloudBees Core for VMware Tanzu (PKS). To obtain access to your cluster, do the following:

  1. Log in to PKS: pks login -a <pks-url> -u <username> -p <password> -k

  2. Update your ~/.kube/config: pks get-credentials <cluster-name>

  3. Configure your Kubernetes client: kubectl config use-context <cluster-name>

Install Nginx Ingress Controller

CloudBees Core depends on the Nginx Ingress Controller to route HTTP (user) and JNLP (agent) traffic within the cluster. While not strictly required, an external load balancer is recommended to distribute traffic among the available Kubernetes nodes. Your IaaS might provision a load balancer automatically while you follow the steps below, but in some cases you will need to configure one manually.

You can follow the Google Tutorial, the official documentation, or try the following approach that uses Helm with RBAC:

  1. Prepare a service account for Tiller, the server-side component of Helm: kubectl create serviceaccount --namespace kube-system tiller

  2. Make Tiller an admin: kubectl create clusterrolebinding tiller-cluster-role --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

  3. Install Helm (e.g. via Homebrew), then run helm init. It will use the current Kubernetes context you set earlier in this guide.

  4. Get Tiller ready for the next step: kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

  5. Re-initialize Helm with the Tiller service account and upgrade Tiller: helm init --service-account tiller --upgrade

  6. Wait for the Tiller pod to be ready: kubectl -n kube-system get pods

  7. Create a namespace for the Nginx Ingress Controller: kubectl create namespace ingress-nginx

  8. Install the Nginx Ingress Controller: helm install --namespace ingress-nginx --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local

  9. Wait for the Ingress Controller to be ready and get the External IP: kubectl get service nginx-ingress-controller -n ingress-nginx

The External IP belongs to the external load balancer. Use it to access the cluster and also to configure DNS, if required.

Install CloudBees Core

CloudBees Core is installed by applying standard Kubernetes manifests (.yaml files) to the cluster after making a few small edits as needed. During installation, Kubernetes downloads Docker images from either Docker Hub or a private registry, depending on the image: sections of the .yaml files when they are applied to your cluster. The default is Docker Hub, but the CloudBees Core Docker images are included in this distribution in case you would like to push them to a private registry. To install CloudBees Core, do the following:

  1. Set an environment variable to store your domain name. Use the External IP from earlier if you don’t have a domain name readily available: DOMAIN_NAME=<DNS name or External IP>

  2. Create a storage class for CloudBees Core if you don’t have one already. Note that the .yaml below is specific to your IaaS (this example targets GCE persistent disks; adjust the provisioner and parameters for your environment):

    Create a local file named ssd-storage.yaml like the following:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ssd
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
    reclaimPolicy: Delete
    volumeBindingMode: Immediate

    Then apply it to your cluster: kubectl create -f ssd-storage.yaml
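Since PKS clusters commonly run on vSphere, the storage class for that IaaS would use the vSphere Cloud Provider instead. A hedged sketch; the class name, thin diskformat, and reclaim policy here are illustrative assumptions, not values shipped with this distribution:

```yaml
# Illustrative StorageClass for the in-tree vSphere Cloud Provider
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd                               # name referenced by storageClassName later
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                        # thin-provisioned VMDKs (assumption)
reclaimPolicy: Delete
```

Apply it the same way: kubectl create -f ssd-storage.yaml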

  3. Download the CloudBees Core Kubernetes manifests from PivNet and unzip them: tar -xvzf cloudbees-core_<version>_kubernetes.tgz

  4. Update cloudbees-core.yml to include information about your storage class:

    In the Operations Center volumeClaimTemplates: section, change the storageClassName: to the name: of the storage class you created earlier:

    - metadata:
        name: jenkins-home
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
        storageClassName: ssd

    To configure Managed Masters to use this storage class as the default, search for the commented out section:

    # To allocate masters using a non-default storage class, add the following
    # -Dcom.cloudbees.masterprovisioning.kubernetes.KubernetesMasterProvisioning.storageClassName=some-storage-class

    Then follow the instructions in the comment, adding the -Dcom.cloudbees.masterprovisioning.kubernetes.KubernetesMasterProvisioning.storageClassName=ssd flag (with the name of your storage class) to the value: section.
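If you prefer to script the storage-class edit rather than hand-editing, a minimal sketch follows. It assumes, purely for illustration, a manifest whose claim currently names a "standard" class; the sample file name is hypothetical and in practice you would edit cloudbees-core.yml itself:

```shell
# Build a tiny stand-in for the volumeClaimTemplates section (illustrative only).
cat > sample.yml <<'EOF'
- metadata:
    name: jenkins-home
  spec:
    storageClassName: standard
EOF

# Point the claim at the "ssd" storage class created earlier.
sed -i.bak 's/storageClassName: standard/storageClassName: ssd/' sample.yml

# Confirm the edit took effect.
grep 'storageClassName' sample.yml
```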

  5. Execute this command to update cloudbees-core.yml with the correct DNS name (replacing the cje.example.com placeholder):
    sed -e s,cje.example.com,$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
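To sanity-check the substitution before touching the real manifest, you can run the same sed expression against a throwaway file. The domain, placeholder, and file names below are illustrative:

```shell
DOMAIN_NAME=cloudbees.example.org            # stand-in for your real DNS name
printf 'host: cje.example.com\n' > sample.yml

# Same substitution pattern the installation step applies to cloudbees-core.yml.
sed -e s,cje.example.com,$DOMAIN_NAME,g < sample.yml > tmp && mv tmp sample.yml

cat sample.yml    # → host: cloudbees.example.org
```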

  6. Create the namespace for CloudBees Core, label it, and use it from now on (3 commands):
    kubectl create namespace cloudbees-core-pks
    kubectl label namespace cloudbees-core-pks name=cloudbees-core-pks
    kubectl config set-context $(kubectl config current-context) --namespace=cloudbees-core-pks

  7. (Optional) Create a self-signed certificate to enable SSL termination at the Ingress level:

    Create a server.config file with the DOMAIN NAME from earlier (the distinguished-name values below are examples; CN must be your domain):

    [ req ]
    prompt = no
    distinguished_name = req_distinguished_name

    [ req_distinguished_name ]
    C = US
    ST = California
    L = "San Jose"
    CN = <DOMAIN_NAME>

    Create the certificate request: openssl req -config server.config -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
    Create the certificate: openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
    Add the certificate and key as a Kubernetes secret:
    kubectl create secret tls cloudbees-core-pks-tls --cert=server.crt --key=server.key
    Update the following section of cloudbees-core.yml according to the instructions:

    # To enable SSL offloading at ingress level, uncomment the following 5 lines
    #- hosts:
    #  -
    #  # Name of the secret containing the certificate to be used
    #  secretName: cje-example-com-tls
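Taken together, the optional TLS steps can be run as one script. A sketch assuming DOMAIN_NAME is already set (the fallback domain and organization fields below are placeholders):

```shell
DOMAIN_NAME=${DOMAIN_NAME:-cje.example.com}   # illustrative fallback

# Minimal OpenSSL request config; only CN needs to match your domain.
cat > server.config <<EOF
[ req ]
prompt = no
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
C = US
ST = California
L = San Jose
CN = $DOMAIN_NAME
EOF

# Generate a key and CSR, then a one-year self-signed certificate.
openssl req -config server.config -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

# Inspect the subject to confirm the CN before creating the secret.
openssl x509 -in server.crt -noout -subject
```

From there, kubectl create secret tls and the cloudbees-core.yml edit proceed as described above.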

  8. Apply cloudbees-core.yml to your cluster:
    kubectl apply -f cloudbees-core.yml

  9. Monitor the status of Operations Center: kubectl rollout status sts cjoc

  10. From a web browser, navigate to https://<DOMAIN_NAME>/cjoc to launch the setup wizard.

  11. Get the initial admin password for the wizard:
    kubectl exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword

  12. Follow the steps in the wizard. To get a license, see License.

Congratulations! You’ve successfully installed CloudBees Core. See Using CloudBees Core for VMware Tanzu (PKS) for more information on what to do next.

(Optional) Install Images into Private Registry

  1. Download artifacts from PivNet:

    • cloudbees-cloud-core-oc.<version>.tgz
    • cloudbees-core-mm.<version>.tgz

  2. Load the image into Docker, tag it, then push to your registry (3 commands):
    docker load -i cloudbees-cloud-core-oc.<version>.tgz
    docker tag cloudbees/cloudbees-cloud-core-oc:<version> <your-registry>/cloudbees-cloud-core-oc:<version>
    docker push <your-registry>/cloudbees-cloud-core-oc:<version>
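Note that docker tag operates on the loaded image reference, not on the .tgz file name. A sketch of the naming convention, with a hypothetical registry host and version, and the docker commands shown as comments since they require a live daemon:

```shell
REGISTRY=registry.example.com                        # illustrative private registry
IMAGE=cloudbees/cloudbees-cloud-core-oc:2.176.2.3    # illustrative loaded image

# Target reference keeps the image name and tag, swapping in the registry host.
TARGET="$REGISTRY/${IMAGE#cloudbees/}"
echo "$TARGET"    # → registry.example.com/cloudbees-cloud-core-oc:2.176.2.3

# docker tag "$IMAGE" "$TARGET"
# docker push "$TARGET"
```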

  3. Do the following before you reach step 8 in Install CloudBees Core:

    Change the Operations Center image: contents in cloudbees-core.yml from:

    image: cloudbees/cloudbees-cloud-core-oc:<version>

    to the corresponding reference in your private registry:

    image: <your-registry>/cloudbees-cloud-core-oc:<version>

The Managed Master image does not appear in cloudbees-core.yml because Managed Masters are provisioned on demand by Operations Center. To use the Managed Master image from your private registry for master provisioning, navigate to the http://<cloudbees-url>/cjoc/config screen and update the Container Master Provisioning section accordingly. The first item in the list is the default Managed Master image.