Installing PFS on PKS on GCP

This topic describes how to install Pivotal Function Service (PFS) on Pivotal Container Service (PKS) deployed to GCP.

PKS can be deployed on other IaaS platforms including Azure and AWS. These platforms will also be supported in future releases of PFS.

Requirements

  • The pks CLI has been installed.
  • The kubectl CLI has been installed at version 1.10 or later.
  • The Google Cloud SDK which provides the gcloud CLI has been installed.
  • The duffle CNAB runtime CLI has been downloaded and installed.
  • The PFS thick bundle has been downloaded.
  • The PFS thick bundle has been relocated to a GCR repo in the current GCP project and the relocation mapping file saved.
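
A quick way to confirm that the required CLIs are installed and on your PATH is to print their versions (a sketch; the exact version subcommands and their output vary by CLI release):

pks --version
kubectl version --client
gcloud version
duffle version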

Validate Google Cloud APIs

In order to perform the install, the GCP user account needs to have the “Owner” role for the GCP project.
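
If you are unsure whether your account has this role, one way to list the roles granted to the active gcloud account on the current project (the output should include roles/owner) is:

gcloud projects get-iam-policy $(gcloud config get-value core/project) \
    --flatten="bindings[].members" \
    --format="table(bindings.role)" \
    --filter="bindings.members:$(gcloud config get-value core/account)"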

Verify that the Google Cloud APIs and Container Registry API are enabled in the current GCP project.

gcloud services list
NAME                              TITLE
cloudapis.googleapis.com          Google Cloud APIs
containerregistry.googleapis.com  Container Registry API
. . .

If necessary, enable these services using the gcloud services enable command.

gcloud services enable \
    cloudapis.googleapis.com \
    containerregistry.googleapis.com

PKS Login

Log into the PKS environment using your usual credentials. For example, to log in to the PKS API server pks-api.example.com as user admin with password adminpassword, run:

pks login -a pks-api.example.com -u admin -p adminpassword

Create PKS cluster

Create a new PKS cluster with a plan intended for large workloads, for example a master node VM with 2 CPUs and 8GB of memory and four worker nodes, each with 2 CPUs and 8GB of memory. To create a new cluster named mycluster using the large plan with an external hostname of myhostname.example.com, run:

pks create-cluster mycluster --external-hostname myhostname.example.com --plan large

Track the progress of the cluster creation using the pks cluster command. For example, to check on the status of a cluster named mycluster, run pks cluster mycluster.

It can take up to 30 minutes for cluster creation to complete.
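
If you prefer to poll from the command line, something like the following re-runs the status check every 60 seconds until Last Action State shows succeeded (assuming a Unix-like shell with the watch utility installed):

watch -n 60 pks cluster mycluster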

Configure GCP Load Balancer

Configure a GCP load balancer for the created cluster. This step and the following step are required to make the new PKS on GCP cluster available on the network. Consult your PKS environment administrator if you need assistance.

  • Run the pks cluster command with your cluster name and note the value of Kubernetes Master IP(s). In the example below, the master node IP address is 10.0.11.10.

    pks cluster mycluster
    
    Name:                     mycluster
    Plan Name:                large
    UUID:                     65bc21d8-819f-483a-a08b-7c55b500b0a2
    Last Action:              CREATE
    Last Action State:        succeeded
    Last Action Description:  Instance provisioning completed
    Kubernetes Master Host:   myhostname.example.com
    Kubernetes Master Port:   8443
    Worker Nodes:             4
    Kubernetes Master IP(s):  10.0.11.10
    Network Profile Name:
    
  • In your Google Cloud Platform Console, navigate to Compute Engine > VM instances and locate the master VM by filtering on Internal IP with the value of the master node IP address. Note the name of this master node VM, which will be of the form vm-<guid>, and also its zone.

  • Navigate to Network services > Load balancing and click Create Load Balancer.

  • In the TCP Load Balancing pane, click Start configuration.

  • Accept all of the defaults on the next page and click Continue.

  • In the resulting page, give the load balancer a name.

  • Click the Backend configuration section and set the region value to be consistent with the zone of the cluster master VM (e.g. if the master VM is in zone europe-west1-c then set the region to europe-west1).

  • Click the Select existing instances tab and then select your cluster’s master node VM in the dropdown.

  • Click Frontend configuration and give it the same name as the load balancer.

  • In the IP dropdown select the option to Create IP address.

  • In the resulting Reserve a new static IP address box, give it the same name as the frontend and then click Reserve.

  • Enter the value 8443 in the Port field.

  • Click Done.

  • Review the new load balancer settings and then click Create.

It will take several seconds before the new load balancer becomes available.
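
As an alternative to checking in the console, you can list the project’s forwarding rules from the command line; the new load balancer’s frontend should appear with its reserved IP address and port 8443:

gcloud compute forwarding-rules list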

Configure DNS for the cluster

  • In the Google Cloud Platform Console page for the cluster load balancer (Network services > Load balancing) note the IP address of the Frontend of the load balancer.
  • Navigate to Network services > Cloud DNS and click the appropriate DNS zone for your PKS cluster (its DNS name will be the domain of the PKS environment Ops Manager).
  • Click Add record set.
  • In the next page add a prefix to the DNS name so that the complete name matches the --external-hostname value used when the cluster was created.
  • In the IPv4 Address field enter the IP address (only the IP address, do not include the port number) of the Frontend of the cluster’s load balancer.
  • Click Create.

    It can take several minutes for the DNS record information to propagate around the network. During this time “unable to connect” or “no such host” errors may occur when attempting to use kubectl with the cluster.
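
One way to confirm that the new record has propagated before retrying kubectl is a simple lookup of the external hostname, which should resolve to the load balancer’s frontend IP:

nslookup myhostname.example.com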

Retrieve cluster credentials

Use the pks CLI to retrieve credentials and change your kubectl context to your PKS cluster. To change context to a PKS cluster named mycluster run the following:

pks get-credentials mycluster

Verify that the current context is as expected using kubectl:

kubectl config current-context
mycluster
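
As a further sanity check that kubectl can reach the new cluster, listing the nodes should show the worker nodes in the Ready state:

kubectl get nodes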

Enable image pulls from GCR

In order to authorize your PKS cluster to pull images from your local GCP project’s GCR registry, you must update the IAM permissions of the service account used by the worker nodes.

  • To determine this service account go to the PCF Ops Manager set up when your PKS environment was created, click the Pivotal Container Service tile, and then navigate to the Kubernetes Cloud Provider configuration page. The GCP Worker Service Account ID field will contain the ID of the worker nodes’ service account.

    Kubernetes Cloud Provider configuration page

  • Export the name of the service account and the ID of your current GCP project, and use the gsutil iam ch command to add the objectViewer role, permitting the service account to read from the GCS storage bucket backing your project’s GCR registry. A quick way to verify the resulting binding is shown after the commands below.

    export WORKER_SERVICE_ACCOUNT=mycluster-pks-worker-node@my-gcp-project.iam.gserviceaccount.com
    export GCP_PROJECT_ID=$(gcloud config get-value core/project)
    
    gsutil iam ch \
        serviceAccount:$WORKER_SERVICE_ACCOUNT:objectViewer \
        gs://artifacts.$GCP_PROJECT_ID.appspot.com
    

    or in Windows PowerShell

    $Env:WORKER_SERVICE_ACCOUNT="mycluster-pks-worker-node@my-gcp-project.iam.gserviceaccount.com"
    $Env:GCP_PROJECT_ID=(gcloud config get-value core/project)
    
    gsutil iam ch `
        "serviceAccount:$($Env:WORKER_SERVICE_ACCOUNT):objectViewer" `
        gs://artifacts.$Env:GCP_PROJECT_ID.appspot.com
    

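To verify the new binding, you can print the bucket’s IAM policy and confirm that the worker service account appears with roles/storage.objectViewer:

gsutil iam get gs://artifacts.$GCP_PROJECT_ID.appspot.com
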
Configure duffle

Set the environment variables required by the duffle Kubernetes driver, create a namespace for duffle, create a service account for duffle and give it cluster-admin permissions.

export SERVICE_ACCOUNT=duffle-runtime
export KUBE_NAMESPACE=duffle
kubectl create namespace $KUBE_NAMESPACE
kubectl create serviceaccount "${SERVICE_ACCOUNT}" -n "${KUBE_NAMESPACE}"
kubectl create clusterrolebinding "${SERVICE_ACCOUNT}-cluster-admin" --clusterrole cluster-admin --serviceaccount "${KUBE_NAMESPACE}:${SERVICE_ACCOUNT}"
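
To confirm the duffle service account and its cluster-admin binding were created, you can query them with kubectl:

kubectl get serviceaccount "${SERVICE_ACCOUNT}" -n "${KUBE_NAMESPACE}"
kubectl get clusterrolebinding "${SERVICE_ACCOUNT}-cluster-admin"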

Install PFS

Change to the directory with the downloaded PFS thick bundle and run duffle install with the relocation mapping file created during image relocation.

duffle install my-pfs pfs-bundle-thick.tgz --bundle-is-file \
  --relocation-mapping pfs-relmap.json \
  --driver k8s
Executing install action...
time="2019-08-15T10:00:56Z" level=info msg="Installing bundle components"
time="2019-08-15T10:00:56Z" level=info
time="2019-08-15T10:00:56Z" level=info msg="installing istio..."
time="2019-08-15T10:00:57Z" level=info msg="done installing istio"
time="2019-08-15T10:00:57Z" level=info msg="installing knative-build..."
time="2019-08-15T10:00:59Z" level=info msg="done installing knative-build"
time="2019-08-15T10:00:59Z" level=info msg="installing knative-serving..."
time="2019-08-15T10:01:14Z" level=info msg="done installing knative-serving"
time="2019-08-15T10:01:14Z" level=info msg="installing riff-system..."
time="2019-08-15T10:01:15Z" level=info msg="done installing riff-system"
time="2019-08-15T10:01:15Z" level=info msg="installing riff-application-build-template..."
time="2019-08-15T10:01:15Z" level=info msg="done installing riff-application-build-template"
time="2019-08-15T10:01:15Z" level=info msg="installing riff-function-build-template..."
time="2019-08-15T10:01:16Z" level=info msg="done installing riff-function-build-template"
time="2019-08-15T10:01:16Z" level=info msg="Kubernetes Application Bundle installed\n\n"

After the command completes, pods should be running in the istio-system, knative-build, knative-serving, and kube-system namespaces, similar to the output from kubectl get pods shown below.

kubectl get pods --all-namespaces
NAMESPACE         NAME                                                 READY   STATUS    RESTARTS   AGE
istio-system      cluster-local-gateway-7c46bdbc48-kgdq9               1/1     Running   0          5m9s
istio-system      istio-ingressgateway-5c879898cd-rqvgz                2/2     Running   0          5m9s
istio-system      istio-pilot-96844b8bc-td7mw                          1/1     Running   0          5m9s
knative-build     build-controller-54bc4d89b8-mckjd                    1/1     Running   0          5m8s
knative-build     build-webhook-69cb7d8685-5prkt                       1/1     Running   0          5m8s
knative-serving   activator-6c76ff6dcd-j7z7d                           1/1     Running   0          5m6s
knative-serving   autoscaler-5b58449d8d-h2hh6                          1/1     Running   0          5m6s
knative-serving   controller-5bf877dcc5-xmm4r                          1/1     Running   0          5m6s
knative-serving   networking-certmanager-85ddd75579-d75sl              1/1     Running   0          5m6s
knative-serving   networking-istio-55f6f5c9c5-s5x5x                    1/1     Running   0          5m6s
knative-serving   webhook-6fdbf9ff8f-9v684                             1/1     Running   0          5m6s
kube-system       heapster-6d5f964dbd-fgrlk                            1/1     Running   0          1h
kube-system       kube-dns-6b697fcdbd-76p4d                            3/3     Running   0          1h
kube-system       kubernetes-dashboard-785584f46b-bd68t                1/1     Running   0          1h
kube-system       metrics-server-5f68584c5b-n9wh4                      1/1     Running   0          1h
kube-system       monitoring-influxdb-54759946d4-wv29g                 1/1     Running   0          1h
kube-system       telemetry-agent-68c6647967-886bd                     1/1     Running   0          1h
pks-system        fluent-bit-dmrcv                                     1/1     Running   0          1h
pks-system        fluent-bit-fcvhk                                     1/1     Running   0          1h
pks-system        fluent-bit-lbwns                                     1/1     Running   0          1h
pks-system        fluent-bit-s8j7r                                     1/1     Running   0          1h
pks-system        sink-controller-7c85744bd6-4lbwq                     1/1     Running   0          1h

PFS is now installed. Next, you need to configure a container registry.

Configure GCR Credentials for a Namespace

  1. Get the ID of your current GCP project:

    GCP_PROJECT_ID=$(gcloud config get-value core/project)
    

    or in Windows PowerShell

    $Env:GCP_PROJECT_ID=(gcloud config get-value core/project)
    
  2. Create a service account with the required permission to push function images to GCR. To create a GCP service account named push-image run:

    gcloud iam service-accounts create push-image
    

    To grant the push-image account the storage.admin role run the following command:

    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin
    

    or in Windows PowerShell

    gcloud projects add-iam-policy-binding $Env:GCP_PROJECT_ID `
        --member serviceAccount:push-image@$Env:GCP_PROJECT_ID.iam.gserviceaccount.com `
        --role roles/storage.admin
    
  3. Create a private authentication key for the push service account and store it in a local file. To create a new key for the push-image service account and have it stored in a file called gcr-storage-admin.json run the following:

    gcloud iam service-accounts keys create \
        --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        gcr-storage-admin.json
    

    or in Windows PowerShell

    gcloud iam service-accounts keys create `
        --iam-account "push-image@$Env:GCP_PROJECT_ID.iam.gserviceaccount.com" `
        gcr-storage-admin.json
    
  4. Use the pfs CLI to apply the credentials to a Kubernetes namespace. The following command initializes the default namespace. Pass the path to the previously created private authentication key file using the --gcr flag. The --set-default-image-prefix flag defines a default GCR registry prefix for naming new container images. A quick way to verify the applied credential is shown after this list.

    pfs credential apply my-creds --gcr gcr-storage-admin.json --set-default-image-prefix
    

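The applied credential is stored as a Kubernetes secret in the target namespace. Assuming the default namespace and that the secret takes the credential name used above (my-creds), you can confirm it was created with:

kubectl get secret my-creds -n default
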
Optional: Enable Outbound Network Access

Knative blocks all outbound traffic by default. For PFS functions to call services outside the cluster, it is necessary to enable outbound network access. Details on how to do that are given in the Knative guide for configuring outbound network access. See Troubleshooting PFS for details on how to verify the outbound traffic configuration.
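
For example, in the Knative releases current at the time of writing, outbound access is controlled by the istio.sidecar.includeOutboundIPRanges key of the config-network ConfigMap in the knative-serving namespace; the exact key and recommended IP ranges depend on your Knative version and cluster network, so treat the following only as an illustration:

kubectl edit configmap config-network -n knative-serving

Setting istio.sidecar.includeOutboundIPRanges to "*" allows all outbound traffic from functions, which is convenient for testing but may not be appropriate for production clusters.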

You can now create your first function.