Installing PFS on GKE

This topic describes how to install Pivotal Function Service (PFS) on Google Kubernetes Engine (GKE).

Requirements

  • The kubectl CLI has been installed at version 1.10 or later.
  • The Google Cloud SDK, which provides the gcloud CLI, has been installed.
  • The duffle CNAB runtime CLI has been downloaded and installed.
  • The PFS thick bundle has been downloaded.
  • The PFS thick bundle has been relocated to a GCR repo in the current GCP project and the relocation mapping file has been saved (a sketch of this step is shown below).
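
The bundle relocation step is described in the PFS download and relocation documentation. As a rough sketch only (the exact duffle relocate flags can differ by release, so verify them against the instructions for your bundle version), relocation to GCR typically looks like the following, assuming the bundle file pfs-bundle-thick.tgz and mapping file pfs-relmap.json used later in this topic, with a repository prefix derived from your GCP project ID:

    # Illustrative sketch only -- confirm flags against the PFS relocation instructions for your release.
    duffle relocate pfs-bundle-thick.tgz --bundle-is-file \
        --repository-prefix gcr.io/$(gcloud config get-value core/project) \
        --relocation-mapping pfs-relmap.json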

Installation Steps

To perform the install, the GCP user account needs to have the “Owner” role for the GCP project.
  1. Verify that the Google Cloud APIs, Kubernetes Engine API, and Container Registry API are enabled in the current GCP project.

    gcloud services list
    
    NAME                              TITLE
    cloudapis.googleapis.com          Google Cloud APIs
    container.googleapis.com          Kubernetes Engine API
    containerregistry.googleapis.com  Container Registry API
    . . .
    

    If necessary, enable these services using the gcloud services enable command.

    gcloud services enable cloudapis.googleapis.com container.googleapis.com containerregistry.googleapis.com
    
  2. Create a new GKE cluster if a suitable one does not already exist. To meet Knative guidelines, Kubernetes v1.11 or newer is required. For evaluation purposes, a single-zone cluster with 3 nodes, each with 2 vCPUs and 7.5 GB of memory, should be sufficient.

    The following command will create a cluster named 'my-gke-cluster' in the 'us-east1-c' zone, using the most recent version of Kubernetes available in GKE.

    gcloud container clusters create my-gke-cluster \
        --cluster-version=latest --machine-type=n1-standard-2 \
        --enable-autoscaling --min-nodes=1 --max-nodes=3 --enable-autorepair \
        --scopes=cloud-platform --num-nodes=3 --zone=us-east1-c
    

    or in Windows PowerShell

    gcloud container clusters create my-gke-cluster `
      --cluster-version=latest --machine-type=n1-standard-2 `
      --enable-autoscaling --min-nodes=1 --max-nodes=3 --enable-autorepair `
      --scopes=cloud-platform --num-nodes=3 --zone=us-east1-c
    

    For a list of regions and quotas for your project, run gcloud compute regions list. For a list of zones, run gcloud compute zones list. The command above requires a zone with 6 available CPUs.
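
    To check how much CPU quota is available in a region before creating the cluster, you can describe the region and review the CPUS entry in its quotas list. The example below assumes the us-east1 region; adjust it to match your zone.

    gcloud compute regions describe us-east1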

  3. Use kubectl to verify that the gcloud container clusters create command set your kubectl context to the target GKE cluster.

    kubectl config current-context
    

    If necessary, you can create a new context with your credentials using the following gcloud command for a cluster called my-gke-cluster.

    gcloud container clusters get-credentials my-gke-cluster
    
  4. Grant yourself cluster-admin permissions.

    kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
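
    As an optional sanity check, you can ask the API server whether your account is now allowed to perform any action in any namespace, which should return yes for cluster-admin.

    kubectl auth can-i '*' '*' --all-namespaces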
    
  5. Set the environment variables required by the duffle Kubernetes driver, create a namespace for duffle, create a service account for duffle, and give it cluster-admin permissions.

    export SERVICE_ACCOUNT=duffle-runtime
    export KUBE_NAMESPACE=duffle
    kubectl create namespace $KUBE_NAMESPACE
    kubectl create serviceaccount "${SERVICE_ACCOUNT}" -n "${KUBE_NAMESPACE}"
    kubectl create clusterrolebinding "${SERVICE_ACCOUNT}-cluster-admin" --clusterrole cluster-admin --serviceaccount "${KUBE_NAMESPACE}:${SERVICE_ACCOUNT}"
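
    Optionally, verify that the service account and binding were created with the names set above.

    kubectl get serviceaccount "${SERVICE_ACCOUNT}" -n "${KUBE_NAMESPACE}"
    kubectl get clusterrolebinding "${SERVICE_ACCOUNT}-cluster-admin"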
    
  6. Change to the directory with the downloaded PFS thick bundle and run duffle install with the relocation mapping file created during image relocation.

    duffle install my-pfs pfs-bundle-thick.tgz --bundle-is-file \
        --relocation-mapping pfs-relmap.json \
        --driver k8s
    
    Executing install action...
    time="2019-08-15T10:00:56Z" level=info msg="Installing bundle components"
    time="2019-08-15T10:00:56Z" level=info
    time="2019-08-15T10:00:56Z" level=info msg="installing istio..."
    time="2019-08-15T10:00:57Z" level=info msg="done installing istio"
    time="2019-08-15T10:00:57Z" level=info msg="installing knative-build..."
    time="2019-08-15T10:00:59Z" level=info msg="done installing knative-build"
    time="2019-08-15T10:00:59Z" level=info msg="installing knative-serving..."
    time="2019-08-15T10:01:14Z" level=info msg="done installing knative-serving"
    time="2019-08-15T10:01:14Z" level=info msg="installing riff-system..."
    time="2019-08-15T10:01:15Z" level=info msg="done installing riff-system"
    time="2019-08-15T10:01:15Z" level=info msg="installing riff-application-build-template..."
    time="2019-08-15T10:01:15Z" level=info msg="done installing riff-application-build-template"
    time="2019-08-15T10:01:15Z" level=info msg="installing riff-function-build-template..."
    time="2019-08-15T10:01:16Z" level=info msg="done installing riff-function-build-template"
    time="2019-08-15T10:01:16Z" level=info msg="Kubernetes Application Bundle installed\n\n"
    

    After the command completes, pods should be running successfully in the istio-system, knative-build, knative-serving, riff-system, and kube-system namespaces, similar to the output from kubectl get pods shown below.

    kubectl get pods --all-namespaces
    
    NAMESPACE         NAME                                              READY   STATUS    RESTARTS   AGE
    istio-system      cluster-local-gateway-54d6d6575b-tf2kg            1/1     Running   0          2m35s
    istio-system      istio-ingressgateway-68cf4b776f-dgt2r             2/2     Running   0          2m35s
    istio-system      istio-pilot-55cd97f5d5-wnl56                      1/1     Running   0          2m35s
    knative-build     build-controller-6fc9648cf8-bjgkm                 1/1     Running   0          2m34s
    knative-build     build-webhook-64dbc7b7db-qlnxz                    1/1     Running   0          2m34s
    knative-serving   activator-7597bb78f5-9nrjk                        1/1     Running   0          2m33s
    knative-serving   autoscaler-7f75cf6844-bvg4s                       1/1     Running   0          2m33s
    knative-serving   controller-76d6686f47-xrplh                       1/1     Running   0          2m33s
    knative-serving   networking-certmanager-595d7cb6b9-lrfx2           1/1     Running   0          2m32s
    knative-serving   networking-istio-6d6d9d85dd-cfffn                 1/1     Running   0          2m32s
    knative-serving   webhook-6f6f4648b8-kdsgk                          1/1     Running   0          2m32s
    kube-system       event-exporter-v0.2.4-5f88c66fb7-6gs77            2/2     Running   0          17m
    kube-system       fluentd-gcp-scaler-59b7b75cd7-4xmfj               1/1     Running   0          17m
    kube-system       fluentd-gcp-v3.2.0-54sw8                          2/2     Running   0          16m
    kube-system       fluentd-gcp-v3.2.0-7n67d                          2/2     Running   0          16m
    kube-system       fluentd-gcp-v3.2.0-fnfw6                          2/2     Running   0          16m
    kube-system       heapster-v1.6.1-7c4885fb6-h4jjv                   3/3     Running   0          16m
    kube-system       kube-dns-6987857fdb-759zm                         4/4     Running   0          17m
    kube-system       kube-dns-6987857fdb-z9wb2                         4/4     Running   0          17m
    kube-system       kube-dns-autoscaler-bb58c6784-bt7nv               1/1     Running   0          17m
    kube-system       kube-proxy-gke-pfs04-default-pool-0d11a115-4lvc   1/1     Running   0          17m
    kube-system       kube-proxy-gke-pfs04-default-pool-0d11a115-rlbj   1/1     Running   0          17m
    kube-system       kube-proxy-gke-pfs04-default-pool-0d11a115-xwsm   1/1     Running   0          17m
    kube-system       l7-default-backend-fd59995cd-mnxkx                1/1     Running   0          17m
    kube-system       metrics-server-v0.3.1-57c75779f-plhzj             2/2     Running   0          17m
    kube-system       prometheus-to-sd-bbgpv                            1/1     Running   0          17m
    kube-system       prometheus-to-sd-llqgg                            1/1     Running   0          17m
    kube-system       prometheus-to-sd-x99nz                            1/1     Running   0          17m
    riff-system       controller-f7868494b-jjp6x                        1/1     Running   0          2m18s
    riff-system       webhook-969869df6-gjlwf                           1/1     Running   0          2m18s
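
    If some pods are still starting, the following command (a convenience check, not part of the official steps) lists any pods that are not yet in the Running phase; an empty result means everything is up.

    kubectl get pods --all-namespaces --field-selector=status.phase!=Running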
    

PFS is now installed. Next you need to configure a container registry.

Configure GCR Credentials for a Namespace

  1. Get the ID of your current GCP project:

    GCP_PROJECT_ID=$(gcloud config get-value core/project)
    

    or in Windows PowerShell

    $Env:GCP_PROJECT_ID=(gcloud config get-value core/project)
    
  2. Create a service account with the required permission to push function images to GCR. To create a GCP service account named push-image, run:

    gcloud iam service-accounts create push-image
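
    To confirm the service account was created, list the service accounts in the project; push-image should appear in the output.

    gcloud iam service-accounts list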
    

    To grant the push-image account the storage.admin role, run the following command:

    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin
    

    or in Windows PowerShell

    gcloud projects add-iam-policy-binding $Env:GCP_PROJECT_ID `
        --member serviceAccount:push-image@$Env:GCP_PROJECT_ID.iam.gserviceaccount.com `
        --role roles/storage.admin
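
    Optionally, verify the role binding by inspecting the project IAM policy filtered to the new service account; the flatten, filter, and format expressions below follow standard gcloud usage and can be adjusted as needed.

    gcloud projects get-iam-policy $GCP_PROJECT_ID \
        --flatten="bindings[].members" \
        --filter="bindings.members:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --format="table(bindings.role)"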
    
  3. Create a private authentication key for the push service account and store it in a local file. To create a new key for the push-image service account and store it in a file called gcr-storage-admin.json, run the following:

    gcloud iam service-accounts keys create \
        --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        gcr-storage-admin.json
    

    or in Windows PowerShell

    gcloud iam service-accounts keys create `
        --iam-account "push-image@$Env:GCP_PROJECT_ID.iam.gserviceaccount.com" `
        gcr-storage-admin.json
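
    The key file grants push access to GCR, so treat it like a password. As an optional precaution on Linux or macOS, restrict its file permissions, and list the keys on the service account to confirm the new key exists.

    chmod 600 gcr-storage-admin.json
    gcloud iam service-accounts keys list --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com"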
    
  4. Use the pfs CLI to apply the credentials to a Kubernetes namespace. The following command initializes the default namespace, passing the path to the previously created private authentication key file with the --gcr flag. The --set-default-image-prefix flag defines a default GCR registry prefix for naming new container images.

    pfs credential apply my-creds --gcr gcr-storage-admin.json --set-default-image-prefix
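
    The applied credential is stored as a Kubernetes secret in the target namespace (here, default). Listing the secrets in that namespace is one way to confirm that the command succeeded; the exact secret name depends on the credential name used above (my-creds).

    kubectl get secrets --namespace default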
    

Optional: Enable Outbound Network Access

Knative blocks all outbound traffic by default. For PFS functions to call services outside the cluster, it is necessary to enable outbound network access. Details on how to do that are given in the Knative guide for configuring outbound network access. See Troubleshooting PFS for details on how to verify the outbound traffic configuration.
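
As an illustration of the approach described in that guide for Knative releases of this era, outbound access can be broadened by setting the istio.sidecar.includeOutboundIPRanges value in the config-network ConfigMap in the knative-serving namespace. The patch below allows all outbound IP ranges; check the Knative guide for the exact key and the recommended ranges for your version before applying it.

    kubectl patch configmap config-network --namespace knative-serving \
        --type merge --patch '{"data":{"istio.sidecar.includeOutboundIPRanges":"*"}}'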

You can now create your first function.