
Installing PKS on GCP


This topic describes how to install and configure Pivotal Container Service (PKS) on Google Cloud Platform (GCP).

Prerequisites

Before performing the procedures in this topic, you must have deployed and configured Ops Manager. For more information, see GCP Prerequisites and Resource Requirements.

If you use an instance of Ops Manager that you configured previously to install other runtimes, confirm the following settings before you install PKS:

  1. Navigate to the Ops Manager Installation Dashboard and click the BOSH Director tile.
  2. Open the Director Config pane.
  3. Select the Enable Post Deploy Scripts checkbox.
  4. Clear the Disable BOSH DNS server for troubleshooting purposes checkbox.
  5. Click the Installation Dashboard link to return to the Installation Dashboard.
  6. Click Review Pending Changes. Select all products you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.

    Note: In Ops Manager v2.2, the Review Pending Changes page is a Beta feature. If you deploy PKS to Ops Manager v2.2, you can skip this step.

  7. Click Apply Changes.

Step 1: Install PKS

To install PKS, do the following:

  1. Download the product file from Pivotal Network.
  2. Navigate to https://YOUR-OPS-MANAGER-FQDN/ in a browser to log in to the Ops Manager Installation Dashboard.
  3. Click Import a Product to upload the product file.
  4. Under Pivotal Container Service in the left column, click the plus sign to add this product to your staging area.

Step 2: Configure PKS

Click the orange Pivotal Container Service tile to start the configuration process.

Pivotal Container Service tile on the Ops Manager installation dashboard

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.

  2. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Note: You must select an additional AZ for balancing other jobs before clicking Save, but this selection has no effect in the current version of PKS.

    Assign AZs and Networks pane in Ops Manager

  3. Under Network, select the infrastructure subnet you created for the PKS API VM.

  4. Under Service Network, select the services subnet you created for Kubernetes cluster VMs.

  5. Click Save.

PKS API

Perform the following steps:

  1. Click PKS API.
  2. Under Certificate to secure the PKS API, provide your own certificate and private key pair. The certificate you enter here should cover the domain that routes to the PKS API VM with TLS termination on the ingress.

    (Optional) If you do not have a certificate and private key pair, you can have Ops Manager generate one for you, or you can generate a self-signed pair manually as shown in the example after this list. To have Ops Manager generate the certificate, perform the following steps:
    1. Select the Generate RSA Certificate link.
    2. Enter the wildcard domain for your API hostname. For example, if your PKS API domain is api.pks.example.com, then enter *.pks.example.com.
    3. Click Generate.
      PKS API certificate generation

      Note: Ops Manager requires a wildcard certificate. If you enter an FQDN when generating the certificate, the PKS installation fails.

  3. Under API Hostname (FQDN), enter a fully qualified domain name (FQDN) to access the PKS API. For example, api.pks.example.com.
  4. Under Worker VM Max in Flight, enter the maximum number of non-canary worker instances to create or resize in parallel within an availability zone.

    This field sets the max_in_flight variable, which limits how many instances of a component can start simultaneously when a cluster is created or resized. The variable defaults to 1, which means that only one component starts at a time. For example, with a value of 4, up to four non-canary worker instances can be created or resized in parallel within an AZ.
  5. Click Save.
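
If you prefer to generate the certificate and private key pair outside of Ops Manager, one option for test environments is a self-signed wildcard pair created with openssl. This is a minimal sketch, not part of the official procedure: the domain matches the example above, the file names are arbitrary, and the -addext flag requires OpenSSL 1.1.1 or later. Self-signed certificates are generally suitable only for test environments.

    # Generate a self-signed wildcard certificate and key (test environments only).
    $ openssl req -x509 -nodes -newkey rsa:2048 \
        -keyout pks-api.key -out pks-api.crt -days 365 \
        -subj "/CN=*.pks.example.com" \
        -addext "subjectAltName=DNS:*.pks.example.com"

You can then paste the contents of pks-api.crt and pks-api.key into the certificate and private key fields.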

Plans

To activate a plan, perform the following steps:

  1. Click the Plan 1, Plan 2, or Plan 3 tab.

    Note: A plan defines a set of resource types used for deploying clusters. You can configure up to three plans. You must configure Plan 1.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
  3. Under Name, provide a unique name for the plan.
  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
  5. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter either 1 or 3.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  6. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, see the Master Node VM Size section of VM Sizing for PKS Clusters.
  7. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.
  8. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by PKS. If you select more than one AZ, PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs.
  9. Under Worker Node Instances, select the default number of Kubernetes worker nodes to provision for each cluster. For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use persistent volumes. For example, if you deploy across three AZs, use six worker nodes. For more information about persistent volumes, see Persistent Volumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    Plan pane configuration, part two

  10. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, see the Worker Node VM Number and Size section of VM Sizing for PKS Clusters.

    Note: If you install PKS v1.1.5 or later in an NSX-T environment, we recommend that you select a Worker VM Type with a minimum disk size of 16 GB. The disk space provided by the default “medium” Worker VM Type is insufficient for PKS v1.1.5 or later with NSX-T.

  11. Under Worker Persistent Disk Type, select the size of the persistent disk for the Kubernetes worker node VMs.

  12. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. PKS deploys worker nodes equally across the AZs you select.

  13. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  14. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator, as shown in the example after this list. For more information, see Adding Custom Workloads.

  15. (Optional) To allow users to create pods with privileged containers, select the Enable Privileged Containers - Use with caution option. For more information, see Pods in the Kubernetes documentation.

  16. (Optional) To disable the DenyEscalatingExec admission controller, select the Disable DenyEscalatingExec checkbox. Selecting this option can create security vulnerabilities in the clusters in this plan that may impact other tiles. Use this feature with caution.

  17. Click Save.
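
For reference, the following is a minimal sketch of the kind of YAML you might paste into the Add-ons field, written to a local file first so that you can keep it under version control. The two documents, separated by ---, create a namespace and a ConfigMap; the names are illustrative only and are not part of PKS.

    # Write the add-on YAML to a file; paste its contents into the Add-ons field.
    $ cat > plan1-addons.yml <<'EOF'
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: custom-workloads
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-info
      namespace: custom-workloads
    data:
      owner: platform-team
    EOF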

To deactivate a plan, perform the following steps:

  1. Click the Plan 1, Plan 2, or Plan 3 tab.
  2. Select Plan Inactive.
  3. Click Save.

Kubernetes Cloud Provider

To configure your Kubernetes cloud provider settings, follow the procedures below:

  1. Click Kubernetes Cloud Provider.

  2. Under Choose your IaaS, select GCP.

  3. Ensure the values you enter in the following procedure match those in the Google Config section of the Ops Manager tile. If you have the gcloud CLI installed, you can also look these values up from the command line, as shown in the example after this list.

    GCP pane configuration

    1. Enter your GCP Project Id, which is the ID of the GCP project in which your Ops Manager environment is deployed. To find the project ID, go to BOSH Director for GCP > Google Config > Project ID.
    2. Enter your VPC Network, which is the VPC network name for your Ops Manager environment.
    3. Enter your GCP Master Service Account ID. This is the email address associated with the master node service account. For information about configuring this account, see the Create the Master Node Service Account step of Preparing GCP Before Deploying PKS.
    4. Enter your GCP Worker Service Account ID. This is the email address associated with the worker node service account. For information about configuring this account, see the Create the Worker Node Service Account step of Preparing GCP Before Deploying PKS.
  4. Click Save.
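
If you have the gcloud CLI installed and authenticated against the project that hosts your Ops Manager environment, a sketch like the following can retrieve the values above from the command line:

    $ gcloud config get-value project                              # GCP project ID
    $ gcloud compute networks list --format="value(name)"          # VPC network names
    $ gcloud iam service-accounts list --format="value(email)"     # service account email addresses

The first command prints the project ID, the second lists VPC network names, and the third lists service account email addresses, including the master and worker node service accounts.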

(Optional) Logging

You can designate an external syslog endpoint for PKS component and cluster log messages.

To specify the destination for PKS log messages, do the following:

  1. Click Logging.
  2. To enable syslog forwarding, select Yes.

    Enable syslog forwarding

  3. Under Address, enter the destination syslog endpoint.

  4. Under Port, enter the destination syslog port.

  5. Select a transport protocol for log forwarding. To confirm that the endpoint is reachable over your chosen protocol, see the connectivity check after this list.

  6. (Optional) Pivotal strongly recommends that you enable TLS encryption when forwarding logs, because the logs may contain sensitive information such as cloud provider credentials. To enable TLS, perform the following steps:

    1. Under Permitted Peer, provide the accepted fingerprint (SHA1) or name of the remote peer. For example, *.YOUR-LOGGING-SYSTEM.com.
    2. Under TLS Certificate, provide a TLS certificate for the destination syslog endpoint.

      Note: You do not need to provide a new certificate if the TLS certificate for the destination syslog endpoint is signed by a Certificate Authority (CA) in your BOSH certificate store.

  7. To enable clusters to drain app logs to sinks using syslog://, select the Enable Sink Resources checkbox. For more information about using sink resources, see Creating Sink Resources.
    Enable sink resource checkbox

  8. Click Save.
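
You can confirm that the syslog endpoint is reachable from your network before applying changes. The host name and ports below are placeholders; substitute the address, port, and protocol you entered above:

    # Plain TCP endpoint (for UDP, add -u):
    $ nc -vz logs.example.com 514

    # TLS endpoint; this also prints the certificate chain the server presents:
    $ openssl s_client -connect logs.example.com:6514 </dev/null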

Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select Flannel.

  3. (Optional) If you do not use a NAT instance, select Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent). Enabling this functionality assigns external IP addresses to VMs in clusters.

  4. Click Save.

UAA

To configure the UAA server, do the following:

  1. Click UAA.
  2. Under PKS CLI Access Token Lifetime, enter a time in seconds for the PKS CLI access token lifetime.
    UAA pane configuration
  3. Under PKS CLI Refresh Token Lifetime, enter a time in seconds for the PKS CLI refresh token lifetime.
  4. Select one of the following options:
    • To use an internal user account store for UAA, select Internal UAA. Click Save and continue to (Optional) Monitoring.
    • To use an external user account store for UAA, select LDAP Server and continue to Configure LDAP as an Identity Provider.

      Note: Selecting LDAP Server allows admin users to give cluster access to groups of users. For more information about performing this procedure, see Grant Cluster Access to a Group in Managing Users in PKS with UAA.

Configure LDAP as an Identity Provider

To integrate UAA with one or more LDAP servers, configure PKS with your LDAP endpoint information as follows:

  1. Under UAA, select LDAP Server.
    LDAP Server configuration pane

  2. For Server URL, enter the URLs that point to your LDAP server. If you have multiple LDAP servers, separate their URLs with spaces. Each URL must include one of the following protocols:

    • ldap://: Use this protocol if your LDAP server uses an unencrypted connection.
    • ldaps://: Use this protocol if your LDAP server uses SSL for an encrypted connection. To support an encrypted connection, the LDAP server must hold a trusted certificate or you must import a trusted certificate to the JVM truststore.
  3. For LDAP Credentials, enter the LDAP Distinguished Name (DN) and password for binding to the LDAP server. For example, cn=administrator,ou=Users,dc=example,dc=com. If the bind user belongs to a different search base, you must use the full DN.

    Note: We recommend that you provide LDAP credentials that grant read-only permissions on the LDAP search base and the LDAP group search base.

  4. For User Search Base, enter the location in the LDAP directory tree where LDAP user search begins. The LDAP search base typically matches your domain name.

    For example, a domain named cloud.example.com may use ou=Users,dc=example,dc=com as its LDAP user search base.

  5. For User Search Filter, enter a string to use for LDAP user search criteria. The search criteria allow LDAP to perform more effective and efficient searches. For example, the standard LDAP search filter cn=Smith returns all objects with a common name equal to Smith.

    In the LDAP search filter string that you use to configure PKS, use {0} instead of the username. For example, use cn={0} to return all LDAP objects with the same common name as the username.

    In addition to cn, other common attributes are mail, uid, and, in the case of Active Directory, sAMAccountName. You can test your bind credentials and search filter with the ldapsearch example after this list.

    Note: For information about testing and troubleshooting your LDAP search filters, see Configuring LDAP Integration with Pivotal Cloud Foundry.

  6. For Group Search Base, enter the location in the LDAP directory tree where the LDAP group search begins.

    For example, a domain named cloud.example.com may use ou=Groups,dc=example,dc=com as its LDAP group search base.

    Follow the instructions in the Grant PKS Access to an External LDAP Group section of Managing Users in PKS with UAA to map the groups under this search base to roles in PKS.

  7. For Group Search Filter, enter a string that defines LDAP group search criteria. The standard value is member={0}.

  8. For Server SSL Cert, paste in the root certificate from your CA certificate or your self-signed certificate.
    LDAP Server configuration pane

  9. For Server SSL Cert AltName, do one of the following:

    • If you are using ldaps:// with a self-signed certificate, enter a Subject Alternative Name (SAN) for your certificate.
    • If you are not using ldaps:// with a self-signed certificate, leave this field blank.
  10. For First Name Attribute, enter the attribute name in your LDAP directory that contains user first names. For example, cn.

  11. For Last Name Attribute, enter the attribute name in your LDAP directory that contains user last names. For example, sn.

  12. For Email Attribute, enter the attribute name in your LDAP directory that contains user email addresses. For example, mail.

  13. For Email Domain(s), enter a comma-separated list of the email domains for external users who can receive invitations to Apps Manager.

  14. For LDAP Referrals, choose how UAA handles LDAP server referrals to other user stores. UAA can follow the external referrals, ignore them without returning errors, or generate an error for each external referral and abort the authentication.

  15. For External Groups Whitelist, enter a comma-separated list of group patterns to be populated in the user’s id_token. For more information about accepted patterns, see the description of config.externalGroupsWhitelist in the OAuth/OIDC Identity Provider Documentation.

    Note: When the id_token is sent as a Bearer token in the Authorization header, wide pattern queries for users who are members of multiple groups can cause the token to exceed the size that web servers support.

    External Groups Whitelist field

  16. Click Save.
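
To validate your LDAP settings before saving, you can reproduce the bind and search that UAA performs by using the ldapsearch tool. This sketch reuses the example values from the steps above, with jsmith standing in for a real username in place of {0}:

    $ ldapsearch -H ldaps://ldap.example.com \
        -D "cn=administrator,ou=Users,dc=example,dc=com" -W \
        -b "ou=Users,dc=example,dc=com" "(cn=jsmith)"

If the command prompts for the bind password and returns the expected user entry, your server URL, credentials, search base, and filter are consistent.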

(Optional) Configure OpenID Connect

You can use OpenID Connect (OIDC) to instruct Kubernetes to verify end-user identities based on authentication performed by an authorization server, such as UAA.

To configure PKS to use OIDC, select Enable UAA as OIDC provider. With OIDC enabled, Admin Users can grant cluster-wide access to Kubernetes end users.

OIDC configuration checkbox

For more information about configuring OIDC, see the following descriptions:

  • OIDC disabled: If you do not enable OIDC, Kubernetes authenticates users against its internal user management system.
  • OIDC enabled: If you enable OIDC, Kubernetes uses the authentication mechanism that you selected in UAA:
    • If you selected Internal UAA, Kubernetes authenticates users against the internal UAA authentication mechanism.
    • If you selected LDAP Server, Kubernetes authenticates users against the LDAP server.

For additional information on getting credentials with OIDC configured, see Retrieve Cluster Credentials in Retrieving Cluster Credentials and Configuration.

Note: When you enable OIDC, existing PKS-provisioned Kubernetes clusters are upgraded to use OIDC. This invalidates your kubeconfig files. You must regenerate the files for all clusters.
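
For example, you can regenerate the kubeconfig entry for a cluster with the PKS CLI, where my-cluster is a placeholder for your cluster name. pks get-credentials fetches fresh credentials for the named cluster and updates your local kubeconfig:

    $ pks get-credentials my-cluster
    $ kubectl config current-context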

(Optional) Monitoring

You can monitor Kubernetes cluster and pod metrics externally by using the integration with Wavefront by VMware.

Note: Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. During configuration, you provide your Wavefront access token and enable the alert errands. For additional information, see Pivotal Container Service Integration Details in the Wavefront documentation.

By default, monitoring is disabled. To enable and configure Wavefront monitoring, do the following:

  1. Under Wavefront Integration, select Yes.
    Monitoring pane configuration
  2. Under Wavefront URL, enter the URL of your Wavefront subscription. For example, https://try.wavefront.com/api.
  3. Under Wavefront Access Token, enter the API token for your Wavefront subscription.
  4. To configure Wavefront to send alerts by email, enter email addresses or Wavefront Target IDs separated by commas under Wavefront Alert Recipient. For example: user@example.com,Wavefront_TargetID. To create alerts, you must enable errands.
  5. In the Errands tab, enable Create pre-defined Wavefront alerts errand and Delete pre-defined Wavefront alerts errand.
  6. Click Save. Your settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes.

    Note: The PKS tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.

Usage Data

VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry) provide VMware and Pivotal with information that enables the companies to improve their products and services, fix problems, and advise you on how best to deploy and use our products. As part of the CEIP and Telemetry, VMware and Pivotal collect technical information about your organization’s use of the Pivotal Container Service (“PKS”) on a regular basis. Since PKS is jointly developed and sold by VMware and Pivotal, we will share this information with one another. Information collected under CEIP or Telemetry does not personally identify any individual.

Regardless of your selection in the Usage Data pane, a small amount of data is sent from Cloud Foundry Container Runtime (CFCR) to the PKS tile. However, that data is not shared externally.

To configure the Usage Data pane:

  1. Select the Usage Data side-tab.
  2. Read the Usage Data description.
  3. Make your selection.
    1. To join the program, select Yes, I want to join the CEIP and Telemetry Program for PKS.
    2. To decline joining the program, select No, I do not want to join the CEIP and Telemetry Program for PKS.
  4. Click Save.

Note: If you join the CEIP and Telemetry Program for PKS, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph-prd on port 443.
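
To verify that the telemetry endpoint is reachable from your environment, you can run a quick connectivity check such as the following, where nc is the netcat utility:

    $ nc -vz vcsa.vmware.com 443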

Errands

Errands are scripts that run at designated points during an installation.

To configure when post-deploy and pre-delete errands for PKS are run, make a selection in the dropdown next to the errand. For a typical PKS deployment, we recommend that you leave the default settings.

Errand configuration pane

For more information about errands and their configuration state, see Managing Errands in Ops Manager.

WARNING: Because PKS uses floating stemcells, updating the PKS tile with a new stemcell triggers the rolling of every VM in each cluster. Also, updating other product tiles in your deployment with a new stemcell causes the PKS tile to roll VMs. This rolling is enabled by the Upgrade all clusters errand. We recommend that you keep this errand turned on because automatic rolling of VMs ensures that all deployed cluster VMs are patched. However, automatic rolling can cause downtime in your deployment.

If you upgrade PKS from v1.0.x to v1.1, you must enable the Upgrade all clusters errand. This ensures that existing clusters can perform resize or delete actions after the upgrade.

Resource Config

To modify the resource usage of PKS and specify your PKS API load balancer, follow the steps below:

  1. Select Resource Config.

  2. (Optional) Edit resources used by the Pivotal Container Service job.

    Resource pane configuration

  3. In the Load Balancers column, enter a name for your PKS API load balancer that begins with tcp:. For example, tcp:pks-api, where pks-api is the name that you configured in the Create a Load Balancer section of Creating a GCP Load Balancer for the PKS API. To confirm the load balancer name, see the gcloud example after the note below.

Note: If you experience timeouts or slowness when interacting with the PKS API, select a VM Type with greater CPU and memory resources for the Pivotal Container Service job.
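
To confirm the name of the load balancer you created for the PKS API, you can list your GCP target pools or forwarding rules with the gcloud CLI, assuming it is authenticated against the same project. Which command applies depends on how you created the load balancer:

    $ gcloud compute target-pools list --format="value(name)"
    $ gcloud compute forwarding-rules list

The value you enter in the Load Balancers column is this name prefixed with tcp:.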

Step 3: Apply Changes

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.

    Note: In Ops Manager v2.2, the Review Pending Changes page is a Beta feature. If you deploy PKS to Ops Manager v2.2, you can skip this step.

  3. Click Apply Changes.

Step 4: Retrieve the PKS API Endpoint

You must share the PKS API endpoint to allow your organization to use the API to create, update, and delete clusters. For more information, see Creating Clusters.

To retrieve the PKS API endpoint, do the following:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the Pivotal Container Service tile.
  3. Click the Status tab and locate the Pivotal Container Service job. The IP address of the Pivotal Container Service job is the PKS API endpoint.
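
Once you have the endpoint, or the FQDN that resolves to it, you can confirm that the PKS API is reachable by logging in with the PKS CLI. The values below are placeholders: --ca-cert points to the certificate you configured in the PKS API pane, or you can pass -k instead to skip certificate verification in test environments.

    $ pks login -a api.pks.example.com -u USERNAME -p PASSWORD \
        --ca-cert /path/to/pks-api.crt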

Step 5: Configure External Load Balancer

Follow the procedure in the Create a Network Tag for the Firewall Rule section of Creating a GCP Load Balancer for the PKS API.

Next Steps

After installing PKS on GCP, you may want to do one or more of the following:

Install the PKS and Kubernetes CLIs

The PKS and Kubernetes CLIs help you interact with your PKS-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, see Installing the PKS CLI and Installing the Kubernetes CLI.

Configure PKS API Access

Follow the procedures in Configuring PKS API Access.

Configure Authentication for PKS

Configure authentication for PKS using User Account and Authentication (UAA). For information, see Managing Users in PKS with UAA.

