
Installing Enterprise PKS on GCP


This topic describes how to install and configure Enterprise Pivotal Container Service (Enterprise PKS) on Google Cloud Platform (GCP).

Prerequisites

Before installing Enterprise PKS on GCP, you must complete the prerequisites in GCP Prerequisites and Resource Requirements.

If you use an instance of Ops Manager that you configured previously to install other runtimes, before you install Enterprise PKS, do the following:

  1. Navigate to Ops Manager.
  2. Open the Director Config pane.
  3. Select the Enable Post Deploy Scripts checkbox.
  4. Click the Installation Dashboard link to return to the Installation Dashboard.
  5. Click Review Pending Changes. Select all products you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.
  6. Click Apply Changes.

Step 1: Install Enterprise PKS

To install Enterprise PKS, do the following:

  1. Download the product file from Pivotal Network.
  2. Navigate to https://YOUR-OPS-MANAGER-FQDN/ in a browser to log in to the Ops Manager Installation Dashboard.
  3. Click Import a Product to upload the product file.
  4. Under Enterprise PKS in the left column, click the plus sign to add this product to your staging area.

Step 2: Configure Enterprise PKS

Click the orange Enterprise PKS tile to start the configuration process.

PKS tile on the Ops Manager installation dashboard

Warning: When you configure the Enterprise PKS tile, do not use spaces in any field entries. This includes spaces between characters as well as leading and trailing spaces. If you use a space in any field entry, the deployment of Enterprise PKS fails.

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.

  2. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Note: You must select an additional AZ for balancing other jobs before clicking Save, but this selection has no effect in the current version of Enterprise PKS.

    Assign AZs and Networks pane in Ops Manager

  3. Under Network, select the infrastructure subnet that you created for the PKS API VM.

  4. Under Service Network, select the services subnet that you created for Kubernetes cluster VMs.

  5. Click Save.

PKS API

Perform the following steps:

  1. Click PKS API.

  2. Under Certificate to secure the PKS API, provide your own certificate and private key pair.
    PKS API pane configuration
    The certificate that you supply should cover the domain that routes to the PKS API VM with TLS termination on the ingress. If you want to create your own certificate and key pair, a sample openssl command appears after these steps.

    If you do not have a certificate and private key pair, Ops Manager can generate one for you. To generate a certificate, do the following:

    1. Select the Generate RSA Certificate link.
    2. Enter the domain for your API hostname. This can be a standard FQDN or a wildcard domain.
    3. Click Generate.
      PKS API certificate generation

      Note: If you deployed a global HTTP load balancer for Ops Manager without a certificate, you can configure the load balancer to use this newly-generated certificate. To configure your Ops Manager load balancer front end certificate, see Configure Front End in Preparing to Deploy Ops Manager on GCP Manually.

  3. Under API Hostname (FQDN), enter the FQDN that you registered to point to the PKS API load balancer, such as api.pks.example.com. To retrieve the public IP address or FQDN of the PKS API load balancer, log in to your IaaS console.

  4. Under Worker VM Max in Flight, enter the maximum number of non-canary worker instances to create or resize in parallel within an availability zone.

    This field sets the max_in_flight variable, which limits how many instances of a component can start simultaneously when a cluster is created or resized. The variable defaults to 1, which means that only one component starts at a time.

  5. Click Save.
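
If you plan to supply your own certificate and key pair rather than generating one in Ops Manager, you can create a self-signed pair for a test environment with openssl. This is a minimal sketch; the hostname api.pks.example.com and the file names are placeholders, and a production deployment should use a certificate signed by a trusted CA.

  # Example only: create a self-signed certificate and key for a test environment.
  # Replace api.pks.example.com with the FQDN that routes to your PKS API load balancer.
  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout pks-api.key -out pks-api.crt \
    -subj "/CN=api.pks.example.com"

Paste the contents of pks-api.crt into the certificate field and the contents of pks-api.key into the private key field.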

Plans

To activate a plan, perform the following steps:

  1. Click the plan that you want to activate.

    Note: A plan defines a set of resource types used for deploying clusters. You can configure up to ten plans. You must configure Plan 1.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
    Plan pane configuration
  3. Under Name, provide a unique name for the plan.
  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
  5. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter 1, 3, or 5.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Enterprise PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  6. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the Master Node VM Size section of VM Sizing for Enterprise PKS Clusters.
  7. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.
  8. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by Enterprise PKS. If you select more than one AZ, Enterprise PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs. If you are using multiple masters, Enterprise PKS deploys the master and worker VMs across the AZs in round-robin fashion.
  9. Under Maximum number of workers on a cluster, set the maximum number of Kubernetes worker node VMs that Enterprise PKS can deploy for each cluster. Enter any whole number in this field.

    Plan pane configuration, part two

  10. Under Worker Node Instances, select the default number of Kubernetes worker nodes to provision for each cluster.

    If the user creating a cluster with the PKS CLI does not specify a number of worker nodes, the cluster is deployed with the default number set in this field. This value cannot be greater than the maximum worker node value you set in the previous field. For more information about creating clusters, see Creating Clusters. A sample pks create-cluster command appears after these steps.

    For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see PersistentVolumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    If you later reconfigure the plan to adjust the default number of worker nodes, the existing clusters that have been created from that plan are not automatically upgraded with the new default number of worker nodes.

  11. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see the Worker Node VM Number and Size section of VM Sizing for Enterprise PKS Clusters.

    Note: If you install Enterprise PKS in an NSX-T environment, we recommend that you select a Worker VM Type with a minimum disk size of 16 GB. The disk space provided by the default medium Worker VM Type is insufficient for Enterprise PKS with NSX-T.

  12. Under Worker Persistent Disk Type, select the size of the persistent disk for the Kubernetes worker node VMs.

  13. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. Enterprise PKS deploys worker nodes equally across the AZs you select.

  14. Under Kubelet customization - system-reserved, enter resource values that Kubelet can use to reserve resources for system daemons. For example, memory=250Mi, cpu=150m. For more information about system-reserved values, see the Kubernetes documentation.

  15. Under Kubelet customization - eviction-hard, enter threshold limits that Kubelet can use to evict pods when they exceed the limit. Enter limits in the format EVICTION-SIGNAL=QUANTITY. For example, memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%. For more information about eviction thresholds, see the Kubernetes documentation.

    WARNING: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.

  16. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  17. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator. For more information, see Adding Custom Workloads.
    Plan pane configuration

  18. (Optional) To allow users to create pods with privileged containers, select the Allow Privileged option. For more information, see Pods in the Kubernetes documentation.

  19. (Optional) Enable or disable one or more admission controller plugins: PodSecurityPolicy, DenyEscalatingExec, and SecurityContextDeny. See Admission Plugins for more information.

  20. Click Save.
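
For reference, developers select a plan by name when creating clusters with the PKS CLI. The following sketch assumes a plan named small and a placeholder hostname; adjust both for your environment:

  pks create-cluster my-cluster \
    --external-hostname my-cluster.pks.example.com \
    --plan small \
    --num-nodes 3

If --num-nodes is omitted, the cluster is deployed with the Worker Node Instances default configured in the plan.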

To deactivate a plan, perform the following steps:

  1. Click the plan that you want to deactivate.
  2. Select Inactive.
  3. Click Save.

Kubernetes Cloud Provider

To configure your Kubernetes cloud provider settings, follow the procedures below:

  1. Click Kubernetes Cloud Provider.

  2. Under Choose your IaaS, select GCP.

  3. Ensure that the values you enter in the following fields match those in the Google Config section of the Ops Manager tile:

    GCP pane configuration

    1. Enter your GCP Project ID, which is the name of the deployment in your Ops Manager environment. To find the project ID, go to BOSH Director for GCP > Google Config > Project ID.
    2. Enter your VPC Network, which is the VPC network name for your Ops Manager environment.
    3. Enter your GCP Master Service Account ID. This is the email address associated with the master node service account.
      • If you are installing Enterprise PKS manually: You configured the master node service account in Create the Master Node Service Account in Creating Service Accounts in GCP for Enterprise PKS.
      • If you are installing Enterprise PKS with Terraform: Retrieve the master node service account ID by running terraform output and locating the value for pks_master_node_service_account_email.
    4. Enter your GCP Worker Service Account ID. This is the email address associated with the worker node service account.
      • If you are installing Enterprise PKS manually: You configured the worker node service account in Create the Worker Node Service Account in Creating Service Accounts in GCP for Enterprise PKS.
      • If you are installing Enterprise PKS with Terraform: Retrieve the worker node service account ID by running terraform output and locating the value for pks_worker_node_service_account_email. A sample command appears after these steps.
  4. Click Save.
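
If you deployed with Terraform, you can print both service account email addresses from your Terraform state before entering them in the fields above. This assumes your Terraform templates define the output names referenced in the steps above:

  terraform output pks_master_node_service_account_email
  terraform output pks_worker_node_service_account_email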

(Optional) Logging

You can designate an external syslog endpoint for forwarding BOSH-deployed VM logs.

In addition, you can enable sink resources to collect Kubernetes cluster and namespace log messages.

To configure logging in Enterprise PKS, do the following:

  1. Click Logging.
  2. To enable syslog forwarding for BOSH-deployed VM logs, select Yes.

    Enable syslog forwarding

  3. Under Address, enter the destination syslog endpoint.

  4. Under Port, enter the destination syslog port.

  5. Select a transport protocol for log forwarding.

  6. (Optional) Pivotal strongly recommends that you enable TLS encryption when forwarding logs as they may contain sensitive information. For example, these logs may contain cloud provider credentials. To enable TLS, perform the following steps:

    1. Under Permitted Peer, provide the accepted fingerprint (SHA1) or name of the remote peer. For example, *.YOUR-LOGGING-SYSTEM.com.
    2. Under TLS Certificate, provide a TLS certificate for the destination syslog endpoint.

      Note: You do not need to provide a new certificate if the TLS certificate for the destination syslog endpoint is signed by a Certificate Authority (CA) in your BOSH certificate store.

  7. To enable clusters to drain Kubernetes API events and pod logs to sinks, select the Enable Log Sink Resources checkbox. A sample log sink definition appears after these steps.

  8. To enable clusters to drain Kubernetes node and pod metrics to sinks, select the Enable Metric Sink Resources checkbox. For more information about using sink resources, see Creating Sink Resources.
    Enable metric sink resources checkbox

  9. Click Save.
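
After you enable log sink resources and deploy a cluster, cluster users create sinks with kubectl. The following is a minimal sketch of a cluster-wide syslog sink; the host value is a placeholder, and the exact apiVersion and field names can vary by release, so confirm them in Creating Sink Resources before using this:

  # Hypothetical example; verify the apiVersion and spec fields in Creating Sink Resources.
  apiVersion: apps.pivotal.io/v1beta1
  kind: ClusterLogSink
  metadata:
    name: example-syslog-sink
  spec:
    host: logs.example.com
    port: 514
    enable_tls: true

Save the definition to a file and apply it with kubectl apply -f.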

Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select Flannel.
    Networking pane configuration
  3. (Optional) Enter values for Kubernetes Pod Network CIDR Range and Kubernetes Service Network CIDR Range.
    • Ensure that the CIDR ranges do not overlap and have sufficient space for your deployed services.
    • Ensure that the CIDR range for the Kubernetes Pod Network CIDR Range is large enough to accommodate the expected maximum number of pods. Example values appear after these steps.
  4. (Optional) If you do not use a NAT instance, select Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent). Enabling this functionality assigns external IP addresses to VMs in clusters.

  5. Click Save.
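
If you override the CIDR ranges, choose blocks that do not overlap with each other, with your infrastructure and services subnets, or with any networks your workloads must reach. For illustration only, a hypothetical non-overlapping pair might look like this:

  Kubernetes Pod Network CIDR Range:      10.200.0.0/16    (roughly 65,000 pod addresses)
  Kubernetes Service Network CIDR Range:  10.100.200.0/24  (roughly 250 service addresses)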

UAA

To configure the UAA server, do the following:

  1. Click UAA.
  2. Under PKS API Access Token Lifetime, enter a time in seconds for the PKS API access token lifetime.
    UAA pane configuration
  3. Under PKS API Refresh Token Lifetime, enter a time in seconds for the PKS API refresh token lifetime.
  4. (Optional) Select Enable UAA as OIDC provider to grant cluster access to Kubernetes end users. For more information, see Grant Cluster Access in Managing Users in Enterprise PKS with UAA.

    Note: Do not select this option if you are integrating Enterprise PKS with VMware vRealize Operations (vROps), which is done in the Monitoring tab by selecting Deploy cAdvisor. The vROps Management Pack cannot discover and retrieve information from Kubernetes clusters if Enterprise PKS has OIDC enabled.

  5. Select one of the following options:
    • To use an internal user account store for UAA, select Internal UAA. Click Save and continue to (Optional) Monitoring.
    • To use an external user account store for UAA, select LDAP Server and continue to Configure LDAP as an Identity Provider.

      Note: Selecting LDAP Server allows admin users to give cluster access to groups of users. For more information about performing this procedure, see Grant Cluster Access to a Group in Managing Users in Enterprise PKS with UAA.

Configure LDAP as an Identity Provider

To integrate UAA with one or more LDAP servers, configure Enterprise PKS with your LDAP endpoint information as follows:

  1. Under UAA, select LDAP Server.
    LDAP Server configuration pane

  2. For Server URL, enter the URLs that point to your LDAP server. If you have multiple LDAP servers, separate their URLs with spaces. Each URL must include one of the following protocols:

    • ldap://: Use this protocol if your LDAP server uses an unencrypted connection.
    • ldaps://: Use this protocol if your LDAP server uses SSL for an encrypted connection. To support an encrypted connection, the LDAP server must hold a trusted certificate or you must import a trusted certificate to the JVM truststore.
  3. For LDAP Credentials, enter the LDAP Distinguished Name (DN) and password for binding to the LDAP server. For example, cn=administrator,ou=Users,dc=example,dc=com. If the bind user belongs to a different search base, you must use the full DN.

    Note: We recommend that you provide LDAP credentials that grant read-only permissions on the LDAP search base and the LDAP group search base.

  4. For User Search Base, enter the location in the LDAP directory tree where LDAP user search begins. The LDAP search base typically matches your domain name.

    For example, a domain named cloud.example.com may use ou=Users,dc=example,dc=com as its LDAP user search base.

  5. For User Search Filter, enter a string to use for LDAP user search criteria. The search criteria allow LDAP to perform more effective and efficient searches. For example, the standard LDAP search filter cn=Smith returns all objects with a common name equal to Smith.

    In the LDAP search filter string that you use to configure Enterprise PKS, use {0} instead of the username. For example, use cn={0} to return all LDAP objects with the same common name as the username.

    In addition to cn, other common attributes are mail, uid and, in the case of Active Directory, sAMAccountName.

    Note: For information about testing and troubleshooting your LDAP search filters, see Configuring LDAP Integration with Pivotal Cloud Foundry. A sample ldapsearch command appears after these steps.

  6. For Group Search Base, enter the location in the LDAP directory tree where the LDAP group search begins.

    For example, a domain named cloud.example.com may use ou=Groups,dc=example,dc=com as its LDAP group search base.

    Follow the instructions in the Grant Enterprise PKS Access to an External LDAP Group section of Managing Users in Enterprise PKS with UAA to map the groups under this search base to roles in Enterprise PKS.

  7. For Group Search Filter, enter a string that defines LDAP group search criteria. The standard value is member={0}.

  8. For Server SSL Cert, paste in the root certificate from your CA certificate or your self-signed certificate.
    LDAP Server configuration pane

  9. For Server SSL Cert AltName, do one of the following:

    • If you are using ldaps:// with a self-signed certificate, enter a Subject Alternative Name (SAN) for your certificate.
    • If you are not using ldaps:// with a self-signed certificate, leave this field blank.
  10. For First Name Attribute, enter the attribute name in your LDAP directory that contains user first names. For example, cn.

  11. For Last Name Attribute, enter the attribute name in your LDAP directory that contains user last names. For example, sn.

  12. For Email Attribute, enter the attribute name in your LDAP directory that contains user email addresses. For example, mail.

  13. For Email Domain(s), enter a comma-separated list of the email domains for external users who can receive invitations to Apps Manager.

  14. For LDAP Referrals, choose how UAA handles LDAP server referrals to other user stores. UAA can follow the external referrals, ignore them without returning errors, or generate an error for each external referral and abort the authentication.

  15. For External Groups Whitelist, enter a comma-separated list of group patterns that need to be populated in the user’s id_token. For more information about accepted patterns, see the description of config.externalGroupsWhitelist in the OAuth/OIDC Identity Provider Documentation.

    Note: When sent as a Bearer token in the Authentication header, wide pattern queries for users who are members of multiple groups can cause the size of the id_token to exceed what web servers support.

    External Groups Whitelist field

  16. Click Save.
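
To sanity-check a search base and filter outside of UAA, you can run an equivalent query with the standard ldapsearch tool. The host, bind DN, search base, and username below are placeholders:

  ldapsearch -x -H ldaps://ldap.example.com \
    -D "cn=administrator,ou=Users,dc=example,dc=com" -W \
    -b "ou=Users,dc=example,dc=com" "(cn=jsmith)"

A filter that returns the expected user here corresponds to a User Search Filter value of cn={0}.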

(Optional) Configure OpenID Connect

You can use OpenID Connect (OIDC) to instruct Kubernetes to verify end-user identities based on authentication performed by an authorization server, such as UAA.

To configure Enterprise PKS to use OIDC, select Enable UAA as OIDC provider. With OIDC enabled, Admin Users can grant cluster-wide access to Kubernetes end users.

OIDC configuration checkbox

For more information about configuring OIDC, see the following:

  • OIDC disabled: If you do not enable OIDC, Kubernetes authenticates users against its internal user management system.
  • OIDC enabled: If you enable OIDC, Kubernetes uses the authentication mechanism that you selected in UAA as follows:
    • If you selected Internal UAA, Kubernetes authenticates users against the internal UAA authentication mechanism.
    • If you selected LDAP Server, Kubernetes authenticates users against the LDAP server.

For additional information about getting credentials with OIDC configured, see Retrieve Cluster Credentials in Retrieving Cluster Credentials and Configuration.

Note: When you enable OIDC, existing Enterprise PKS-provisioned Kubernetes clusters are upgraded to use OIDC. This invalidates your kubeconfig files. You must regenerate the files for all clusters.
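
To regenerate a kubeconfig after enabling OIDC, each cluster user can fetch fresh credentials with the PKS CLI. The cluster name below is a placeholder:

  pks get-credentials my-cluster

This updates the local kubeconfig with a context for the cluster, after which kubectl commands can target it again.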

(Optional) Monitoring

In the Monitoring pane of the Enterprise PKS tile, you can choose to integrate Enterprise PKS with several external monitoring systems.

By default, monitoring is disabled.

Monitoring pane configuration

Wavefront

You can monitor Kubernetes clusters and pods metrics externally using the integration with Wavefront by VMware.

Note: Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. You provide your Wavefront access token during configuration, and you create alerts by enabling errands as described below. For additional information, see the Wavefront documentation.

To enable and configure Wavefront monitoring, do the following:

  1. In the Enterprise PKS tile, select Monitoring.
  2. Under Wavefront Integration, select Yes.
    Wavefront configuration
  3. Under Wavefront URL, enter the URL of your Wavefront subscription. For example:
    https://try.wavefront.com/api
    
  4. Under Wavefront Access Token, enter the API token for your Wavefront subscription.
  5. To configure Wavefront to send alerts by email, enter email addresses or Wavefront Target IDs separated by commas under Wavefront Alert Recipient, using the following syntax:

    USER-EMAIL,WAVEFRONT-TARGETID_001,WAVEFRONT-TARGETID_002
    

    Where:

    • USER-EMAIL is your alert recipient’s email address.
    • WAVEFRONT-TARGETID_001 and WAVEFRONT-TARGETID_002 are your comma-delimited Wavefront Target IDs.

    For example:

    randomuser@example.com,51n6psdj933ozdjf
    

To create alerts, you must enable errands in Enterprise PKS.

  1. In the Enterprise PKS tile, select Errands.
  2. On the Errands pane, enable Create pre-defined Wavefront alerts errand.
  3. Enable Delete pre-defined Wavefront alerts errand.
  4. Click Save. Your settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes.

The Enterprise PKS tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.

VMware vRealize Operations Management Pack for Container Monitoring

If you are using Enterprise PKS on vSphere or vSphere with NSX-T Data Center, you can monitor Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring.

To integrate Enterprise PKS with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running cAdvisor in your PKS deployment.

cAdvisor is an open source tool that provides monitoring and statistics for Kubernetes clusters.

To deploy a cAdvisor container, do the following:

  1. Select Monitoring.
  2. Under Deploy cAdvisor, select Yes.

For more information about integrating this type of monitoring with PKS, see the VMware vRealize Operations Management Pack for Container Monitoring User Guide and Release Notes in the VMware documentation.

Usage Data

VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry) provide VMware and Pivotal with information that enables the companies to improve their products and services, fix problems, and advise you on how best to deploy and use our products. As part of the CEIP and Telemetry, VMware and Pivotal collect technical information about your organization’s use of Enterprise PKS on a regular basis. Because Enterprise PKS is jointly developed and sold by VMware and Pivotal, the two companies share this information with one another. Information collected under CEIP or Telemetry does not personally identify any individual.

Regardless of your selection in the Usage Data pane, a small amount of data is sent from Cloud Foundry Container Runtime (CFCR) to the Enterprise PKS tile. However, that data is not shared externally.

To configure the Usage Data pane, perform the following steps:

  1. Select the Usage Data side-tab.
  2. Read the Usage Data description.
  3. Make your selection.
    1. To join the program, select Yes, I want to join the CEIP and Telemetry Program for PKS.
    2. To decline joining the program, select No, I do not want to join the CEIP and Telemetry Program for PKS.
  4. Click Save.

Note: If you join the CEIP and Telemetry Program for Enterprise PKS, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph-prd on port 443.

Errands

Errands are scripts that run at designated points during an installation.

To configure when post-deploy and pre-delete errands for Enterprise PKS are run, make a selection in the dropdown next to the errand.

We recommend that you set the Run smoke tests errand to On. The errand uses the PKS CLI to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the Enterprise PKS tile is aborted.

For the other errands, we recommend that you leave the default settings.

Errand configuration pane

For more information about errands and their configuration state, see Managing Errands in Ops Manager.

WARNING: Because Enterprise PKS uses floating stemcells, updating the Enterprise PKS tile with a new stemcell triggers the rolling of every VM in each cluster. Also, updating other product tiles in your deployment with a new stemcell causes the Enterprise PKS tile to roll VMs. This rolling is enabled by the Upgrade all clusters errand. We recommend that you keep this errand turned on because automatic rolling of VMs ensures that all deployed cluster VMs are patched. However, automatic rolling can cause downtime in your deployment.

If you are upgrading Enterprise PKS, you must enable the Upgrade All Clusters errand.

Resource Config

To modify the resource usage of Enterprise PKS and specify your PKS API load balancer, follow the steps below:

  1. Select Resource Config.

  2. In the Load Balancers column, enter the name of your PKS API load balancer, prefixed with tcp:. For example:

    tcp:PKS-API-LB
    

    Where PKS-API-LB is the name of your PKS API load balancer.

    You can find the name of your PKS API load balancer by doing one of the following (a sample gcloud command for listing load balancer resources appears after these steps):

    • If you are installing Enterprise PKS manually: The name of your PKS API load balancer is the name you configured in the Create a Load Balancer section of Creating a GCP Load Balancer for the PKS API.
    • If you are installing Enterprise PKS using Terraform: The name of your PKS API load balancer is the value of pks_lb_backend_name from terraform output.

    Note: After you click Apply Changes for the first time, BOSH assigns the PKS VM an IP address. BOSH uses the name you provide in the Load Balancers column to locate your load balancer and then connects the load balancer to the PKS VM using its new IP address.

  3. (Optional) Edit other resources used by the Pivotal Container Service job. The Pivotal Container Service job requires a VM with the following minimum resources:

    CPU: 2
    Memory: 8 GB
    Disk: 29 GB

    Resource pane configuration

    Note: The automatic VM Type value matches the minimum recommended size for the Pivotal Container Service job. If you experience timeouts or slowness when interacting with the PKS API, select a VM Type with greater CPU and memory resources.
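
If you are unsure which name to prefix with tcp:, you can list the load balancing resources in your GCP project with the gcloud CLI, assuming gcloud is installed and authenticated against the project that contains your PKS API load balancer:

  gcloud compute forwarding-rules list
  gcloud compute target-pools list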

Step 3: Apply Changes

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.
  3. Click Apply Changes.

Step 4: Retrieve the PKS API Endpoint

You must share the PKS API endpoint to allow your organization to use the API to create, update, and delete clusters. For more information, see Creating Clusters.

To retrieve the PKS API endpoint, do the following:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the Enterprise PKS tile.
  3. Click the Status tab and locate the Pivotal Container Service job. The IP address of the Pivotal Container Service job is the PKS API endpoint.
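
Once the FQDN you registered for the PKS API resolves to this endpoint, you can verify access by logging in with the PKS CLI. The hostname, username, password, and certificate path below are placeholders:

  pks login -a api.pks.example.com -u USERNAME -p PASSWORD --ca-cert /path/to/pks-api-ca.crt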

Step 5: Configure External Load Balancer

If you are installing Enterprise PKS manually, follow the procedure in the Create a Network Tag for the Firewall Rule section of Creating a GCP Load Balancer for the PKS API.

Step 6: Install the PKS and Kubernetes CLIs

The PKS CLI and the Kubernetes CLI help you interact with your Enterprise PKS-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, see Installing the PKS CLI and Installing the Kubernetes CLI.
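
After you install the CLIs, you can confirm that they are available on your PATH:

  pks --version
  kubectl version --client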

Step 7: Configure PKS API Access

Follow the procedures in Configuring PKS API Access.

Step 8: Configure Authentication for Enterprise PKS

Configure authentication for PKS using User Account and Authentication (UAA). For information, see Managing Users in Enterprise PKS with UAA.

Next Steps

After installing Enterprise PKS on GCP, you may want to do one or more of the following:


