
Installing PKS on vSphere with NSX-T Integration

This topic describes how to install and configure Pivotal Container Service (PKS) on vSphere with NSX-T integration.

Prerequisites

Before performing the procedures in this topic, you must have deployed and configured Ops Manager. For more information, see vSphere with NSX-T Prerequisites and Resource Requirements.

If you use an instance of Ops Manager that you configured previously to install other runtimes, confirm the following settings before you install PKS:

  1. Navigate to Ops Manager.
  2. Open the Director Config pane.
  3. Select the Enable Post Deploy Scripts checkbox.
  4. Clear the Disable BOSH DNS server for troubleshooting purposes checkbox.
  5. Click the Installation Dashboard link to return to the Installation Dashboard.
  6. Click Review Pending Changes. Select all products you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.

    Note: In Ops Manager v2.2, the Review Pending Changes page is a Beta feature. If you deploy PKS to Ops Manager v2.2, you can skip this step.

  7. Click Apply Changes.

Step 1: Install PKS

To install PKS, do the following:

  1. Download the product file from Pivotal Network.
  2. Navigate to https://YOUR-OPS-MANAGER-FQDN/ in a browser to log in to the Ops Manager Installation Dashboard.
  3. Click Import a Product to upload the product file.
  4. Under Pivotal Container Service in the left column, click the plus sign to add this product to your staging area.
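
If you prefer to script tile uploads rather than use the browser, the om CLI can perform the same import and staging. The following is a minimal sketch, assuming the om CLI is installed, your Ops Manager credentials are admin/PASSWORD, and the downloaded file name and version shown here are placeholders for the exact file you downloaded:

    # Upload the PKS tile to Ops Manager (equivalent to "Import a Product")
    om --target https://YOUR-OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
      upload-product --product ./pivotal-container-service-1.2.0.pivotal

    # Stage the tile (equivalent to clicking the plus sign)
    om --target https://YOUR-OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
      stage-product --product-name pivotal-container-service --product-version 1.2.0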

Step 2: Configure PKS

Click the orange Pivotal Container Service tile to start the configuration process.

Note: Configuration of NSX-T or Flannel cannot be changed after initial installation and configuration of PKS.

Pivotal Container Service tile on the Ops Manager installation dashboard

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.

  2. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Note: You must select an additional AZ for balancing other jobs before clicking Save, but this selection has no effect in the current version of PKS.

    Assign AZs and Networks pane in Ops Manager

  3. Under Network, select the PKS Management Network linked to the ls-pks-mgmt NSX-T logical switch you created in the Create Networks Page step of Configuring Ops Manager on vSphere with NSX-T Integration. This will provide network placement for the PKS API VM.

  4. Under Service Network, your selection depends on whether you are deploying PKS with NSX-T for the first time or upgrading from a previous PKS version.

    • If you are deploying PKS with NSX-T for the first time, the Service Network field does not apply because PKS creates the service network for you during the installation process. However, the PKS tile requires you to make a selection. Therefore, select the same network you specified in the Network field.
    • If you are upgrading from a previous PKS version, select the Service Network linked to the ls-pks-service NSX-T logical switch that is created by PKS during installation. The service network provides network placement for the already existing on-demand Kubernetes cluster service instances created by the PKS broker.
  5. Click Save.

PKS API

Perform the following steps:

  1. Click PKS API.
  2. Under Certificate to secure the PKS API, provide your own certificate and private key pair. The certificate you enter here should cover the domain that routes to the PKS API VM with TLS termination on the ingress. A command-line sketch for generating a test certificate appears after this procedure.

    (Optional) If you do not have a certificate and private key pair, you can have Ops Manager generate one for you. Perform the following steps:
    1. Select the Generate RSA Certificate link.
    2. Enter the wildcard domain for your API hostname. For example, if your PKS API domain is api.pks.example.com, then enter *.pks.example.com.
    3. Click Generate.
      PKS API certificate generation

      Note: Ops Manager requires a wildcard certificate. If you enter a FQDN when generating the certificate, the PKS installation fails.

  3. Under API Hostname (FQDN), enter a fully qualified domain name (FQDN) to access the PKS API. For example, api.pks.example.com.
  4. Under Worker VM Max in Flight, enter the maximum number of non-canary worker instances to create or resize in parallel within an availability zone.

    This field sets the max_in_flight variable, which limits how many instances of a component can start simultaneously when a cluster is created or resized. The variable defaults to 1, which means that only one component starts at a time.
  5. Click Save.
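
If you want to supply your own certificate and key pair in step 2 rather than use the Generate RSA Certificate link, the sketch below creates a self-signed wildcard certificate with openssl. This is suitable only for test or development environments; the domain *.pks.example.com is a placeholder for your own wildcard domain:

    # Create a private key and a self-signed wildcard certificate valid for one year
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout pks-api.key -out pks-api.crt \
      -subj "/CN=*.pks.example.com"

    # Paste pks-api.crt into the certificate field and pks-api.key into the private key field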

Plans

To activate a plan, perform the following steps:

  1. Click the Plan 1, Plan 2, or Plan 3 tab.

    Note: A plan defines a set of resource types used for deploying clusters. You can configure up to three plans. You must configure Plan 1.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
  3. Under Name, provide a unique name for the plan.
  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using PKS CLI.
  5. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter either 1 or 3.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  6. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, see the Master Node VM Size section of VM Sizing for PKS Clusters.
  7. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.
  8. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by PKS. If you select more than one AZ, PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs.
  9. Under Worker Node Instances, select the default number of Kubernetes worker nodes to provision for each cluster. For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use persistent volumes. For example, if you deploy across three AZs, you should have six worker nodes. For more information about persistent volumes, see Persistent Volumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    Plan pane configuration, part two

  10. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, see the Worker Node VM Number and Size section of VM Sizing for PKS Clusters.

    Note: If you install PKS v1.1.5 or later in an NSX-T environment, we recommend that you select a Worker VM Type with a minimum disk size of 16 GB. The disk space provided by the default “medium” Worker VM Type is insufficient for PKS v1.1.5 or later with NSX-T.

  11. Under Worker Persistent Disk Type, select the size of the persistent disk for the Kubernetes worker node VMs.

  12. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. PKS deploys worker nodes equally across the AZs you select.

  13. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  14. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator. For more information, see Adding Custom Workloads.

  15. (Optional) To allow users to create pods with privileged containers, select the Enable Privileged Containers - Use with caution option. For more information, see Pods in the Kubernetes documentation.

  16. (Optional) To disable the admission controller, select the Disable DenyEscalatingExec checkbox. If you select this option, clusters in this plan can create security vulnerabilities that may impact other tiles. Use this feature with caution.

  17. Click Save.

To deactivate a plan, perform the following steps:

  1. Click the Plan 1, Plan 2, or Plan 3 tab.
  2. Select Plan Inactive.
  3. Click Save.
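
Once a plan is active and you have applied changes, developers can reference it by name when creating clusters with the PKS CLI. The following is a minimal sketch, assuming a plan named small and an external hostname my-cluster.example.com that you will point at the cluster load balancer (both placeholders):

    # Create a cluster from the "small" plan with three worker nodes
    pks create-cluster my-cluster \
      --external-hostname my-cluster.example.com \
      --plan small \
      --num-nodes 3

    # Watch provisioning progress
    pks cluster my-cluster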

Kubernetes Cloud Provider

In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see Create the Master Node Service Account in Preparing vSphere Before Deploying PKS.

To configure your Kubernetes cloud provider settings, follow the procedure below:

  1. Click Kubernetes Cloud Provider.
  2. Under Choose your IaaS, select vSphere.
  3. Ensure the values in the following procedure match those in the vCenter Config section of the Ops Manager tile.

    vSphere pane configuration

    1. Enter your vCenter Master Credentials. Enter the username using the format user@CF-EXAMPLE.com. For more information about the master node service account, see Preparing to Deploy PKS on vSphere.
    2. Enter your vCenter Host. For example, vcenter.CF-EXAMPLE.com.
    3. Enter your Datacenter Name. For example, CF-EXAMPLE-dc.
    4. Enter your Datastore Name. For example, CF-EXAMPLE-ds.
    5. Enter the Stored VM Folder so that the persistent stores know where to find the VMs. To retrieve the name of the folder, navigate to your BOSH Director tile, click vCenter Config, and locate the value for VM Folder. The default folder name is pcf_vms.

      Note: We recommend using a shared datastore for multi-AZ and multi-cluster environments.

  4. Click Save.
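
To cross-check the vCenter host, datacenter, and datastore values you entered above against what vCenter actually reports, you can use the govc CLI. This is a sketch, assuming govc is installed and using the example names from this procedure:

    # Point govc at the same vCenter you entered in the tile
    export GOVC_URL='vcenter.CF-EXAMPLE.com'
    export GOVC_USERNAME='user@CF-EXAMPLE.com'
    export GOVC_PASSWORD='PASSWORD'
    export GOVC_DATACENTER='CF-EXAMPLE-dc'
    export GOVC_INSECURE=1    # only if vCenter presents a self-signed certificate

    govc about                          # confirms connectivity and credentials
    govc datacenter.info CF-EXAMPLE-dc  # confirms the datacenter name exists as entered
    govc datastore.info CF-EXAMPLE-ds   # confirms the datastore name exists as entered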

(Optional) Logging

You can designate an external syslog endpoint for PKS component and cluster log messages.

To specify the destination for PKS log messages, do the following:

  1. Click Logging.
  2. To enable syslog forwarding, select Yes.

    Enable syslog forwarding

  3. Under Address, enter the destination syslog endpoint.

  4. Under Port, enter the destination syslog port.

  5. Select a transport protocol for log forwarding.

  6. (Optional) Pivotal strongly recommends that you enable TLS encryption when forwarding logs as they may contain sensitive information. For example, these logs may contain cloud provider credentials. To enable TLS, perform the following steps:

    1. Under Permitted Peer, provide the accepted fingerprint (SHA1) or name of the remote peer. For example, *.YOUR-LOGGING-SYSTEM.com.
    2. Under TLS Certificate, provide a TLS certificate for the destination syslog endpoint.

      Note: You do not need to provide a new certificate if the TLS certificate for the destination syslog endpoint is signed by a Certificate Authority (CA) in your BOSH certificate store.

  7. You can manage logs using VMware vRealize Log Insight (vRLI). The integration pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod stdout and stderr.

    Note: Before you configure the vRLI integration, you must have a vRLI license and vRLI must be installed, running, and available in your environment. You need to provide the live instance address during configuration. For instructions and additional information, see the vRealize Log Insight documentation.

    By default, vRLI logging is disabled. To enable and configure vRLI logging, under Enable VMware vRealize Log Insight Integration?, select Yes and then perform the following steps:

    1. Under Host, enter the IP address or FQDN of the vRLI host.
    2. (Optional) Select the Enable SSL? checkbox to encrypt the logs being sent to vRLI using SSL.
    3. Choose one of the following SSL certificate validation options:
      • To skip certificate validation for the vRLI host, select the Disable SSL certificate validation checkbox. Select this option if you are using a self-signed certificate in order to simplify setup for a development or test environment.

        Note: Disabling certificate validation is not recommended for production environments.

      • To enable certificate validation for the vRLI host, clear the Disable SSL certificate validation checkbox.
    4. (Optional) If your vRLI certificate is not signed by a trusted CA root or other well known certificate, enter the certificate in the CA certificate field. Locate the PEM of the CA used to sign the vRLI certificate, copy the contents of the certificate file, and paste them into the field. Certificates must be in PEM-encoded format.
    5. Under Rate limiting, enter a time in milliseconds to change the rate at which logs are sent to the vRLI host. The rate limit specifies the minimum time between messages before the fluentd agent begins to drop messages. The default value (0) means the rate is not limited, which suffices for many deployments.

      Note: If your deployment is generating a high volume of logs, you can increase this value to limit network traffic. Consider starting with a lower number, such as 10, and tuning to optimize for your deployment. A large number might result in dropping too many log entries.

  8. To enable clusters to drain app logs to sinks using syslog://, select the Enable Sink Resources checkbox. For more information about using sink resources, see Creating Sink Resources.
    Enable sink resource checkbox

  9. Click Save. This configuration applies to any clusters created after you have saved these configuration settings and clicked Apply Changes.

    Note: The PKS tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.
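
Before saving, you can confirm that the syslog endpoint you entered is reachable from your network. The following is a quick sketch, assuming a placeholder endpoint syslog.example.com listening on 514 for plain TCP or 6514 for TLS:

    # Plain TCP syslog: check that the port is open
    nc -vz syslog.example.com 514

    # TLS syslog: verify the endpoint presents a certificate your CA can validate
    openssl s_client -connect syslog.example.com:6514 -servername syslog.example.com </dev/null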

Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select NSX-T.
    1. For NSX Manager hostname, enter the hostname or IP address of your NSX Manager.
    2. For NSX Manager Super User Principal Identity Certificate, copy and paste the contents and private key of the Principal Identity certificate you created in Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key.
    3. (Optional) For NSX Manager CA Cert, copy and paste the contents of the NSX Manager CA certificate you created in Generating and Registering the NSX Manager Certificate. Use this certificate and key to connect to the NSX Manager.
    4. The Disable SSL certificate verification checkbox is cleared by default. To disable TLS verification, select the checkbox. You may want to disable TLS verification if you did not enter a CA certificate, or if your CA certificate is self-signed.
    5. If you are using a NAT deployment topology, leave the NAT mode checkbox selected. If you are using a No-NAT topology, clear this checkbox. For more information, see Deployment Topologies.
    6. Enter the following IP Block settings (a lookup sketch using the NSX Manager API appears after this procedure):
      • Pods IP Block ID: Enter the UUID of the IP block to be used for Kubernetes pods. PKS allocates IP addresses for the pods when they are created in Kubernetes. Each time a namespace is created in Kubernetes, a subnet from this IP block is allocated. The current subnet size that is created is /24, which means a maximum of 256 pods can be created per namespace.
      • Nodes IP Block ID: Enter the UUID of the IP block to be used for Kubernetes nodes. PKS allocates IP addresses for the nodes when they are created in Kubernetes. The node networks are created on a separate IP address space from the pod networks. The current subnet size that is created is /24, which means a maximum of 256 nodes can be created per cluster. For more information, including sizes and the IP blocks to avoid using, see Plan IP Blocks in Preparing NSX-T Before Deploying PKS.
    7. For T0 Router ID, enter the t0-pks T0 router UUID. Locate this value in the NSX-T UI router overview.
    8. For Floating IP Pool ID, enter the ip-pool-vips ID that you created for load balancer VIPs. For more information, see Plan Network CIDRs. PKS uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane.
    9. For Nodes DNS, enter one or more Domain Name Servers used by the Kubernetes nodes.
    10. For vSphere Cluster Names, enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters. The NSX-T precheck errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: cluster1,cluster2,cluster3.
  3. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters.

    Production environments can deny direct access to public Internet services and between internal services by placing an HTTP or HTTPS proxy in the network path between Kubernetes nodes and those services.

    If your environment includes HTTP or HTTPS proxies, configuring PKS to use these proxies allows PKS-deployed Kubernetes nodes to access public Internet services and other internal services. Follow the steps below to configure a global proxy for all outgoing HTTP/HTTPS traffic from your Kubernetes clusters:
    1. Under HTTP/HTTPS proxy, select Enabled.
    2. Under HTTP Proxy URL, enter the URL of your HTTP/HTTPS proxy endpoint. For example, http://myproxy.com:1234.
    3. (Optional) If your proxy uses basic authentication, enter the username and password in either HTTP Proxy Credentials or HTTPS Proxy Credentials.
    4. Under No Proxy, enter the service network CIDR where your PKS cluster is deployed. List any additional IP addresses that should bypass the proxy.

      Note: By default, the .internal, 10.100.0.0/8, and 10.200.0.0/8 IP address ranges are not proxied. This allows internal PKS communication.

  4. Under Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent), ignore the Enable outbound internet access checkbox.
  5. Click Save.
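
If you prefer not to copy UUIDs out of the NSX-T UI, you can retrieve the IP block, floating IP pool, and T0 router IDs from the NSX Manager API. The following is a sketch, assuming the NSX Manager admin credentials and the object names used in this guide; the jq filters are illustrative only:

    NSX='https://NSX-MANAGER-HOSTNAME'

    # Pods and Nodes IP block UUIDs
    curl -ks -u 'admin:PASSWORD' "$NSX/api/v1/pools/ip-blocks" | jq '.results[] | {display_name, id}'

    # Floating IP pool ID (for example, ip-pool-vips)
    curl -ks -u 'admin:PASSWORD' "$NSX/api/v1/pools/ip-pools" | jq '.results[] | {display_name, id}'

    # T0 router UUID (for example, t0-pks)
    curl -ks -u 'admin:PASSWORD' "$NSX/api/v1/logical-routers?router_type=TIER0" | jq '.results[] | {display_name, id}'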

UAA

To configure the UAA server, do the following:

  1. Click UAA.
  2. Under PKS CLI Access Token Lifetime, enter a time in seconds for the PKS CLI access token lifetime.
    UAA pane configuration
  3. Under PKS CLI Refresh Token Lifetime, enter a time in seconds for the PKS CLI refresh token lifetime.
  4. Select one of the following options:
    • To use an internal user account store for UAA, select Internal UAA. Click Save and continue to (Optional) Monitoring.
    • To use an external user account store for UAA, select LDAP Server and continue to Configure LDAP as an Identity Provider.

      Note: Selecting LDAP Server allows admin users to give cluster access to groups of users. For more information about performing this procedure, see Grant Cluster Access to a Group in Managing Users in PKS with UAA.

Configure LDAP as an Identity Provider

To integrate UAA with one or more LDAP servers, configure PKS with your LDAP endpoint information as follows:

  1. Under UAA, select LDAP Server.
    LDAP Server configuration pane

  2. For Server URL, enter the URLs that point to your LDAP server. If you have multiple LDAP servers, separate their URLs with spaces. Each URL must include one of the following protocols:

    • ldap://: Use this protocol if your LDAP server uses an unencrypted connection.
    • ldaps://: Use this protocol if your LDAP server uses SSL for an encrypted connection. To support an encrypted connection, the LDAP server must hold a trusted certificate or you must import a trusted certificate to the JVM truststore.
  3. For LDAP Credentials, enter the LDAP Distinguished Name (DN) and password for binding to the LDAP server. For example, cn=administrator,ou=Users,dc=example,dc=com. If the bind user belongs to a different search base, you must use the full DN.

    Note: We recommend that you provide LDAP credentials that grant read-only permissions on the LDAP search base and the LDAP group search base.

  4. For User Search Base, enter the location in the LDAP directory tree where LDAP user search begins. The LDAP search base typically matches your domain name.

    For example, a domain named cloud.example.com may use ou=Users,dc=example,dc=com as its LDAP user search base.

  5. For User Search Filter, enter a string to use for LDAP user search criteria. The search criteria allows LDAP to perform more effective and efficient searches. For example, the standard LDAP search filter cn=Smith returns all objects with a common name equal to Smith.

    In the LDAP search filter string that you use to configure PKS, use {0} instead of the username. For example, use cn={0} to return all LDAP objects with the same common name as the username.

    In addition to cn, other common attributes are mail, uid and, in the case of Active Directory, sAMAccountName.

    Note: For information about testing and troubleshooting your LDAP search filters, see Configuring LDAP Integration with Pivotal Cloud Foundry.

  6. For Group Search Base, enter the location in the LDAP directory tree where the LDAP group search begins.

    For example, a domain named cloud.example.com may use ou=Groups,dc=example,dc=com as its LDAP group search base.

    Follow the instructions in the Grant PKS Access to an External LDAP Group section of Managing Users in PKS with UAA to map the groups under this search base to roles in PKS.

  7. For Group Search Filter, enter a string that defines LDAP group search criteria. The standard value is member={0}.

  8. For Server SSL Cert, paste in the root certificate from your CA certificate or your self-signed certificate.
    LDAP Server configuration pane

  9. For Server SSL Cert AltName, do one of the following:

    • If you are using ldaps:// with a self-signed certificate, enter a Subject Alternative Name (SAN) for your certificate.
    • If you are not using ldaps:// with a self-signed certificate, leave this field blank.
  10. For First Name Attribute, enter the attribute name in your LDAP directory that contains user first names. For example, cn.

  11. For Last Name Attribute, enter the attribute name in your LDAP directory that contains user last names. For example, sn.

  12. For Email Attribute, enter the attribute name in your LDAP directory that contains user email addresses. For example, mail.

  13. For Email Domain(s), enter a comma-separated list of the email domains for external users who can receive invitations to Apps Manager.

  14. For LDAP Referrals, choose how UAA handles LDAP server referrals to other user stores. UAA can follow the external referrals, ignore them without returning errors, or generate an error for each external referral and abort the authentication.

  15. For External Groups Whitelist, enter a comma-separated list of group patterns which need to be populated in the user’s id_token. For further information on accepted patterns, see the description of config.externalGroupsWhitelist in the OAuth/OIDC Identity Provider documentation.

    Note: For users who are members of many groups, wide pattern queries can cause the id_token to exceed the size supported by web servers when it is sent as a Bearer token in the Authentication header.

    External Groups Whitelist field

  16. Click Save.
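
To sanity-check the User Search Base and User Search Filter from steps 4 and 5 before saving, you can run an equivalent query with the ldapsearch tool. The following is a sketch, assuming the example DN and search base used above and a placeholder test username jsmith:

    # Bind with the same credentials you entered in LDAP Credentials and search for one user
    ldapsearch -H ldaps://ldap.example.com \
      -D 'cn=administrator,ou=Users,dc=example,dc=com' -W \
      -b 'ou=Users,dc=example,dc=com' \
      '(cn=jsmith)' cn mail sn

    # If this returns the expected entry, the UAA filter cn={0} should resolve the same user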

(Optional) Configure OpenID Connect

You can use OpenID Connect (OIDC) to instruct Kubernetes to verify end-user identities based on authentication performed by an authorization server, such as UAA.

To configure PKS to use OIDC, select Enable UAA as OIDC provider. With OIDC enabled, Admin Users can grant cluster-wide access to Kubernetes end users.

OIDC configuration checkbox

For more information about configuring OIDC, see the following descriptions:

  • OIDC disabled: If you do not enable OIDC, Kubernetes authenticates users against its internal user management system.
  • OIDC enabled: If you enable OIDC, Kubernetes uses the authentication mechanism that you selected in UAA:
    • If you selected Internal UAA, Kubernetes authenticates users against the internal UAA authentication mechanism.
    • If you selected LDAP Server, Kubernetes authenticates users against the LDAP server.

For additional information on getting credentials with OIDC configured, see Retrieve Cluster Credentials in Retrieving Cluster Credentials and Configuration.

Note: When you enable OIDC, existing PKS-provisioned Kubernetes clusters are upgraded to use OIDC. This invalidates your kubeconfig files. You must regenerate the files for all clusters.
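
To regenerate the kubeconfig for an existing cluster after enabling OIDC, developers can fetch fresh credentials with the PKS CLI. The following is a sketch, assuming a cluster named my-cluster:

    # Re-fetch cluster credentials and update the local kubeconfig
    pks get-credentials my-cluster

    # Confirm kubectl is now using the regenerated context
    kubectl config current-context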

(Optional) Monitoring

You can monitor Kubernetes clusters and pods metrics externally using the integration with Wavefront by VMware.

Note: Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. You provide your Wavefront access token during configuration and enable errands to create alerts. For additional information, see Pivotal Container Service Integration Details in the Wavefront documentation.

By default, monitoring is disabled. To enable and configure Wavefront monitoring, do the following:

  1. Under Wavefront Integration, select Yes.
    Monitoring pane configuration
  2. Under Wavefront URL, enter the URL of your Wavefront subscription. For example, https://try.wavefront.com/api.
  3. Under Wavefront Access Token, enter the API token for your Wavefront subscription.
  4. To configure Wavefront to send alerts by email, enter email addresses or Wavefront Target IDs separated by commas under Wavefront Alert Recipient. For example: user@example.com,Wavefront_TargetID. To create alerts, you must enable errands.
  5. In the Errands tab, enable Create pre-defined Wavefront alerts errand and Delete pre-defined Wavefront alerts errand.
  6. Click Save. Your settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes.

    Note: The PKS tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.

Usage Data

VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry) provide VMware and Pivotal with information that enables the companies to improve their products and services, fix problems, and advise you on how best to deploy and use our products. As part of the CEIP and Telemetry, VMware and Pivotal collect technical information about your organization’s use of the Pivotal Container Service (“PKS”) on a regular basis. Since PKS is jointly developed and sold by VMware and Pivotal, we will share this information with one another. Information collected under CEIP or Telemetry does not personally identify any individual.

Regardless of your selection in the Usage Data pane, a small amount of data is sent from Cloud Foundry Container Runtime (CFCR) to the PKS tile. However, that data is not shared externally.

To configure the Usage Data pane:

  1. Select the Usage Data side-tab.
  2. Read the Usage Data description.
  3. Make your selection.
    1. To join the program, select Yes, I want to join the CEIP and Telemetry Program for PKS.
    2. To decline joining the program, select No, I do not want to join the CEIP and Telemetry Program for PKS.
  4. Click Save.

Note: If you join the CEIP and Telemetry Program for PKS, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph-prd on port 443.
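
If you join the program, you can verify that the telemetry endpoint is reachable through your firewall with a quick connectivity check:

    # Confirm outbound access to the telemetry endpoint on port 443
    nc -vz vcsa.vmware.com 443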

Errands

Errands are scripts that run at designated points during an installation.

To configure when post-deploy and pre-delete errands for PKS are run, make a selection in the dropdown next to the errand.

WARNING: You must enable the NSX-T Validation errand to verify and tag required NSX-T objects.

Errand configuration pane

For more information about errands and their configuration state, see Managing Errands in Ops Manager.

WARNING: Because PKS uses floating stemcells, updating the PKS tile with a new stemcell triggers the rolling of every VM in each cluster. Also, updating other product tiles in your deployment with a new stemcell causes the PKS tile to roll VMs. This rolling is enabled by the Upgrade all clusters errand. We recommend that you keep this errand turned on because automatic rolling of VMs ensures that all deployed cluster VMs are patched. However, automatic rolling can cause downtime in your deployment.

If you upgrade PKS from 1.0.x to 1.1, you must enable the Upgrade all clusters errand. This ensures that existing clusters can perform resize or delete actions after the upgrade.
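
Errands normally run when you click Apply Changes, but you can also list and run them manually with the BOSH CLI if you have access to the Ops Manager Director. The following is a sketch, where PKS-DEPLOYMENT-NAME is the pivotal-container-service-<GUID> deployment shown by bosh deployments and ERRAND-NAME is one of the errands it lists:

    # Find the PKS deployment name
    bosh deployments

    # List the errands that the PKS tile provides
    bosh -d PKS-DEPLOYMENT-NAME errands

    # Run a single errand on demand, for example after an upgrade
    bosh -d PKS-DEPLOYMENT-NAME run-errand ERRAND-NAME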

Resource Config

To modify the resource usage of PKS, click Resource Config and edit the Pivotal Container Service job.

Resource pane configuration

Note: If you experience timeouts or slowness when interacting with the PKS API, select a VM Type with greater CPU and memory resources for the Pivotal Container Service job.

Step 3: Apply Changes

After configuring the PKS tile, follow the steps below to deploy the tile:

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.

    Note: In Ops Manager v2.2, the Review Pending Changes page is a Beta feature. If you deploy PKS to Ops Manager v2.2, you can skip this step.

  3. Click Apply Changes.
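
If you drive Ops Manager from scripts, you can trigger the same deploy with the om CLI. This is a sketch, assuming the same credentials used in Step 1:

    # Apply all pending changes (equivalent to clicking Apply Changes in the dashboard)
    om --target https://YOUR-OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
      apply-changes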

Next Steps

After installing PKS on vSphere with NSX-T integration, you may want to do one or more of the following:

Install the PKS and Kubernetes CLIs

The PKS and Kubernetes CLIs help you interact with your PKS-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, see Installing the PKS CLI and Installing the Kubernetes CLI.

Share the PKS API Endpoint

You must share the PKS API endpoint to allow your organization to use the API to create, update, and delete clusters. For more information, see Creating Clusters.

  1. When the installation is complete, retrieve the PKS endpoint by performing the following steps:
    1. From the Ops Manager Installation Dashboard, click the Pivotal Container Service tile.
    2. Click the Status tab and record the IP address assigned to the Pivotal Container Service job.
  2. Create a DNAT rule on the t1-pks-mgmt T1 to map an external IP from the PKS MANAGEMENT CIDR to the PKS endpoint. For example, a DNAT rule that maps 10.172.1.4 to 172.31.0.4, where 172.31.0.4 is the PKS endpoint IP address on the ls-pks-mgmt NSX-T Logical Switch.

    Note: Ensure that you have no overlapping NAT rules. If your NAT rules overlap, you cannot reach Ops Manager from VMs in the vCenter network.

Developers should use the DNAT IP address when logging in with the PKS CLI. For more information, see Using PKS.
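
For example, with the DNAT rule above in place and a DNS record for api.pks.example.com resolving to the external IP (for example, 10.172.1.4), a developer logs in with the PKS CLI as follows; the username, password, and certificate path are placeholders:

    # Log in to the PKS API through the DNAT-mapped address
    pks login -a api.pks.example.com -u USERNAME -p PASSWORD --ca-cert /path/to/pks-api-ca.crt

    # Or, in a test environment without a trusted certificate chain
    pks login -a api.pks.example.com -u USERNAME -p PASSWORD -k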

Configure PKS API Access

Follow the procedures in Configuring PKS API Access.

Configure Authentication for PKS

Configure authentication for PKS using User Account and Authentication (UAA). For information, see Managing Users in PKS with UAA.

Integrate VMware Harbor with PKS

To integrate VMware Harbor Registry with PKS to store and manage container images, see Integrating VMware Harbor Registry with PKS.


Please send any feedback you have to pks-feedback@pivotal.io.
