Installing Enterprise PKS on vSphere with NSX-T

This topic describes how to install and configure Enterprise Pivotal Container Service (Enterprise PKS) on vSphere with NSX-T integration.

Prerequisites

Before you begin this procedure, ensure that you have successfully completed all preceding steps for installing Enterprise PKS on vSphere with NSX-T, including:

Step 1: Install Enterprise PKS

To install Enterprise PKS, do the following:

  1. Download the product file from Pivotal Network.
  2. Navigate to https://YOUR-OPS-MANAGER-FQDN/ in a browser to log in to the Ops Manager Installation Dashboard.
  3. Click Import a Product to upload the product file.
  4. Under Enterprise PKS in the left column, click the plus sign to add this product to your staging area.

Step 2: Configure Enterprise PKS

Click the orange Enterprise PKS tile to start the configuration process.

Note: Configuration of NSX-T or Flannel cannot be changed after initial installation and configuration of Enterprise PKS.

PKS tile on the Ops Manager installation dashboard

WARNING: When you configure the Enterprise PKS tile, do not use spaces in any field entries. This includes spaces between characters as well as leading and trailing spaces. If you use a space in any field entry, the deployment of Enterprise PKS fails.

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.

  2. Under Place singleton jobs in, select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Assign AZs and Networks pane in Ops Manager

  3. Under Balance other jobs in, select the AZ for balancing other jobs.

    Note: You must specify the Balance other jobs in AZ, but the selection has no effect in the current version of Enterprise PKS.

  4. Under Network, select the PKS Management Network linked to the ls-pks-mgmt NSX-T logical switch you created in the Create Networks Page step of Configuring BOSH Director with NSX-T for Enterprise PKS. This will provide network placement for the PKS API VM.

  5. Under Service Network, your selection depends on whether you are installing a new Enterprise PKS deployment or upgrading from a previous version of Enterprise PKS.

    • If you are deploying Enterprise PKS with NSX-T for the first time, select the PKS Management Network that you specified in the Network field. You do not need to create or define a service network because Enterprise PKS creates the service network for you during the installation process.
    • If you are upgrading from a previous version of Enterprise PKS, then select the Service Network linked to the ls-pks-service NSX-T logical switch that Enterprise PKS created for you during installation. The service network provides network placement for existing on-demand Kubernetes cluster service instances that were created by the Enterprise PKS broker.
  6. Click Save.

PKS API

Perform the following steps:

  1. Click PKS API.

  2. Under Certificate to secure the PKS API, provide your own certificate and private key pair.
    PKS API pane configuration
    The certificate that you supply should cover the domain that routes to the PKS API VM with TLS termination on the ingress.

    If you do not have a certificate and private key pair, Ops Manager can generate one for you. To generate a certificate, do the following:

    1. Click Change.
    2. Click Generate RSA Certificate.
    3. Enter the domain for your API hostname. This can be a standard FQDN or a wildcard domain.
    4. Click Generate.
      PKS API certificate generation
  3. Under API Hostname (FQDN), enter the FQDN that you registered to point to the PKS API load balancer, such as api.pks.example.com. To retrieve the public IP address or FQDN of the PKS API load balancer, log in to your IaaS console.

  4. Under Worker VM Max in Flight, enter the maximum number of non-canary worker instances to create or resize in parallel within an availability zone.

    This field sets the max_in_flight variable value. When you create or resize a cluster, the max_in_flight value limits the number of component instances that can be created or started simultaneously. By default, max_in_flight is set to 4, which means that up to four component instances can be created or started at a time.

  5. Click Save.

Plans

A plan defines a set of resource types used for deploying a cluster.

You must first activate and configure Plan 1, and afterwards you can optionally activate Plan 2 through Plan 10.

To activate and configure a plan, perform the following steps:

  1. Click the plan that you want to activate.

    Note: Plans 11, 12, and 13 support only Windows worker-based Kubernetes clusters on vSphere with Flannel.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
    Plan pane configuration
  3. Under Name, provide a unique name for the plan.
  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
  5. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter 1, 3, or 5.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Enterprise PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  6. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the Master Node VM Size section of VM Sizing for Enterprise PKS Clusters.

  7. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.

  8. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by Enterprise PKS. If you select more than one AZ, Enterprise PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs. If you are using multiple masters, Enterprise PKS deploys the master and worker VMs across the AZs in round-robin fashion.

  9. Under Maximum number of workers on a cluster, set the maximum number of Kubernetes worker node VMs that Enterprise PKS can deploy for each cluster. Enter any whole number in this field.
    Plan pane configuration, part two

  10. Under Worker Node Instances, select the default number of Kubernetes worker nodes to provision for each cluster.

    If the user creating a cluster with the PKS CLI does not specify a number of worker nodes, the cluster is deployed with the default number set in this field. This value cannot be greater than the maximum worker node value you set in the previous field. For more information about creating clusters, see Creating Clusters.

    For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see PersistentVolumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ is also recommended for stateless workloads.

    If you later reconfigure the plan to adjust the default number of worker nodes, the existing clusters that have been created from that plan are not automatically upgraded with the new default number of worker nodes.

  11. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see the Worker Node VM Number and Size section of VM Sizing for Enterprise PKS Clusters.

    Note: If you install Enterprise PKS in an NSX-T environment, we recommend that you select a Worker VM Type with a minimum disk size of 16 GB. The disk space provided by the default medium Worker VM Type is insufficient for Enterprise PKS with NSX-T.

  12. Under Worker Persistent Disk Type, select the size of the persistent disk for the Kubernetes worker node VMs.

  13. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. Enterprise PKS deploys worker nodes equally across the AZs you select.

  14. Under Kubelet customization - system-reserved, enter resource values that Kubelet can use to reserve resources for system daemons. For example, memory=250Mi, cpu=150m. For more information about system-reserved values, see the Kubernetes documentation.

  15. Under Kubelet customization - eviction-hard, enter threshold limits that Kubelet can use to evict pods when they exceed the limit. Enter limits in the format EVICTION-SIGNAL=QUANTITY. For example, memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%. For more information about eviction thresholds, see the Kubernetes documentation.

    WARNING: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.

  16. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  17. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator. For more information, see Adding Custom Workloads.
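    For example, the following is a minimal sketch of an add-on that creates a hypothetical namespace and ConfigMap in every cluster deployed from this plan; the resource names are illustrative only, and --- separates the manifests:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: example-add-on            # hypothetical namespace
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config            # hypothetical ConfigMap
      namespace: example-add-on
    data:
      environment: test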

  18. (Optional) To allow users to create pods with privileged containers, select the Allow Privileged option. For more information, see Pods in the Kubernetes documentation.

    Note: Enabling the Allow Privileged option means that all containers in the cluster can run in privileged mode. Pod Security Policy (PSP) provides a privileged parameter that can be used to enable or disable pods running in privileged mode. As a best practice, if you enable Allow Privileged, define PSPs to limit which pods run in privileged mode. If you implement PSPs for privileged pods, you must enable the Allow Privileged option.
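    For example, the following is a minimal PodSecurityPolicy sketch that disallows privileged containers; the policy name is hypothetical, and you must still authorize the policy for users or service accounts through RBAC:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: example-restricted        # hypothetical policy name
    spec:
      privileged: false               # disallow privileged containers
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
      - '*'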

  19. (Optional) Enable or disable one or more admission controller plugins: PodSecurityPolicy, DenyEscalatingExec, and SecurityContextDeny. See Admission Plugins for more information.

  20. (Optional) Under Node Drain Timeout(mins), enter the timeout in minutes for the node to drain pods. If you set this value to 0, the node drain does not terminate.
    Node Drain Timeout fields

  21. (Optional) Under Pod Shutdown Grace Period (seconds), enter a timeout in seconds for the node to wait before it forces the pod to terminate. If you set this value to -1, the default timeout is set to the one specified by the pod.

  22. (Optional) To configure when the node drains, enable the following:

    • Force node to drain even if it has running pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.
    • Force node to drain even if it has running DaemonSet-managed pods.
    • Force node to drain even if it has running pods using emptyDir.
    • Force node to drain even if pods are still running after timeout.

    Warning: If you select Force node to drain even if pods are still running after timeout, the node kills all running workloads on pods. Before enabling this configuration, set Node Drain Timeout to a value greater than 0.

    For more information about configuring default node drain behavior, see Worker Node Hangs Indefinitely in Troubleshooting.

  23. Click Save.

To deactivate a plan, perform the following steps:

  1. Click the plan that you want to deactivate.
  2. Select Inactive.
  3. Click Save.

Kubernetes Cloud Provider

In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see Create the Master Node Service Account in Preparing vSphere Before Deploying Enterprise PKS.

To configure your Kubernetes cloud provider settings, follow the procedure below:

  1. Click Kubernetes Cloud Provider.
  2. Under Choose your IaaS, select vSphere.
  3. Ensure the values in the following procedure match those in the vCenter Config section of the Ops Manager tile.

    vSphere pane configuration
    1. Enter your vCenter Master Credentials. Enter the username using the format user@example.com. For more information about the master node service account, see Preparing vSphere Before Deploying Enterprise PKS.
    2. Enter your vCenter Host. For example, vcenter-example.com.
    3. Enter your Datacenter Name. For example, example-dc.
    4. Enter your Datastore Name. For example, example-ds. Populate Datastore Name with the Persistent Datastore name configured in your BOSH Director tile under vCenter Config > Persistent Datastore Names. The Datastore Name field should contain a single Persistent datastore.

      Note: The Datastore Name is the default datastore used if the Kubernetes cluster StorageClass does not define a StoragePolicy. Do not enter a datastore that is a list of BOSH Job/VMDK datastores. For more information, see PersistentVolume Storage Options on vSphere.

      Note: For multi-AZ and multi-cluster environments, your Datastore Name should be a shared Persistent datastore available to each vSphere cluster. Do not enter a datastore that is local to a single cluster. For more information, see PersistentVolume Storage Options on vSphere.

    5. Enter the Stored VM Folder so that the persistent stores know where to find the VMs. To retrieve the name of the folder, navigate to your BOSH Director tile, click vCenter Config, and locate the value for VM Folder. The default folder name is pcf_vms.
    6. Click Save.

Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select NSX-T. NSX-T Networking configuration pane in PKS tile
    1. For NSX Manager hostname, enter the hostname or IP address of your NSX Manager.
    2. For NSX Manager Super User Principal Identity Certificate, copy and paste the contents and private key of the Principal Identity certificate you created in Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key.
    3. For NSX Manager CA Cert, copy and paste the contents of the NSX Manager CA certificate you created in Generating and Registering the NSX Manager Certificate. Use this certificate and key to connect to the NSX Manager.
    4. The Disable SSL certificate verification checkbox is not selected by default. To disable TLS verification, select the checkbox. You might want to disable TLS verification if you did not enter a CA certificate or if your CA certificate is self-signed.

      Note: The NSX Manager CA Cert field and the Disable SSL certificate verification option are intended to be mutually exclusive. If you disable SSL certificate verification, leave the CA certificate field blank. If you enter a certificate in the NSX Manager CA Cert field, do not disable SSL certificate verification. If you populate the certificate field and disable certificate validation, insecure mode takes precedence.

    5. If you are using a NAT deployment topology, leave the NAT mode checkbox selected. If you are using a No-NAT topology, clear this checkbox. For more information, see Deployment Topologies.
    6. Enter the following IP Block settings:
      NSX-T Networking configuration pane in Ops Manager
      • Pods IP Block ID: Enter the UUID of the IP block to be used for Kubernetes pods. Enterprise PKS allocates IP addresses for the pods when they are created in Kubernetes. Each time a namespace is created in Kubernetes, a subnet from this IP block is allocated. The current subnet size that is created is /24, which means a maximum of 256 pods can be created per namespace.
      • Nodes IP Block ID: Enter the UUID of the IP block to be used for Kubernetes nodes. Enterprise PKS allocates IP addresses for the nodes when they are created in Kubernetes. The node networks are created on a separate IP address space from the pod networks. The current subnet size that is created is /24, which means a maximum of 256 nodes can be created per cluster. For more information, including sizes and the IP blocks to avoid using, see Plan IP Blocks in Preparing NSX-T Before Deploying Enterprise PKS.
    7. For T0 Router ID, enter the t0-pks T0 router UUID. Locate this value in the NSX-T UI router overview.
    8. For Floating IP Pool ID, enter the ip-pool-vips ID that you created for load balancer VIPs. For more information, see Plan Network CIDRs. Enterprise PKS uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane.
    9. For Nodes DNS, enter one or more Domain Name Servers used by the Kubernetes nodes.
    10. For vSphere Cluster Names, enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters. The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: cluster1,cluster2,cluster3.
    11. For Kubernetes Service Network CIDR Range, specify an IP address and subnet size depending on the number of Kubernetes services that you plan to deploy within a single Kubernetes cluster, for example: 10.100.200.0/24. The IP address used here is internal to the cluster and can be anything, such as 10.100.200.0. A /24 subnet provides 256 IPs. If you have a cluster that requires more than 256 IPs, define a larger subnet, such as /20.
  3. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters and the PKS API server. See Using Proxies with Enterprise PKS on NSX-T for instructions on how to enable a proxy.
  4. Under Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent), ignore the Enable outbound internet access checkbox.
  5. Click Save.

UAA

To configure the UAA server, do the following:

  1. Click UAA.
  2. Under PKS API Access Token Lifetime, enter a time in seconds for the PKS API access token lifetime. This field defaults to 600.
    UAA pane configuration
  3. Under PKS API Refresh Token Lifetime, enter a time in seconds for the PKS API refresh token lifetime. This field defaults to 21600.
  4. Under PKS Cluster Access Token Lifetime, enter a time in seconds for the cluster access token lifetime. This field defaults to 600.
  5. Under PKS Cluster Refresh Token Lifetime, enter a time in seconds for the cluster refresh token lifetime. This field defaults to 21600.

    Note: Pivotal recommends using the default UAA token timeout values. By default, access tokens expire after ten minutes and refresh tokens expire after six hours. If you want to customize your token timeout values, see Token Management in UAA Overview.

  6. Under Configure created clusters to use UAA as the OIDC provider, select Enabled or Disabled. If you select Enabled, Kubernetes verifies end-user identities based on authentication executed by UAA. For more information, see the descriptions below.

    • Disabled: If you do not enable UAA as the OpenID Connect (OIDC) provider, Kubernetes authenticates users against its internal user management system.
    • Enabled: If you enable UAA as the OIDC provider, Kubernetes authenticates users as follows:
      • If you select Internal UAA in the next step, Kubernetes authenticates users against the internal UAA authentication mechanism.
      • If you select LDAP Server in the next step, Kubernetes authenticates users against the LDAP server.
      • If you select SAML Identity Provider in the next step, Kubernetes authenticates users against the SAML identity provider.

    Note: When you enable UAA as the OIDC provider, existing Enterprise PKS-provisioned Kubernetes clusters are upgraded to use OIDC. This invalidates your kubeconfig files. You must regenerate the files for all clusters.

    To configure Enterprise PKS to use UAA as the OIDC provider, do the following:

    1. Under Configure created clusters to use UAA as the OIDC provider, select Enabled. OIDC configuration checkbox
    2. For UAA OIDC Groups Claim, enter the name of your groups claim. This is used to set a user’s group in the JSON Web Token (JWT) claim. The default value is roles.
    3. For UAA OIDC Groups Prefix, enter a prefix for your groups claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a group name like oidc:developers. If you are configuring a new Enterprise PKS installation, the default is oidc:. If you are upgrading to Enterprise PKS v1.5, the default is -.
    4. For UAA OIDC Username Claim, enter the name of your username claim. This is used to set a user’s username in the JWT claim. The default value is user_name. Depending on your provider, admins can enter claims besides user_name, like email or name.
    5. For UAA OIDC Username Prefix, enter a prefix for your username claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a username like oidc:admin. If you are configuring a new Enterprise PKS installation, the default is oidc:. If you are upgrading to Enterprise PKS v1.5, the default is -.

      Note: Pivotal recommends adding OIDC prefixes to prevent OIDC users and groups from gaining unintended cluster privileges. When you upgrade to Enterprise PKS v1.5, if you do not change the values for UAA OIDC Groups Prefix or UAA OIDC Username Prefix, Enterprise PKS does not add prefixes.

      Warning: If you change the above values for a pre-existing Enterprise PKS installation, you must change any existing role bindings that bind to a username or group. If you do not change your role bindings, developers cannot access Kubernetes clusters. For instructions about creating role bindings, see Managing Cluster Access and Permissions.
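      For example, if you use the oidc: username prefix, a cluster role binding for an administrator might look like the following sketch; the binding name and username are hypothetical:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: example-oidc-cluster-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: "oidc:admin"            # the username as UAA presents it, including the prefix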

  7. Select one of the following options:

Configure LDAP as an Identity Provider

To integrate UAA with one or more LDAP servers, configure Enterprise PKS with your LDAP endpoint information as follows:

  1. Under UAA, select LDAP Server.
    LDAP Server configuration pane

  2. For Server URL, enter the URLs that point to your LDAP server. If you have multiple LDAP servers, separate their URLs with spaces. Each URL must include one of the following protocols:

    • ldap://: Use this protocol if your LDAP server uses an unencrypted connection.
    • ldaps://: Use this protocol if your LDAP server uses SSL for an encrypted connection. To support an encrypted connection, the LDAP server must hold a trusted certificate or you must import a trusted certificate to the JVM truststore.
  3. For LDAP Credentials, enter the LDAP Distinguished Name (DN) and password for binding to the LDAP server. For example, cn=administrator,ou=Users,dc=example,dc=com. If the bind user belongs to a different search base, you must use the full DN.

    Note: We recommend that you provide LDAP credentials that grant read-only permissions on the LDAP search base and the LDAP group search base.

  4. For User Search Base, enter the location in the LDAP directory tree where LDAP user search begins. The LDAP search base typically matches your domain name.

    For example, a domain named cloud.example.com may use ou=Users,dc=example,dc=com as its LDAP user search base.

  5. For User Search Filter, enter a string to use for LDAP user search criteria. The search criteria allows LDAP to perform more effective and efficient searches. For example, the standard LDAP search filter cn=Smith returns all objects with a common name equal to Smith.

    In the LDAP search filter string that you use to configure Enterprise PKS, use {0} instead of the username. For example, use cn={0} to return all LDAP objects with the same common name as the username.

    In addition to cn, other common attributes are mail, uid and, in the case of Active Directory, sAMAccountName.

    Note: For information about testing and troubleshooting your LDAP search filters, see Configuring LDAP Integration with Pivotal Cloud Foundry.

  6. For Group Search Base, enter the location in the LDAP directory tree where the LDAP group search begins.

    For example, a domain named cloud.example.com may use ou=Groups,dc=example,dc=com as its LDAP group search base.

    Follow the instructions in the Grant Enterprise PKS Access to an External LDAP Group section of Managing Users in Enterprise PKS with UAA to map the groups under this search base to roles in Enterprise PKS.

    Note: You must configure Group Search Base if you are mapping an external LDAP group to a kubernetes group or an admin role.

  7. For Group Search Filter, enter a string that defines LDAP group search criteria. The standard value is member={0}.

  8. For Server SSL Cert, paste in the root certificate from your CA certificate or your self-signed certificate.
    LDAP Server configuration pane

  9. For Server SSL Cert AltName, do one of the following:

    • If you are using ldaps:// with a self-signed certificate, enter a Subject Alternative Name (SAN) for your certificate.
    • If you are not using ldaps:// with a self-signed certificate, leave this field blank.
  10. For First Name Attribute, enter the attribute name in your LDAP directory that contains user first names. For example, cn.

  11. For Last Name Attribute, enter the attribute name in your LDAP directory that contains user last names. For example, sn.

  12. For Email Attribute, enter the attribute name in your LDAP directory that contains user email addresses. For example, mail.

  13. For Email Domain(s), enter a comma-separated list of the email domains for external users who can receive invitations to Apps Manager.

  14. For LDAP Referrals, choose how UAA handles LDAP server referrals to other user stores. UAA can follow the external referrals, ignore them without returning errors, or generate an error for each external referral and abort the authentication.

  15. For External Groups Whitelist, enter a comma-separated list of group patterns which need to be populated in the user’s id_token. For further information on accepted patterns see the description of the config.externalGroupsWhitelist in the OAuth/OIDC Identity Provider Documentation.

    Note: When sent as a Bearer token in the Authentication header, wide pattern queries for users who are members of multiple groups can cause the id_token to exceed the size supported by web servers.

    External Groups Whitelist field

  16. Click Save.

Configure SAML as an Identity Provider

Before you configure a SAML identity provider in the Enterprise PKS tile, you must configure your identity provider to designate Enterprise PKS as a service provider.

Refer to the table below for information about industry-standard identity providers and how to integrate them with Enterprise PKS:

Solution Name Integration Guide
Okta Single Sign-On Configuring Okta as a SAML Identity Provider
Azure Active Directory Configuring Azure Active Directory as a SAML Identity Provider

To integrate UAA with a SAML identity provider, configure Enterprise PKS, by doing the following:

  1. Under UAA, select SAML Identity Provider.

    SAML Fields 1

  2. For Provider Name, enter a unique name you create for the Identity Provider. This name can include only alphanumeric characters, +, _, and -. You must not change this name after deployment because all external users use it to link to the provider.

  3. For Display Name, enter a display name for your provider. This display name appears as a link on your Pivotal login page, which you can access at https://PKS-API:8443/login.

    SAML provider display name

  4. Retrieve the metadata from your identity provider and enter it into either the Provider Metadata or the Provider Metadata URL field, depending on whether your identity provider exposes a metadata URL. You recorded your identity provider metadata when you configured your identity provider to designate Enterprise PKS as a service provider.

    Enter your identity provider metadata by doing one of the following:

    • If your identity provider exposes a metadata URL, enter it in Provider Metadata URL.
    • Otherwise, download your identity provider metadata and paste the XML into Provider Metadata.

    Pivotal recommends that you use the Provider Metadata URL rather than Provider Metadata because the metadata can change.

    Note: You need to provide only one of the above configurations. If you configure both, your identity provider defaults to using the Provider Metadata URL.

  5. For Name ID Format, select the name identifier format for your SAML Identity Provider. This translates to username on Enterprise PKS. The default is Email Address.

    SAML Fields 2

  6. For First Name Attribute and Last Name Attribute, enter the attribute names in your SAML database that correspond to the first and last names in each user record. These fields are case sensitive.

  7. For Email Attribute, enter the attribute name in your SAML assertion that corresponds to the email address in each user record, for example, EmailID. This field is case sensitive.

  8. For External Groups Attribute, enter the attribute name in your SAML database for your user groups. This field is case sensitive. To map the groups from the SAML assertion to admin roles in PKS, see Grant Enterprise PKS Access to an External LDAP Group in Managing Enterprise PKS Users with UAA.

  9. By default, all SAML authentication requests from Enterprise PKS are signed. To change this, disable Sign Authentication Requests and configure your identity provider to verify SAML authentication requests.

  10. To validate the signature for the incoming SAML assertions, enable Required Signed Assertions and configure your Identity Provider to send signed SAML assertions.

  11. For Signature Algorithm, choose an algorithm from the dropdown to use for signed requests and assertions. The default value is SHA256.

  12. Click Save.

(Optional) Host Monitoring

The Host Monitoring pane provides settings for monitoring host VMs in your Enterprise PKS deployment. In this pane, you can configure Enterprise PKS to forward logs from your BOSH-deployed VMs to a syslog endpoint as well as configure transport of multiple metric sources. These settings are not visible to Kubernetes cluster users.

Host Monitoring pane

To configure host monitoring in the Enterprise PKS tile, do the following:

Syslog

  1. Click Host Monitoring.
  2. Under Enable Syslog for PKS, select Yes.
  3. Under Address, enter the destination syslog endpoint.
  4. Under Port, enter the destination syslog port.
  5. Under Transport Protocol, select a transport protocol for log forwarding.
  6. (Optional) To enable TLS encryption during log forwarding, complete the following steps:
    1. Ensure Enable TLS is selected.

      Note: Logs may contain sensitive information, such as cloud provider credentials. Pivotal strongly recommends that you enable TLS encryption for log forwarding.


    2. Under Permitted Peer, provide the accepted fingerprint (SHA1) or name of the remote peer. For example, *.YOUR-LOGGING-SYSTEM.com.
    3. Under TLS Certificate, provide a TLS certificate for the destination syslog endpoint.

      Note: You do not need to provide a new certificate if the TLS certificate for the destination syslog endpoint is signed by a Certificate Authority (CA) in your BOSH certificate store.

  7. (Optional) Under Max Message Size, enter a maximum message size for logs that are forwarded to a syslog endpoint. By default, the Max Message Size field is 10,000 characters.
  8. Click Save.

VMware vRealize Log Insight Integration

You can manage logs using VMware vRealize Log Insight (vRLI). The integration pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod stdout and stderr.

Note: Before you configure the vRLI integration, you must have a vRLI license and vRLI must be installed, running, and available in your environment. You need to provide the live instance address during configuration. For instructions and additional information, see the vRealize Log Insight documentation.

  1. By default, vRLI logging is disabled. To enable and configure vRLI logging, under Enable VMware vRealize Log Insight Integration?, select Yes and then perform the following steps:
    Enable VMware vRealize Log Insight Integration
  2. Under Host, enter the IP address or FQDN of the vRLI host.
  3. (Optional) Select the Enable SSL? checkbox to encrypt the logs being sent to vRLI using SSL.
  4. Choose one of the following SSL certificate validation options:
    • To skip certificate validation for the vRLI host, select the Disable SSL certificate validation checkbox. Select this option if you are using a self-signed certificate in order to simplify setup for a development or test environment.

      Note: Disabling certificate validation is not recommended for production environments.

    • To enable certificate validation for the vRLI host, clear the Disable SSL certificate validation checkbox.
  5. (Optional) If your vRLI certificate is not signed by a trusted CA root or other well known certificate, enter the certificate in the CA certificate field. Locate the PEM of the CA used to sign the vRLI certificate, copy the contents of the certificate file, and paste them into the field. Certificates must be in PEM-encoded format.
  6. Under Rate limiting, enter a time in milliseconds to change the rate at which logs are sent to the vRLI host. The rate limit specifies the minimum time between messages before the fluentd agent begins to drop messages. The default value 0 means that the rate is not limited, which suffices for many deployments.

    Note: If your deployment is generating a high volume of logs, you can increase this value to limit network traffic. Consider starting with a lower value, such as 10, then tuning to optimize for your deployment. A large number might result in dropping too many log entries.

  7. Click Save. These settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes. If the Upgrade all clusters errand has been enabled, these settings are also applied to existing clusters.

    Note: The Enterprise PKS tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.

Telegraf

In Host Monitoring, you can configure the Telegraf agent to collect metrics from Node Exporter, etcd, and the Kubelet agent and send the metrics to a third-party monitoring service:

  • Node Exporter, a Prometheus exporter, provides infrastructure and OS metrics. For more information about Node Exporter metrics, see the Node Exporter GitHub repository.
  • etcd provides local monitoring information that can be used for system health checking and cluster debugging.
  • Kubelet provides an endpoint with metrics for all workloads running in each Kubernetes cluster.

To connect a third-party monitoring service to Enterprise PKS, complete the following steps:

  1. Create a configuration file for the third-party monitoring service. For instructions, see Create a Configuration File.
  2. (Optional) Select Include etcd metrics. This includes etcd server and debug metrics.
  3. (Optional) Select Enable node exporter on master. This enables Node Exporter on the localhost of each master node VM.
  4. (Optional) Select Include kubelet metrics. This includes all workload metrics across your Kubernetes clusters. Enabling Include kubelet metrics generates a high volume of metrics.
  5. To enable a third-party monitoring service, configure Setup Telegraf Outputs. Enter the contents of the configuration file you created. If you do not want to output any metrics, leave the default value [[outputs.discard]].
  6. Click Save.

(Optional) In-Cluster Monitoring

In the In-Cluster Monitoring pane of the Enterprise PKS tile, you can configure several observability components that run in Kubernetes clusters and capture logs and metrics about your workloads. These components are visible to cluster users.

Cluster Monitoring pane

To configure in-cluster monitoring in the Enterprise PKS tile, do the following:

Wavefront

You can monitor Kubernetes clusters and pods metrics externally using the integration with Wavefront by VMware.

Note: Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. You provide your Wavefront access token during configuration and enabling errands. For additional information, see the Wavefront documentation.

To enable and configure Wavefront monitoring, do the following:

  1. In the Enterprise PKS tile, select In-Cluster Monitoring.
  2. Under Wavefront Integration, select Yes.
    Wavefront configuration
  3. Under Wavefront URL, enter the URL of your Wavefront subscription. For example:
    https://try.wavefront.com/api
    
  4. Under Wavefront Access Token, enter the API token for your Wavefront subscription.
  5. To configure Wavefront to send alerts by email, enter email addresses or Wavefront Target IDs separated by commas under Wavefront Alert Recipient, using the following syntax:

    USER-EMAIL,WAVEFRONT-TARGETID_001,WAVEFRONT-TARGETID_002
    

    Where:

    • USER-EMAIL is the alert recipient’s email address.
    • WAVEFRONT-TARGETID_001 and WAVEFRONT-TARGETID_002 are your comma-delimited Wavefront Target IDs.

    For example:

    randomuser@example.com,51n6psdj933ozdjf
    

  6. Click Save.

To create alerts, you must enable errands in Enterprise PKS.

  1. In the Enterprise PKS tile, select Errands.
  2. On the Errands pane, enable Create pre-defined Wavefront alerts errand.
  3. Enable Delete pre-defined Wavefront alerts errand.
  4. Click Save. Your settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes.

The Enterprise PKS tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.

VMware vRealize Operations Management Pack for Container Monitoring

You can monitor Enterprise PKS Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring.

To integrate Enterprise PKS with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running cAdvisor in your PKS deployment.

cAdvisor is an open source tool that provides monitoring and statistics for Kubernetes clusters.

To deploy a cAdvisor container, do the following:

  1. Select In-Cluster Monitoring.
  2. Under Deploy cAdvisor, select Yes.
  3. Click Save.

For more information about integrating this type of monitoring with PKS, see the VMware vRealize Operations Management Pack for Container Monitoring User Guide and Release Notes in the VMware documentation.

Metric Sink Resources

You can configure Enterprise PKS-provisioned clusters to send Kubernetes node metrics and pod metrics to metric sinks. To enable clusters to send Kubernetes node metrics and pod metrics to metric sinks, do the following:

  1. In In-Cluster Monitoring, select Enable Metric Sink Resources. If you enable this checkbox, Enterprise PKS deploys Telegraf as a DaemonSet, which runs a pod on each node in all your Kubernetes clusters.

    Note: After configuring Enterprise PKS, you must create your metric sinks. To create metric sinks, follow the instructions in Creating Sink Resources.
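    For reference, a ClusterMetricSink resource typically looks like the following minimal sketch. The apiVersion, output type, and values shown are assumptions based on the sink resource format documented in Creating Sink Resources and may differ in your Enterprise PKS version:

    apiVersion: pksapi.io/v1beta1     # assumed API group/version; confirm in Creating Sink Resources
    kind: ClusterMetricSink
    metadata:
      name: example-metric-sink       # hypothetical name
    spec:
      inputs: []                      # see Creating Sink Resources for supported inputs
      outputs:
      - type: http                    # hypothetical output; replace with your monitoring service type
        url: https://metrics.example.com/receive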

  2. (Optional) To enable Node Exporter to send worker node metrics to metric sinks of kind ClusterMetricSink, select Enable Node Exporter on workers. If you enable this checkbox, Enterprise PKS deploys Node Exporter as a DaemonSet, which runs a pod on each worker node in all your Kubernetes clusters.

    For more information about Node Exporter metrics, see the Node Exporter GitHub repository.

  3. Click Save.

Log Sink Resources

To enable clusters to send Kubernetes API events and pod logs to log sinks, do the following:

  1. Select Enable Log Sink Resources. If you enable this checkbox, Enterprise PKS deploys Fluent Bit as a DaemonSet, which runs a pod on each node in all your Kubernetes clusters.
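    For reference, a syslog ClusterLogSink resource typically looks like the following minimal sketch. The apiVersion, fields, and endpoint shown are assumptions based on the format documented in Creating Sink Resources and may differ in your Enterprise PKS version:

    apiVersion: pksapi.io/v1beta1     # assumed API group/version; confirm in Creating Sink Resources
    kind: ClusterLogSink
    metadata:
      name: example-log-sink          # hypothetical name
    spec:
      type: syslog
      host: logs.example.com          # hypothetical syslog endpoint
      port: 514
      enable_tls: true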
  2. Click Save.

Tanzu Mission Control (Experimental)

Participants in the VMware Tanzu Mission Control beta program can use the Tanzu Mission Control (Experimental) pane of the Enterprise PKS tile to integrate their Enterprise PKS deployment with Tanzu Mission Control.

Tanzu Mission Control integration lets you monitor and manage Enterprise PKS clusters from the Tanzu Mission Control console, making the Tanzu Mission Control console a single point of control for all Kubernetes clusters.

Tanzu Mission Control Integration

Warning: VMware Tanzu Mission Control is currently experimental Beta software and is intended for evaluation and test purposes only. For more information about Tanzu Mission Control, see the VMware Tanzu Mission Control home page.

Integrate Tanzu Mission Control

To integrate Enterprise PKS with Tanzu Mission Control:

  1. Ensure that the PKS Control Plane VM has internet access and can connect to cna.tmc.cloud.vmware.com.

  2. In the Tanzu Mission Control (Experimental) pane of the Enterprise PKS tile, select Yes under Tanzu Mission Control Integration.

  3. Configure the fields below:

    • Tanzu Mission Control URL: Enter the API URL of your Tanzu Mission Control subscription, without a trailing slash (/).
    • VMware Cloud Services API token: Enter your API token to authenticate with VMware Cloud Services APIs.
    • Tanzu Mission Control Cluster Group: Enter the name of a Tanzu Mission Control cluster group.
      • The name can be default or another value, depending on your role and access policy:
        • Org Member users in VMware cloud services have a service.admin role in Tanzu Mission Control. These users:
          • By default, can only create and attach clusters in the default cluster group.
          • Can create new cluster groups after an organization.admin user grants them the clustergroup.admin or clustergroup.edit role.
        • VMware cloud services Org Owner users have organization.admin permissions in Tanzu Mission Control. These users:
          • Can create cluster groups.
          • Can grant clustergroup roles to service.admin users through the Tanzu Mission Control Access Policy view.
    • Tanzu Mission Control Cluster Name Prefix: Enter a name prefix for identifying the PKS clusters in Tanzu Mission Control.

Warning: Once the tile is deployed with a configured cluster group, the cluster group cannot be updated.

Note: When you upgrade your Kubernetes clusters and have Tanzu Mission Control Integration enabled, existing clusters will be attached to Tanzu Mission Control.

CEIP and Telemetry

To configure VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry), do the following:

  1. Click CEIP and Telemetry.
  2. Review the information about the CEIP and Telemetry.
  3. To specify your level of participation in the CEIP and Telemetry program, select one of the Participation Level options:
    • None: If you select this option, data is not collected from your Enterprise PKS installation.
    • (Default) Standard: If you select this option, data is collected from your Enterprise PKS installation to improve Enterprise PKS. This participation level is anonymous and does not permit the CEIP and Telemetry to identify your organization.
    • Enhanced: If you select this option, data is collected from your Enterprise PKS installation to provide you proactive support and other benefits. This participation level permits the CEIP and Telemetry to identify your organization. For more information about the CEIP and Telemetry participation levels, see Participation Levels in Telemetry.
  4. If you selected the Enhanced participation level, complete the following:
    • Enter your VMware account number or Pivotal customer number in the VMware Account Number or Pivotal Customer Number field. If you are a VMware customer, you can find your VMware Account Number in your Account Summary on my.vmware.com. If you are a Pivotal customer, you can find your Pivotal Customer Number in your Pivotal Order Confirmation email.
    • (Optional) Enter a descriptive name for your PKS installation in the PKS Installation Label field. The label you assign to this installation will be used in telemetry reports to identify the environment.
  5. To provide information about the purpose for this installation, select an option in the PKS Installation Type list.
    CEIP and Telemetry installation type
  6. Click Save.

Note: If you join the CEIP and Telemetry Program for Enterprise PKS, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph on port 443.

Note: Even if you select None, Enterprise PKS-provisioned clusters send usage data to the PKS control plane. However, this data is not sent to VMware or Pivotal and remains on your Enterprise PKS installation.

Errands

Errands are scripts that run at designated points during an installation.

To configure when post-deploy and pre-delete errands for Enterprise PKS are run, make a selection in the dropdown next to the errand.

WARNING: You must enable the NSX-T Validation errand to verify and tag required NSX-T objects.

Errand configuration pane

For more information about errands and their configuration state, see Managing Errands in Ops Manager.

Warning: If Upgrade all clusters errand is enabled, updating the Enterprise PKS tile with a new Linux stemcell triggers the rolling of every Linux VM in each Kubernetes cluster. Similarly, updating the Enterprise PKS tile with a new Windows stemcell triggers the rolling of every Windows VM in your Kubernetes clusters. This automatic rolling ensures that all your VMs are patched. To avoid workload downtime, use the resource configuration recommended in What Happens During Enterprise PKS Upgrades and Maintaining Workload Uptime.

(Optional) Resource Config

To modify the resource usage of Enterprise PKS and specify your PKS API load balancer, follow the steps below:

  1. Select Resource Config.
    Resource pane configuration
  2. (Optional) Edit other resources used by the Pivotal Container Service job. The Pivotal Container Service job requires a VM with the following minimum resources:
    CPU: 2
    Memory: 8 GB
    Disk Space: 29 GB

    Note: The automatic VM Type value matches the minimum recommended size for the Pivotal Container Service job. If you experience timeouts or slowness when interacting with the PKS API, select a VM Type with greater CPU and memory resources.

  3. (Optional) In the Load Balancers column, enter the name of your PKS API load balancer.

    Note: After you click Apply Changes for the first time, BOSH assigns the PKS VM an IP address. BOSH uses the name you provide in the Load Balancers column to locate your load balancer and then connects the load balancer to the PKS VM using its new IP address.

  4. (Optional) If you do not use a NAT instance, select Internet Connected to allow component instances direct access to the internet.

Step 3: Apply Changes

After configuring the Enterprise PKS tile, follow the steps below to deploy the tile:

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.
  3. Click Apply Changes.

Step 4: Install the PKS and Kubernetes CLIs

The PKS CLI and the Kubernetes CLI help you interact with your Enterprise PKS-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, see Installing the PKS CLI and Installing the Kubernetes CLI.

Step 5: Verify NAT Rules

If you are using NAT mode, verify that you have created the required NAT rules for the Enterprise PKS Management Plane. See Creating the Enterprise PKS Management Plane for details.

In addition, for NAT and no-NAT modes, verify that you created the required NAT rule for Kubernetes master nodes to access NSX Manager. For details, see Creating the PKS Compute Plane.

If you want your developers to be able to access the PKS CLI from their external workstations, create a DNAT rule that maps a routable IP address to the PKS API VM. This must be done after Enterprise PKS is successfully deployed and it has an IP address. See Create DNAT Rule on T0 Router for External Access to the PKS CLI for details.

Step 6: Configure Authentication for Enterprise PKS

Follow the procedures in Setting Up Enterprise PKS Admin Users on vSphere in Installing Enterprise PKS > vSphere.

Next Steps

After installing Enterprise PKS on vSphere with NSX-T integration, complete the following tasks:


Please send any feedback you have to pks-feedback@pivotal.io.