Installing Enterprise PKS on Azure

This topic describes how to install and configure VMware Enterprise PKS on Azure.

Prerequisites

Before performing the procedures in this topic, you must have deployed and configured Ops Manager. For more information, see Azure Prerequisites and Resource Requirements.

If you use an instance of Ops Manager that you configured previously to install other runtimes, perform the following steps before you install Enterprise PKS:

  1. Navigate to Ops Manager.
  2. Open the Director Config pane.
  3. Select the Enable Post Deploy Scripts checkbox.
  4. Click the Installation Dashboard link to return to the Installation Dashboard.
  5. Click Review Pending Changes. Select all products you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.
  6. Click Apply Changes.

Step 1: Install Enterprise PKS

To install Enterprise PKS, do the following:

  1. Download the product file from VMware Tanzu Network.
  2. Navigate to https://YOUR-OPS-MANAGER-FQDN/ in a browser to log in to the Ops Manager Installation Dashboard.
  3. Click Import a Product to upload the product file.
  4. Under Enterprise PKS in the left column, click the plus sign to add this product to your staging area.

Step 2: Configure Enterprise PKS

Click the orange Enterprise PKS tile to start the configuration process.

PKS tile on the Ops Manager installation dashboard

WARNING: When you configure the Enterprise PKS tile, do not use spaces in any field entries. This includes spaces between characters as well as leading and trailing spaces. If you use a space in any field entry, the deployment of Enterprise PKS fails.

Assign Networks

To configure the networks used by the Enterprise PKS control plane:

  1. Click Assign Networks.

    Assign Networks pane in Ops Manager

  2. Under Network, select the infrastructure subnet that you created for Enterprise PKS component VMs, such as the PKS API and PKS Database VMs. For example, infrastructure.

  3. Under Service Network, select the services subnet that you created for Kubernetes cluster VMs. For example, services.

  4. Click Save.

PKS API

To configure the PKS API, perform the following steps:

  1. Click PKS API.

  2. Under Certificate to secure the PKS API, provide a certificate and private key pair.
    PKS API pane configuration
    The certificate that you supply should cover the specific subdomain that routes to the PKS API VM with TLS termination on the ingress. To check your certificate's coverage, see the example after this procedure.

    Warning: TLS certificates generated for wildcard DNS records only work for a single domain level. For example, a certificate generated for *.pks.EXAMPLE.com does not permit communication to *.api.pks.EXAMPLE.com. If the certificate does not contain the correct FQDN for the PKS API, calls to the API will fail.

    You can enter your own certificate and private key pair, or have Ops Manager generate one for you.
    To generate a certificate using Ops Manager:

    1. Click Generate RSA Certificate for a new install or Change to update a previously-generated certificate.
    2. Enter the domain for your API hostname. This must match the domain you enter under API Hostname (FQDN) below. It can be a standard FQDN or a wildcard domain.
    3. Click Generate.
      PKS API certificate generation
  3. Under API Hostname (FQDN), enter the FQDN that you registered to point to the PKS API load balancer, such as api.pks.example.com. To retrieve the public IP address or FQDN of the PKS API load balancer, see the terraform.tfstate file.

  4. Under Worker VM Max in Flight, enter the maximum number of non-canary worker instances to create or resize in parallel within an availability zone.

    This field sets the max_in_flight variable value. When you create or resize a cluster, the max_in_flight value limits the number of component instances that can be created or started simultaneously. By default, the max_in_flight value is set to 4, which means that up to four component instances can be created or started at a time.

  5. Click Save.
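
If you supply your own certificate, you can check its coverage and the DNS record for the PKS API before you deploy. The following is a minimal sketch using standard openssl and dig commands; pks-api.crt and api.pks.example.com are placeholders for your own certificate file and FQDN.

    # Confirm that the certificate's Subject Alternative Names cover the PKS API FQDN.
    openssl x509 -in pks-api.crt -noout -text | grep -A 1 "Subject Alternative Name"

    # Confirm that the FQDN resolves to the PKS API load balancer.
    dig +short api.pks.example.com

A SAN entry of *.pks.example.com covers api.pks.example.com, but not names one level deeper such as v1.api.pks.example.com, which is the wildcard limitation described in the warning above.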

Plans

A plan defines a set of resource types used for deploying a cluster.

Activate a Plan

You must first activate and configure Plan 1, and afterwards you can optionally activate Plan 2 through Plan 10.

To activate and configure a plan, perform the following steps:

  1. Click the plan that you want to activate.

    Note: Plans 11, 12, and 13 support only Windows worker-based Kubernetes clusters, and only on vSphere with Flannel.

  2. Select Active to activate the plan and make it available to developers deploying clusters.
    Plan pane configuration
  3. Under Name, provide a unique name for the plan.
  4. Under Description, edit the description as needed. The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
  5. Under Master/ETCD Node Instances, select the default number of Kubernetes master/etcd nodes to provision for each cluster. You can enter 1, 3, or 5.

    Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

    In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Configuring Telegraf in Enterprise PKS.

    WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Enterprise PKS does not support changing the number of master/etcd nodes for plans with existing clusters.

  6. Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the Master Node VM Size section of VM Sizing for Enterprise PKS Clusters.

  7. Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.

  8. Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by Enterprise PKS. If you select more than one AZ, Enterprise PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs. If you are using multiple masters, Enterprise PKS deploys the master and worker VMs across the AZs in round-robin fashion.

  9. Under Maximum number of workers on a cluster, set the maximum number of Kubernetes worker node VMs that Enterprise PKS can deploy for each cluster. Enter any whole number in this field.
    Plan pane configuration, part two

  10. Under Worker Node Instances, specify the default number of Kubernetes worker nodes the PKS CLI provisions for each cluster. The Worker Node Instances setting must be less than, or equal to, the Maximum number of workers on a cluster setting.

    For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see PersistentVolumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    For more information about creating clusters, see Creating Clusters.

    Note: Changing a plan’s Worker Node Instances setting does not alter the number of worker nodes on existing clusters. For information about scaling an existing cluster, see Scale Horizontally by Changing the Number of Worker Nodes Using the PKS CLI in Scaling Existing Clusters.

  11. Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see Worker Node VM Number and Size in VM Sizing for Enterprise PKS Clusters.

    Note: Enterprise PKS requires a Worker VM Type with an ephemeral disk size of 32 GB or more.

  12. Under Worker Persistent Disk Type, select the size of the persistent disk for the Kubernetes worker node VMs.

  13. Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. Enterprise PKS deploys worker nodes equally across the AZs you select.

  14. Under Kubelet customization - system-reserved, enter resource values that Kubelet can use to reserve resources for system daemons. For example, memory=250Mi, cpu=150m. For more information about system-reserved values, see the Kubernetes documentation. For a sketch of how this field and the eviction-hard field below map to kubelet settings, see the example after this procedure.

  15. Under Kubelet customization - eviction-hard, enter threshold limits that Kubelet can use to evict pods when they exceed the limit. Enter limits in the format EVICTION-SIGNAL=QUANTITY. For example, memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%. For more information about eviction thresholds, see the Kubernetes documentation.

    WARNING: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.

  16. Under Errand VM Type, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.

  17. (Optional) Under (Optional) Add-ons - Use with caution, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using --- as a separator. For more information, see Adding Custom Linux Workloads.

  18. (Optional) To allow users to create pods with privileged containers, select the Allow Privileged option. For more information, see Pods in the Kubernetes documentation.

    Note: Enabling the Allow Privileged option means that all containers in the cluster can run in privileged mode. Pod Security Policy (PSP) provides a privileged parameter that you can use to enable or disable pods running in privileged mode. As a best practice, if you enable Allow Privileged, define a PSP to limit which pods run in privileged mode. If you implement PSP for privileged pods, you must enable Allow Privileged.

  19. (Optional) Enable or disable one or more admission controller plugins: PodSecurityPolicy, DenyEscalatingExec, and SecurityContextDeny. For more information, see Using Admission Control Plugins for Enterprise PKS Clusters.

  20. (Optional) Under Node Drain Timeout(mins), enter the timeout in minutes for the node to drain pods. If you set this value to 0, the node drain does not terminate.

  21. (Optional) Under Pod Shutdown Grace Period (seconds), enter a timeout in seconds for the node to wait before it forces the pod to terminate. If you set this value to -1, the default timeout is set to the one specified by the pod.

  22. (Optional) To configure when the node drains, enable the following:

    • Force node to drain even if it has running pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.
    • Force node to drain even if it has running DaemonSet-managed pods.
    • Force node to drain even if it has running pods using emptyDir.
    • Force node to drain even if pods are still running after timeout.

    Warning: If you select Force node to drain even if pods are still running after timeout, the node kills all running workloads on pods. Before enabling this configuration, set Node Drain Timeout to a value greater than 0.

    For more information about configuring default node drain behavior, see Worker Node Hangs Indefinitely in Troubleshooting.

  23. Click Save.
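
The two Kubelet customization fields above correspond to the kubelet's system-reserved and eviction-hard settings. As an illustration only (the tile applies these values for you; you do not run the kubelet by hand), the example field values map onto kubelet settings of the following form:

    # Illustration: how the tile's Kubelet customization values map to kubelet settings.
    # --system-reserved reserves resources for system daemons on each worker node.
    # --eviction-hard sets hard thresholds at which the kubelet evicts pods.
    kubelet --system-reserved=cpu=150m,memory=250Mi \
            --eviction-hard=memory.available=100Mi,nodefs.available=10%,nodefs.inodesFree=5%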

Deactivate a Plan

To deactivate a plan, perform the following steps:

  1. Click the plan that you want to deactivate.
  2. Select Inactive.
  3. Click Save.

Kubernetes Cloud Provider

To configure your Kubernetes cloud provider settings, follow the procedures below:

  1. Click Kubernetes Cloud Provider.

  2. Under Choose your IaaS, select Azure.

    Azure pane configuration

  3. Under Azure Cloud Name, select the identifier of your Azure environment.

  4. Enter Subscription ID. This is the ID of the Azure subscription that the cluster is deployed in.

  5. Enter Tenant ID. This is the Azure Active Directory (AAD) tenant ID for the subscription that the cluster is deployed in.

  6. Enter Location. This is the location of the resource group that the cluster is deployed in.

    You set the location name in the terraform.tfvars file in Deploying Ops Manager to Azure Using Terraform. However, Terraform removes the spaces from this name and converts it to lowercase. For example, if you entered Central US in the terraform.tfvars file, it becomes centralus. You must enter the converted form of the location name in the Location field, such as centralus. To look up the converted name with the Azure CLI, see the example after this procedure.

  7. Enter Resource Group. This is the name of the resource group that the cluster is deployed in.

  8. Enter Virtual Network. This is the name of the virtual network that the cluster is deployed in.

  9. Enter Virtual Network Resource Group. This is the name of the resource group that the virtual network is deployed in.

  10. Enter Default Security Group. This is the name of the security group attached to the cluster’s subnet.

    Note: Enterprise PKS automatically assigns the default security group to each VM when you create a Kubernetes cluster. However, on Azure this automatic assignment may not occur. For more information, see Azure Default Security Group Is Not Automatically Assigned to Cluster VMs in Enterprise PKS Release Notes.

  11. Enter Primary Availability Set. This is the name of the availability set that will be used as the load balancer back end.

    Terraform creates this availability set and its name is YOUR-ENVIRONMENT-NAME-pks-as, where YOUR-ENVIRONMENT-NAME is the value you provided for env_name in the terraform.tfvars file. For more information, see Download Templates and Edit Variables File in Deploying Ops Manager to Azure Using Terraform in the VMware Tanzu documentation. You can also find the name of the availability set by logging in to the Azure console.

  12. For Master Managed Identity, enter pks-master. You created the managed identity for the master nodes in Create the Master Nodes Managed Identity in Creating Managed Identities in Azure for Enterprise PKS.

  13. For Worker Managed Identity, enter pks-worker. You created the managed identity for the worker nodes in Create the Worker Nodes Managed Identity in Creating Managed Identities in Azure for Enterprise PKS.

  14. Click Save.
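
If you have the Azure CLI installed and are logged in with az login, you can confirm several of the values above before saving. This is a minimal sketch; MY-RESOURCE-GROUP is a placeholder for the resource group that Terraform created for your deployment.

    # Subscription ID and Tenant ID for the current subscription.
    az account show --query "{subscriptionId:id, tenantId:tenantId}" --output table

    # Canonical location name. For example, "Central US" becomes centralus.
    az account list-locations --query "[?displayName=='Central US'].name" --output tsv

    # Availability sets in the resource group. Look for the one ending in -pks-as.
    az vm availability-set list --resource-group MY-RESOURCE-GROUP --query "[].name" --output tsv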

Networking

To configure networking, do the following:

  1. Click Networking.
  2. Under Container Networking Interface, select Flannel.
  3. (Optional) Enter values for Kubernetes Pod Network CIDR Range and Kubernetes Service Network CIDR Range.
    • Ensure that the CIDR ranges do not overlap and have sufficient space for your deployed services.
    • Ensure that the CIDR range for the Kubernetes Pod Network CIDR Range is large enough to accommodate the expected maximum number of pods.
  4. Under Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent), leave the Enable outbound internet access checkbox unselected. You must leave this checkbox unselected due to an incompatibility between the public dynamic IPs provided by BOSH and load balancers on Azure.

  5. Click Save.

UAA

To configure the UAA server:

  1. Click UAA.
  2. Under PKS API Access Token Lifetime, enter a time in seconds for the PKS API access token lifetime. This field defaults to 600.

    UAA pane configuration

  3. Under PKS API Refresh Token Lifetime, enter a time in seconds for the PKS API refresh token lifetime. This field defaults to 21600.

  4. Under PKS Cluster Access Token Lifetime, enter a time in seconds for the cluster access token lifetime. This field defaults to 600.

  5. Under PKS Cluster Refresh Token Lifetime, enter a time in seconds for the cluster refresh token lifetime. This field defaults to 21600.

    Note: VMware recommends using the default UAA token timeout values. By default, access tokens expire after ten minutes and refresh tokens expire after six hours.

  6. Under Configure created clusters to use UAA as the OIDC provider, select Enabled or Disabled. This is a global default setting for PKS-provisioned clusters. For more information, see OIDC Provider for Kubernetes Clusters.

    To configure Enterprise PKS to use UAA as the OIDC provider:

    1. Under Configure created clusters to use UAA as the OIDC provider, select Enabled.
    2. For UAA OIDC Groups Claim, enter the name of your groups claim. This is used to set a user’s group in the JSON Web Token (JWT) claim. The default value is roles.
    3. For UAA OIDC Groups Prefix, enter a prefix for your groups claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a group name like oidc:developers. The default value is oidc:.
    4. For UAA OIDC Username Claim, enter the name of your username claim. This is used to set a user’s username in the JWT claim. The default value is user_name. Depending on your provider, you can enter claims besides user_name, like email or name.
    5. For UAA OIDC Username Prefix, enter a prefix for your username claim. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a username like oidc:admin. The default value is oidc:.

      Warning: VMware recommends adding OIDC prefixes to prevent users and groups from gaining unintended cluster privileges. If you change the above values for a pre-existing Enterprise PKS installation, you must change any existing role bindings that bind to a username or group. If you do not change your role bindings, developers cannot access Kubernetes clusters. For instructions, see Managing Cluster Access and Permissions.

  7. Select one of the following options:

(Optional) Host Monitoring

In Host Monitoring, you can configure one or more of the following:

  • To configure Syslog, see Syslog. Syslog forwards log messages from all BOSH-deployed VMs to a syslog endpoint.
  • To configure Telegraf, see Configuring Telegraf in Enterprise PKS. The Telegraf agent sends metrics from PKS API, master node, and worker node VMs to a monitoring service, such as Wavefront or Datadog.

For more information about these components, see Monitoring Enterprise PKS and PKS-Provisioned Clusters.

Host Monitoring pane

Syslog

To configure Syslog for all BOSH-deployed VMs in Enterprise PKS:

  1. Click Host Monitoring.
  2. Under Enable Syslog for PKS, select Yes.
  3. Under Address, enter the destination syslog endpoint.
  4. Under Port, enter the destination syslog port.
  5. Under Transport Protocol, select a transport protocol for log forwarding.
  6. (Optional) To enable TLS encryption during log forwarding, complete the following steps:
    1. Ensure Enable TLS is selected.

      Note: Logs may contain sensitive information, such as cloud provider credentials. VMware recommends that you enable TLS encryption for log forwarding.

    2. Under Permitted Peer, provide the accepted fingerprint (SHA1) or name of the remote peer. For example, *.YOUR-LOGGING-SYSTEM.com.
    3. Under TLS Certificate, provide a TLS certificate for the destination syslog endpoint.

      Note: You do not need to provide a new certificate if the TLS certificate for the destination syslog endpoint is signed by a Certificate Authority (CA) in your BOSH certificate store.

  7. (Optional) Under Max Message Size, enter a maximum message size for logs that are forwarded to a syslog endpoint. By default, the Max Message Size field is 10,000 characters.
  8. Click Save.
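
If you enable TLS, you can verify that the destination endpoint completes a TLS handshake and presents the expected certificate before you save. A minimal sketch, assuming a placeholder endpoint logs.example.com listening on TCP port 6514:

    # Inspect the TLS handshake and certificate chain of the syslog endpoint.
    openssl s_client -connect logs.example.com:6514 -servername logs.example.com </dev/null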

(Optional) In-Cluster Monitoring

In In-Cluster Monitoring, you can configure one or more observability components and integrations that run in Kubernetes clusters and capture logs and metrics about your workloads. For more information, see Monitoring Workers and Workloads.

Cluster Monitoring pane

To configure in-cluster monitoring, complete one or more of the following sections:

Wavefront

You can monitor Kubernetes clusters and pods metrics externally using the integration with Wavefront by VMware.

Note: Before you configure Wavefront integration, you must have an active Wavefront account and access to a Wavefront instance. During configuration, you provide your Wavefront access token and enable the Wavefront errands. For additional information, see the Wavefront documentation.

To enable and configure Wavefront monitoring:

  1. In the Enterprise PKS tile, select In-Cluster Monitoring.
  2. Under Wavefront Integration, select Yes.
  3. Under Wavefront URL, enter the URL of your Wavefront subscription. For example:
    https://try.wavefront.com/api
    
  4. Under Wavefront Access Token, enter the API token for your Wavefront subscription.
  5. To configure Wavefront to send alerts by email, enter email addresses or Wavefront Target IDs separated by commas under Wavefront Alert Recipient, using the following syntax:

    USER-EMAIL,WAVEFRONT-TARGETID_001,WAVEFRONT-TARGETID_002
    

    Where:

    • USER-EMAIL is the alert recipient’s email address.
    • WAVEFRONT-TARGETID_001 and WAVEFRONT-TARGETID_002 are your comma-delimited Wavefront Target IDs.

    For example:

    randomuser@example.com,51n6psdj933ozdjf
    

  6. Click Save.

To create alerts, you must enable errands in Enterprise PKS.

  1. In the Enterprise PKS tile, select Errands.
  2. On the Errands pane, enable Create pre-defined Wavefront alerts errand.
  3. Enable Delete pre-defined Wavefront alerts errand.
  4. Click Save. Your settings apply to any clusters created after you have saved these configuration settings and clicked Apply Changes.

The Enterprise PKS tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.
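
Because the tile does not validate these settings, you can sanity-check the URL and token yourself before applying changes. The sketch below assumes a placeholder token and the example subscription URL above, and that your Wavefront instance exposes the standard REST API; consult the Wavefront API documentation for the endpoints available to your account.

    # A 2xx response suggests the URL and token are usable; 401 indicates a bad token.
    curl -s -o /dev/null -w "%{http_code}\n" \
      -H "Authorization: Bearer YOUR-WAVEFRONT-API-TOKEN" \
      https://try.wavefront.com/api/v2/source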

cAdvisor

cAdvisor is an open source tool for monitoring, analyzing, and exposing Kubernetes container resource usage and performance statistics.

To deploy a cAdvisor container:

  1. Select In-Cluster Monitoring.
  2. Under Deploy cAdvisor, select Yes.
  3. Click Save.

Note: For information about configuring cAdvisor to monitor your running Kubernetes containers, see cAdvisor in the cAdvisor GitHub repository. For general information about Kubernetes cluster monitoring, see Tools for Monitoring Resources in the Kubernetes documentation.

Metric Sink Resources

You can configure PKS-provisioned clusters to send Kubernetes node metrics and pod metrics to metric sinks. For more information about metric sink resources and what to do after you enable them in the tile, see Sink Resources in Monitoring Workers and Workloads.

To enable clusters to send Kubernetes node metrics and pod metrics to metric sinks:

  1. In In-Cluster Monitoring, select Enable Metric Sink Resources. If you enable this checkbox, Enterprise PKS deploys Telegraf as a DaemonSet, which runs a pod on each worker node in all your Kubernetes clusters.
  2. (Optional) To enable Node Exporter to send worker node metrics to metric sinks of kind ClusterMetricSink, select Enable node exporter on workers. If you enable this checkbox, Enterprise PKS deploys Node Exporter as a DaemonSet, which runs a pod on each worker node in all your Kubernetes clusters.

    For instructions on how to create a metric sink of kind ClusterMetricSink for Node Exporter metrics, see Create a ClusterMetricSink Resource for Node Exporter Metrics in Creating and Managing Sink Resources.

  3. Click Save.

Log Sink Resources

You can configure PKS-provisioned clusters to send Kubernetes API events and pod logs to log sinks. For more information about log sink resources and what to do after you enable them in the tile, see Sink Resources in Monitoring Workers and Workloads.

To enable clusters to send Kubernetes API events and pod logs to log sinks:

  1. Select Enable Log Sink Resources. If you enable this checkbox, Enterprise PKS deploys Fluent Bit as a DaemonSet, which runs a pod on each worker node in all your Kubernetes clusters.
  2. Click Save.
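
After you apply changes and create a cluster, you can confirm that the sink components are running. A minimal sketch, assuming kubectl is targeting a PKS-provisioned cluster and that the observability components run in the pks-system namespace (the namespace can differ in your installation):

    # Telegraf (metric sinks), Node Exporter (if enabled), and Fluent Bit (log sinks)
    # each run as a DaemonSet with one pod per worker node.
    kubectl get daemonsets --namespace pks-system
    kubectl get pods --namespace pks-system -o wide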

Tanzu Mission Control (Experimental)

Participants in the VMware Tanzu Mission Control beta program can use the Tanzu Mission Control (Experimental) pane of the Enterprise PKS tile to integrate their Enterprise PKS deployment with Tanzu Mission Control.

Tanzu Mission Control integration lets you monitor and manage Enterprise PKS clusters from the Tanzu Mission Control console, which makes the Tanzu Mission Control console a single point of control for all Kubernetes clusters.

Warning: VMware Tanzu Mission Control is currently experimental beta software and is intended for evaluation and test purposes only. For more information about Tanzu Mission Control, see the VMware Tanzu Mission Control home page.

To integrate Enterprise PKS with Tanzu Mission Control:

  1. Confirm that the PKS API VM has internet access and can connect to cna.tmc.cloud.vmware.com and the other outbound URLs listed in the What Happens When You Attach a Cluster section of the Tanzu Mission Control documentation. For a connectivity check you can run from the PKS API VM, see the example after this procedure.

  2. Navigate to the Enterprise PKS tile > the Tanzu Mission Control (Experimental) pane and select Yes under Tanzu Mission Control Integration.

    Tanzu Mission Control Integration

  3. Configure the fields below:

    • Tanzu Mission Control URL: Enter the Org URL of your Tanzu Mission Control subscription, without a trailing slash (/). For example, YOUR-ORG.tmc.cloud.vmware.com.
    • VMware Cloud Services API token: Enter your API token to authenticate with VMware Cloud Services APIs. You can retrieve this token by logging in to VMware Cloud Services and viewing your account information.
    • Tanzu Mission Control Cluster Group: Enter the name of a Tanzu Mission Control cluster group.

      The name can be default or another value, depending on your role and access policy:

      • Org Member users in VMware cloud services have a service.admin role in Tanzu Mission Control. These users:
        • By default, can create and attach clusters only in the default cluster group.
        • Can create and attach clusters to other cluster groups after an organization.admin user grants them the clustergroup.admin or clustergroup.edit role for those groups.
      • Org Owner users in VMware cloud services have organization.admin permissions in Tanzu Mission Control. These users:
        • Can create cluster groups.
        • Can grant clustergroup roles to service.admin users through the Tanzu Mission Control Access Policy view.

      For more information about role and access policy, see Access Control in the VMware Tanzu Mission Control Product Documentation.

    • Tanzu Mission Control Cluster Name Prefix: Enter a name prefix for identifying the Enterprise PKS clusters in Tanzu Mission Control.

  4. Click Save.

Warning: After the Enterprise PKS tile is deployed with a configured cluster group, the cluster group cannot be updated.

Note: When you upgrade your Kubernetes clusters and have Tanzu Mission Control integration enabled, existing clusters will be attached to Tanzu Mission Control.
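
To check the outbound connectivity described in step 1 of the procedure above, you can run a simple test from a shell on the PKS API VM. A minimal sketch using standard tools:

    # Verify that TCP port 443 on the Tanzu Mission Control endpoint is reachable.
    nc -vz cna.tmc.cloud.vmware.com 443

    # Alternatively, inspect the TLS handshake if nc is not available.
    openssl s_client -connect cna.tmc.cloud.vmware.com:443 </dev/null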

CEIP and Telemetry

To configure VMware’s Customer Experience Improvement Program (CEIP) and the Telemetry Program, do the following:

  1. Click CEIP and Telemetry.
  2. Review the information about the CEIP and Telemetry programs.
  3. To specify your level of participation in the CEIP and Telemetry program, select one of the Participation Level options:
    • None: If you select this option, data is not collected from your Enterprise PKS installation.
    • (Default) Standard: If you select this option, data is collected from your Enterprise PKS installation to improve Enterprise PKS. This participation level is anonymous and does not permit the CEIP and Telemetry program to identify your organization.
    • Enhanced: If you select this option, data is collected from your Enterprise PKS installation to provide you with proactive support and other benefits. This participation level permits the CEIP and Telemetry program to identify your organization. For more information about the CEIP and Telemetry participation levels, see Participation Levels in Telemetry.
  4. If you selected the Enhanced participation level, complete the following:
    • Enter your account number or customer number in the VMware Account Number or Pivotal Customer Number field. If you are a VMware customer, you can find your VMware Account Number in your Account Summary on my.vmware.com. If you started as a Pivotal customer, you can find your Customer Number in your Order Confirmation email.
    • (Optional) Enter a descriptive name for your PKS installation in the PKS Installation Label field. The label you assign to this installation will be used in telemetry reports to identify the environment.
  5. To provide information about the purpose for this installation, select an option in the PKS Installation Type list.
    CEIP and Telemetry installation type
  6. Click Save.

Note: If you join the CEIP and Telemetry Program for Enterprise PKS, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph on port 443.

Note: Even if you select None, Enterprise PKS-provisioned clusters send usage data to the PKS control plane. However, this data is not sent to VMware and remains on your Enterprise PKS installation.

Errands

Errands are scripts that run at designated points during an installation.

To configure which post-deploy and pre-delete errands run for Enterprise PKS:

  1. Make a selection in the dropdown next to each errand.
    Errand configuration pane

    Note: We recommend that you use the default settings for all errands except for the Run smoke tests errand.

  2. Set the PKS 1.7.x Upgrade - MySQL Clone errand to On.

    Warning: Do not disable the PKS 1.7.x Upgrade - MySQL Clone errand. This errand must remain set to Default (On) at all times.

  3. (Optional) Set the Run smoke tests errand to On.

    This errand uses the PKS CLI to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the Enterprise PKS tile is aborted.

  4. (Optional) To ensure that all of your cluster VMs are patched, configure the Upgrade all clusters errand to On.

    Updating the Enterprise PKS tile with a new Linux stemcell and the Upgrade all clusters errand enabled triggers the rolling of every Linux VM in each Kubernetes cluster. Similarly, updating the Enterprise PKS tile with a new Windows stemcell triggers the rolling of every Windows VM in your Kubernetes clusters.

    Warning: To avoid workload downtime, use the resource configuration recommended in About Enterprise PKS Upgrades and Maintaining Workload Uptime.

Resource Config

To modify the resource configuration of Enterprise PKS and specify your PKS API load balancer, follow the steps below:

  1. Select Resource Config.

  2. For each job, review the Automatic values in the following fields:

    • VM TYPE: By default, the PKS Database and PKS API jobs are set to the same Automatic VM type. If you want to adjust this value, we recommend that you select the same VM type for both jobs.

      Note: The Automatic VM TYPE values match the recommended resource configuration for the PKS API and PKS Database jobs.

    • PERSISTENT DISK TYPE: By default, the PKS Database and PKS API jobs are set to the same persistent disk type. If you want to adjust this value, you can change the persistent disk type for each of the jobs independently. Using the same persistent disk type for both jobs is not required.

  3. For the PKS Database job:

    • Leave the LOAD BALANCERS field blank.
    • (Optional) If you do not use a NAT instance, select INTERNET CONNECTED. This allows component instances direct access to the internet.
  4. For the PKS API job:

    • Enter the name of your PKS API load balancer in the LOAD BALANCERS field. The name of your PKS API load balancer is YOUR-ENVIRONMENT-NAME-pks-lb, where YOUR-ENVIRONMENT-NAME is the environment name that you configured during Step 1: Download Templates and Edit Variables File in Deploying Ops Manager on Azure Using Terraform. If needed, you can find your environment name in your terraform.tfvars or terraform.tfstate file (see the example after this procedure).

      Note: After you click Apply Changes for the first time, BOSH assigns the PKS API VM an IP address. BOSH uses the name you provide in the LOAD BALANCERS field to locate your load balancer and then connect the load balancer to the PKS API VM using its new IP address.

    • (Optional) If you do not use a NAT instance, select INTERNET CONNECTED. This allows component instances direct access to the internet.
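
If you are not sure of your environment name, you can recover it from the Terraform variables file that you edited when deploying Ops Manager and derive the load balancer name from it. A minimal sketch, assuming terraform.tfvars is in your current directory and pks-prod is a placeholder environment name:

    # The env_name variable in terraform.tfvars determines the load balancer name.
    grep env_name terraform.tfvars

    # For example, if env_name is "pks-prod", the PKS API load balancer is named:
    ENV_NAME=pks-prod
    echo "${ENV_NAME}-pks-lb"    # prints pks-prod-pks-lb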

Step 3: Apply Changes

  1. Return to the Ops Manager Installation Dashboard.
  2. Click Review Pending Changes. Select the product that you intend to deploy and review the changes. For more information, see Reviewing Pending Product Changes.
  3. Click Apply Changes.

Step 4: Retrieve the PKS API Endpoint

You need to retrieve the PKS API endpoint to allow your organization to use the API to create, update, and delete Kubernetes clusters.

To retrieve the PKS API endpoint, do the following:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the Enterprise PKS tile.
  3. Click the Status tab and locate the PKS API job. The IP address of the PKS API job is the PKS API endpoint.

Step 5: Configure an Azure Load Balancer for the PKS API

Follow the procedures in Configuring an Azure Load Balancer for the PKS API to configure an Azure load balancer for the PKS API.

Step 6: Install the PKS and Kubernetes CLIs

The PKS CLI and the Kubernetes CLI help you interact with your Enterprise PKS-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, see Installing the PKS CLI and Installing the Kubernetes CLI.

Step 7: Configure Authentication for Enterprise PKS

Follow the procedures in Setting Up Enterprise PKS Admin Users on Azure.
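
After you create an admin user, you can confirm the path from the PKS CLI to the PKS API endpoint. A minimal sketch, assuming placeholder credentials and that you saved the PKS API certificate locally as pks-api.crt:

    # Log in to the PKS API as the admin user you created.
    pks login -a api.pks.example.com -u USERNAME -p PASSWORD --ca-cert ./pks-api.crt

    # List clusters to confirm that the API responds. The list is empty on a new installation.
    pks clusters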

Next Steps

After installing Enterprise PKS on Azure, you may want to do one or more of the following:


Please send any feedback you have to pks-feedback@pivotal.io.