Enterprise PKS Architecture
Warning: VMware Enterprise PKS v1.7 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic describes how VMware Enterprise PKS manages the deployment of Kubernetes clusters.
An Enterprise PKS environment consists of a PKS Control Plane and one or more workload clusters.
Enterprise PKS administrators use the PKS Control Plane to deploy and manage Kubernetes clusters. The workload clusters run the apps pushed by developers.
The following illustrates the interaction between Enterprise PKS components:
Administrators access the PKS Control Plane through the PKS Command Line Interface (PKS CLI) installed on their local workstations.
Within the PKS Control Plane, the PKS API and PKS Broker use BOSH to execute the requested cluster management functions. For information about the PKS Control Plane, see PKS Control Plane Overview below. For instructions on installing the PKS CLI, see Installing the PKS CLI.
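As a sketch, authenticating to the PKS API from a workstation might look like the following. The endpoint, user name, password, and certificate path are placeholder values, not real credentials:

```shell
# Placeholder values: substitute your own PKS API endpoint and UAA user.
PKS_API="api.pks.example.com"
PKS_USER="alice"

# Authenticate against the PKS API; UAA validates the credentials.
pks login -a "${PKS_API}" -u "${PKS_USER}" -p "example-password" --ca-cert ./pks-ca.crt

# After login, cluster management commands become available:
pks clusters
```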
Kubernetes deploys and manages workloads on Kubernetes clusters.
Administrators use the Kubernetes CLI, kubectl, to direct Kubernetes from their local workstations. For information about kubectl, see Overview of kubectl in the Kubernetes documentation.
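For example, after obtaining a cluster's credentials, a developer might direct Kubernetes with commands such as the following. The context name and manifest file are placeholders:

```shell
# Placeholder context name; entries in your kubeconfig will differ.
CONTEXT="my-cluster"

# Point kubectl at the cluster, then inspect it and deploy workloads.
kubectl config use-context "${CONTEXT}"
kubectl get nodes                 # list the cluster's nodes
kubectl apply -f workload.yml     # deploy an app manifest to the cluster
```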
The PKS Control Plane manages the lifecycle of Kubernetes clusters deployed using Enterprise PKS.
The control plane provides the following via the PKS API:
- View cluster plans
- Create clusters
- View information about clusters
- Obtain credentials to deploy workloads to clusters
- Scale clusters
- Delete clusters
- Create and manage network profiles for VMware NSX-T
In addition, the PKS Control Plane can upgrade all existing clusters using the Upgrade all clusters BOSH errand. For more information, see Upgrade Kubernetes Clusters in Upgrading Enterprise PKS (Flannel Networking).
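These management functions map onto PKS CLI commands. The following sketch walks through a cluster's lifecycle, using a placeholder cluster name, hostname, and plan:

```shell
CLUSTER="my-cluster"   # placeholder cluster name

pks plans                                  # view available cluster plans
pks create-cluster "${CLUSTER}" \
  --external-hostname "${CLUSTER}.example.com" \
  --plan small                             # create a cluster from a plan
pks cluster "${CLUSTER}"                   # view information about the cluster
pks get-credentials "${CLUSTER}"           # write credentials to kubeconfig
pks resize "${CLUSTER}" --num-nodes 5      # scale the cluster's worker count
pks delete-cluster "${CLUSTER}"            # delete the cluster
```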
The PKS Control Plane is hosted on a pair of VMs:
- The PKS API VM hosts cluster management services.
- The PKS Database VM stores cluster management data.
The PKS API VM hosts the following services:
- User Account and Authentication (UAA)
- PKS API
- PKS Broker
- Billing and Telemetry
The following sections describe UAA, PKS API, and PKS Broker services, the primary services hosted on the PKS API VM.
When a user logs in to or logs out of the PKS API through the PKS CLI, the PKS CLI communicates with UAA to authenticate them. The PKS API permits only authenticated users to manage Kubernetes clusters. For more information about authenticating, see PKS API Authentication.
UAA must be configured with the appropriate users and user permissions. For more information, see Managing Enterprise PKS Users with UAA.
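As a sketch, an operator might grant a user cluster management access with the UAA CLI, uaac. The endpoint, admin client secret, and user details below are placeholders:

```shell
# Placeholder PKS API endpoint; UAA listens on port 8443 of the PKS API host.
UAA_URL="https://api.pks.example.com:8443"

# Target the UAA server and authenticate as the UAA admin client.
uaac target "${UAA_URL}" --ca-cert ./pks-ca.crt
uaac token client get admin -s "example-admin-secret"

# Create a user and grant the pks.clusters.admin scope.
uaac user add alice --emails alice@example.com -p "example-password"
uaac member add pks.clusters.admin alice
```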
Through the PKS CLI, users instruct the PKS API service to deploy, scale up, and delete Kubernetes clusters as well as show cluster details and plans.
The PKS API can also write Kubernetes cluster credentials to a local kubeconfig file, which enables users to connect to a cluster through kubectl.
On AWS, GCP, and vSphere without NSX-T deployments, the PKS CLI communicates with the PKS API within the control plane via the PKS API Load Balancer. On vSphere with NSX-T deployments, the PKS API host is accessible via a DNAT rule. For information about enabling the PKS API on vSphere with NSX-T, see the Share the PKS API Endpoint section in Installing Enterprise PKS on vSphere with NSX-T Integration.
The PKS API sends all cluster management requests, except read-only requests, to the PKS Broker.
When the PKS API receives a request to modify a Kubernetes cluster, it instructs the PKS Broker to make the requested change.
The PKS Broker consists of an On-Demand Service Broker and a Service Adapter. The PKS Broker generates a BOSH manifest and instructs the BOSH Director to deploy or delete the Kubernetes cluster.
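Each cluster the broker deploys appears as an on-demand BOSH deployment named with a `service-instance_` prefix and the cluster's GUID. As an illustrative sketch, an operator could inspect these deployments with the BOSH CLI; the environment alias `pks` and the GUID below are placeholders:

```shell
GUID="aa1234bb-cc56-dd78-ee90-ff1234567890"   # placeholder cluster GUID

# List BOSH deployments; PKS clusters appear as service-instance_GUID entries.
bosh -e pks deployments

# Show the master and worker VMs backing one cluster.
bosh -e pks -d "service-instance_${GUID}" vms
```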
For Enterprise PKS deployments on vSphere with NSX-T, there is an additional component, the Enterprise PKS NSX-T Proxy Broker. The PKS API communicates with the PKS NSX-T Proxy Broker, which in turn communicates with the NSX Manager to provision the Node Networking resources. The PKS NSX-T Proxy Broker then forwards the request to the On-Demand Service Broker to deploy the cluster.
The PKS Database VM hosts MySQL, a proxy, and other data-related services. These services persist PKS Control Plane data for the following:
- PKS API
Enterprise PKS uses Availability Zones (AZs) to provide high availability for Kubernetes cluster workers.
When an operator creates Plans for developers, they assign AZs to the Plans. Assigning multiple AZs to a Plan allows developers to provide high availability for their worker clusters. When a cluster has more than one node, Ops Manager balances those nodes across the Availability Zones assigned to the cluster.
Public-cloud IaaSes such as AWS and Azure provide AZs as part of their service. In vSphere with NSX-T, you define and create AZs using vCenter clusters and resource pools. See Step 4: Create Availability Zones in Configuring BOSH Director with NSX-T for Enterprise PKS for how to create AZs in NSX-T.
For instructions on selecting AZs for your Enterprise PKS Plans, see Plans in Installing Enterprise PKS on vSphere with NSX-T.
For instructions on selecting the AZ for the Enterprise PKS control plane, see Assign AZs and Networks in Installing Enterprise PKS on vSphere with NSX-T.
The Linux nodes in Windows worker-based clusters (beta) can be configured in either standard or high availability mode.
- In standard mode, a single Master/etcd node and a single Linux worker manage a cluster's Windows Kubernetes VMs.
- In high availability mode, multiple Master/etcd nodes and Linux workers manage a cluster's Windows Kubernetes VMs.
The following illustrates the interaction between the Enterprise PKS Management Plane and Windows worker-based Kubernetes clusters:
To configure Enterprise PKS Windows worker-based clusters for high availability, set these fields in the Plan pane as described in Plans in Configuring Windows Worker-Based Kubernetes Clusters (Beta):
- Enable HA Linux workers
- Master/ETCD Node Instances
- Worker Node Instances