Enterprise PKS Cluster Management
Warning: VMware Enterprise PKS v1.6 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic describes how VMware Enterprise PKS manages the deployment of Kubernetes clusters.
Users interact with Enterprise PKS and Enterprise PKS-deployed Kubernetes clusters in two ways:
- Deploying Kubernetes clusters with BOSH and managing their lifecycle. These tasks are performed using the PKS Command Line Interface (PKS CLI) and the PKS control plane.
- Deploying and managing container-based workloads on Kubernetes clusters. These tasks are performed using the Kubernetes CLI, kubectl.
The PKS control plane enables users to deploy and manage Kubernetes clusters.
To communicate with the PKS control plane, Enterprise PKS provides a command line interface, the PKS CLI. See Installing the PKS CLI for installation instructions.
The PKS control plane manages the lifecycle of Kubernetes clusters deployed using Enterprise PKS. The control plane allows users to do the following through the PKS CLI:
- View cluster plans
- Create clusters
- View information about clusters
- Obtain credentials to deploy workloads to clusters
- Scale clusters
- Delete clusters
- Create and manage network profiles for VMware NSX-T
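Each of these operations maps to a PKS CLI command. A brief sketch of a typical cluster lifecycle follows; the cluster name `my-cluster`, the external hostname, and the plan name `small` are placeholders that depend on how your operator configured Enterprise PKS:

```shell
# List the cluster plans the operator has configured
pks plans

# Create a cluster (name, hostname, and plan are placeholders)
pks create-cluster my-cluster \
  --external-hostname my-cluster.example.com \
  --plan small

# View information about one cluster, or all clusters
pks cluster my-cluster
pks clusters

# Scale the cluster to five worker nodes
pks resize my-cluster --num-nodes 5

# Delete the cluster
pks delete-cluster my-cluster
```

Cluster creation and resizing are asynchronous: the command returns while BOSH performs the deployment, and `pks cluster my-cluster` reports the last operation's status.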
In addition, the PKS control plane can upgrade all existing clusters using the Upgrade all clusters BOSH errand. For more information, see Upgrade Kubernetes Clusters in Upgrading Enterprise PKS.
The PKS control plane is deployed on a single VM that includes the following components:
- The PKS API server
- The PKS Broker
- A User Account and Authentication (UAA) server
The following illustration shows how these components interact:
The PKS API Load Balancer is used for AWS, GCP, and vSphere without NSX-T deployments. If Enterprise PKS is deployed on vSphere with NSX-T, a DNAT rule is configured for the PKS API host so that it is accessible. For more information, see the Share the PKS API Endpoint section in Installing Enterprise PKS on vSphere with NSX-T Integration.
When a user logs in to or logs out of the PKS API through the PKS CLI, the PKS CLI communicates with UAA to authenticate them. The PKS API permits only authenticated users to manage Kubernetes clusters. For more information about authenticating, see PKS API Authentication.
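For example, a user authenticates against UAA through the PKS CLI before issuing any cluster management commands. The API endpoint, username, and certificate path below are placeholders:

```shell
# Log in to the PKS API; the PKS CLI authenticates the user with UAA
# (endpoint, username, and certificate path are placeholders)
pks login -a api.pks.example.com -u alana -p my-password --ca-cert CERT-PATH

# ... manage clusters while the session token is valid ...

# End the session
pks logout
```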
UAA must be configured with the appropriate users and user permissions. For more information, see Managing Enterprise PKS Users with UAA.
Through the PKS CLI, users instruct the PKS API server to deploy, scale, and delete Kubernetes clusters, as well as show cluster details and plans.
The PKS API can also write Kubernetes cluster credentials to a local kubeconfig file, which enables users to connect to a cluster through kubectl.
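As a sketch, retrieving credentials for a cluster and then targeting it with kubectl might look like the following; the cluster name is a placeholder:

```shell
# Fetch cluster credentials from the PKS API and write them
# into the local kubeconfig (~/.kube/config by default)
pks get-credentials my-cluster

# kubectl is now targeted at the cluster
kubectl config current-context
kubectl get nodes
```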
The PKS API sends all cluster management requests, except read-only requests, to the PKS Broker.
When the PKS API receives a request to modify a Kubernetes cluster, it instructs the PKS Broker to make the requested change.
The PKS Broker consists of an On-Demand Service Broker and a Service Adapter. The PKS Broker generates a BOSH manifest and instructs the BOSH Director to deploy or delete the Kubernetes cluster.
For Enterprise PKS deployments on vSphere with NSX-T, there is an additional component, the Enterprise PKS NSX-T Proxy Broker. The PKS API communicates with the PKS NSX-T Proxy Broker, which in turn communicates with the NSX Manager to provision the Node Networking resources. The PKS NSX-T Proxy Broker then forwards the request to the On-Demand Service Broker to deploy the cluster.
Enterprise PKS users manage their container-based workloads on Kubernetes clusters through kubectl. For more information about kubectl, see Overview of kubectl in the Kubernetes documentation.
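For example, once kubectl is targeting an Enterprise PKS-deployed cluster, deploying and exposing a minimal workload looks like any other Kubernetes cluster; the deployment name and image here are illustrative:

```shell
# Deploy a workload and expose it through a load balancer
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Inspect the workload
kubectl get pods
kubectl get services
```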
Please send any feedback you have to email@example.com.