Tanzu Kubernetes Grid Integrated Edition Architecture
Note: As of v1.8, Enterprise PKS has been renamed to VMware Tanzu Kubernetes Grid Integrated Edition. Some screenshots in this documentation do not yet reflect the change.
This topic describes how VMware Tanzu Kubernetes Grid Integrated Edition manages the deployment of Kubernetes clusters.
A Tanzu Kubernetes Grid Integrated Edition environment consists of a TKGI Control Plane and one or more workload clusters.
Tanzu Kubernetes Grid Integrated Edition administrators use the TKGI Control Plane to deploy and manage Kubernetes clusters. The workload clusters run the apps pushed by developers.
The following illustrates the interaction between Tanzu Kubernetes Grid Integrated Edition components:
Administrators access the TKGI Control Plane through the TKGI Command Line Interface (TKGI CLI) installed on their local workstations.
Within the TKGI Control Plane, the TKGI API and TKGI Broker use BOSH to execute the requested cluster management functions. For information about the TKGI Control Plane, see TKGI Control Plane Overview below. For instructions on installing the TKGI CLI, see Installing the TKGI CLI.
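For example, an administrator might start a session by authenticating against the TKGI API and listing existing clusters. The hostname, credentials, and certificate path below are placeholders:

```
# Log in to the TKGI API endpoint (values are illustrative)
tkgi login -a api.tkgi.example.com -u admin -p 'PASSWORD' \
    --ca-cert /path/to/cert.pem

# Verify connectivity by listing clusters the user can access
tkgi clusters
```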
Kubernetes deploys and manages workloads on Kubernetes clusters.
Administrators use the Kubernetes CLI, kubectl, to direct Kubernetes from their local workstations. For information about kubectl, see Overview of kubectl in the Kubernetes documentation.
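A typical kubectl session against a deployed workload cluster might look like the following. The context name is illustrative and depends on the cluster name chosen at creation time:

```
# Select the context for the target cluster (name is illustrative)
kubectl config use-context my-cluster

# Inspect the cluster's nodes and running workloads
kubectl get nodes
kubectl get pods --all-namespaces
```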
The TKGI Control Plane manages the lifecycle of Kubernetes clusters deployed using Tanzu Kubernetes Grid Integrated Edition.
The control plane provides the following via the TKGI API:
- View cluster plans
- Create clusters
- View information about clusters
- Obtain credentials to deploy workloads to clusters
- Scale clusters
- Delete clusters
- Create and manage network profiles for VMware NSX-T
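Most of these operations map directly to TKGI CLI commands. The cluster name, hostname, and plan name below are placeholders; see the TKGI CLI documentation for the full set of flags:

```
tkgi plans                              # view available cluster plans
tkgi create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan small --num-nodes 3          # create a cluster
tkgi cluster my-cluster                 # view information about a cluster
tkgi get-credentials my-cluster         # obtain credentials for kubectl
tkgi resize my-cluster --num-nodes 5    # scale a cluster's worker nodes
tkgi delete-cluster my-cluster          # delete a cluster
```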
In addition, the TKGI Control Plane can upgrade all existing clusters using the Upgrade all clusters BOSH errand. For more information, see Upgrade Kubernetes Clusters in Upgrading Tanzu Kubernetes Grid Integrated Edition (Flannel Networking).
The TKGI Control Plane is hosted on a pair of VMs:
- The TKGI API VM hosts cluster management services.
- The TKGI Database VM stores cluster management data.
The TKGI API VM hosts the following services:
- User Account and Authentication (UAA)
- TKGI API
- TKGI Broker
- Billing and Telemetry
The following sections describe UAA, TKGI API, and TKGI Broker services, the primary services hosted on the TKGI API VM.
When a user logs in to or logs out of the TKGI API through the TKGI CLI, the TKGI CLI communicates with UAA to authenticate them. The TKGI API permits only authenticated users to manage Kubernetes clusters. For more information about authenticating, see TKGI API Authentication.
UAA must be configured with the appropriate users and user permissions. For more information, see Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA.
Through the TKGI CLI, users instruct the TKGI API service to deploy, scale up, and delete Kubernetes clusters as well as show cluster details and plans.
The TKGI API can also write Kubernetes cluster credentials to a local kubeconfig file, which enables users to connect to a cluster through kubectl.
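In practice, fetching credentials and connecting looks like the following. The cluster name is a placeholder, and by default the credentials are merged into the kubeconfig at `~/.kube/config`:

```
# Write the cluster's credentials into the local kubeconfig
tkgi get-credentials my-cluster

# The cluster name becomes a kubectl context; switch to it and connect
kubectl config use-context my-cluster
kubectl get nodes
```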
On AWS, GCP, and vSphere without NSX-T deployments, the TKGI CLI communicates with the TKGI API within the control plane via the TKGI API Load Balancer. On vSphere with NSX-T deployments, the TKGI API host is accessible via a DNAT rule. For information about enabling the TKGI API on vSphere with NSX-T, see the Share the TKGI API Endpoint section in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Integration.
The TKGI API sends all cluster management requests, except read-only requests, to the TKGI Broker.
When the TKGI API receives a request to modify a Kubernetes cluster, it instructs the TKGI Broker to make the requested change.
The TKGI Broker consists of an On-Demand Service Broker and a Service Adapter. The TKGI Broker generates a BOSH manifest and instructs the BOSH Director to deploy or delete the Kubernetes cluster.
For Tanzu Kubernetes Grid Integrated Edition deployments on vSphere with NSX-T, there is an additional component, the Tanzu Kubernetes Grid Integrated Edition NSX-T Proxy Broker. The TKGI API communicates with the TKGI NSX-T Proxy Broker, which in turn communicates with the NSX Manager to provision the Node Networking resources. The TKGI NSX-T Proxy Broker then forwards the request to the On-Demand Service Broker to deploy the cluster.
The TKGI Database VM hosts MySQL, proxy, and other data-related services. These data-related functions persist TKGI Control Plane data for the following services:
- TKGI API
Tanzu Kubernetes Grid Integrated Edition uses Availability Zones (AZs) to provide high availability for Kubernetes cluster workers.
When an operator creates Plans for developers, they assign AZs to the Plans. Assigning multiple AZs to a Plan allows developers to provide high availability for their clusters' workers. When a cluster has more than one node, Ops Manager balances those nodes across the Availability Zones assigned to the cluster.
Public-cloud IaaSes such as AWS and Azure provide AZs as part of their service. In vSphere with NSX-T, you define and create AZs using vCenter clusters and resource pools. See Step 4: Create Availability Zones in Configuring BOSH Director with NSX-T for Tanzu Kubernetes Grid Integrated Edition for how to create AZs in NSX-T.
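Under the hood, BOSH models AZs in its cloud config; on vSphere, each AZ maps to a vCenter cluster and, optionally, a resource pool. The following is a simplified sketch of what such an AZ definition can look like, with all names illustrative:

```
azs:
- name: az-1
  cloud_properties:
    datacenters:
    - name: dc-1
      clusters:
      - vsphere-cluster-1:
          resource_pool: tkgi-az-1
```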
For instructions on selecting AZs for your Tanzu Kubernetes Grid Integrated Edition Plans, see Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T.
For instructions on selecting the AZ for the Tanzu Kubernetes Grid Integrated Edition control plane, see Assign AZs and Networks in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T.
The Linux nodes of Windows worker-based clusters (beta) can be configured in either standard or high availability mode.
- In standard mode, a single Master/etcd node and a single Linux worker manage a cluster’s Windows Kubernetes VMs.
- In high availability mode, multiple Master/etcd and Linux worker nodes manage a cluster's Windows Kubernetes VMs.
The following illustrates the interaction between the Tanzu Kubernetes Grid Integrated Edition Management Plane and Windows worker-based Kubernetes clusters:
To configure Tanzu Kubernetes Grid Integrated Edition Windows worker-based clusters for high availability, set these fields in the Plan pane as described in Plans in Configuring Windows Worker-Based Kubernetes Clusters (Beta):
- Enable HA Linux workers
- Master/ETCD Node Instances
- Worker Node Instances