
Hardware Requirements for PKS on vSphere with NSX-T

This topic provides hardware requirements for production deployments of Pivotal Container Service (PKS) on vSphere with NSX-T.

vSphere Clusters for PKS

A vSphere cluster is a collection of ESXi hosts and associated virtual machines (VMs) with shared resources and a shared management interface. Installing PKS on vSphere with NSX-T requires the following vSphere clusters, described in the sections below: the PKS Management Cluster, the PKS Edge Cluster, and the PKS Compute Cluster.

For more information on creating vSphere clusters, see Creating Clusters in the vSphere documentation.
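If you script your vSphere environment, the following minimal pyVmomi sketch shows one way to create empty clusters for PKS. The vCenter address, credentials, and cluster names are placeholders for illustration, not prescribed values; adjust them to your environment.

```python
# Minimal pyVmomi sketch: create empty vSphere clusters for PKS.
# All hostnames, credentials, and cluster names are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Use the first datacenter found under the root folder.
dc = next(e for e in content.rootFolder.childEntity if isinstance(e, vim.Datacenter))

for name in ("PKS-Management", "PKS-Edge", "PKS-Compute"):
    dc.hostFolder.CreateClusterEx(name=name, spec=vim.cluster.ConfigSpecEx())
```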

PKS Management Cluster

The PKS Management Cluster on vSphere comprises the following components:

  • vCenter Server
  • NSX-T Manager
  • NSX-T Controller (quantity 3)

For more information, see Deploying NSX-T for PKS.

PKS Edge Cluster

The PKS Edge Cluster on vSphere comprises two or more NSX-T Edge Nodes in active/standby mode. The minimum number of Edge Nodes per Edge Cluster is two; the maximum is 10. PKS supports running Edge Node pairs in active/standby mode only.

For more information, see Deploying NSX-T for PKS.

PKS Compute Cluster

The PKS Compute Cluster on vSphere comprises the following components:

  • Kubernetes master nodes (quantity 3)
  • Kubernetes worker nodes

For more information, see Installing PKS on vSphere with NSX-T.

PKS Management Plane Placement Considerations

The PKS Management Plane comprises the following components:

  • Pivotal Ops Manager
  • Pivotal BOSH Director
  • PKS Control Plane
  • VMware Harbor Registry

Depending on your design choice, PKS management components can be deployed in the PKS Management Cluster on the standard vSphere network or in the PKS Compute Cluster on the NSX-T-defined virtual network. For more information, see NSX-T Deployment Topologies for PKS.

Configuration Requirements for vSphere Clusters for PKS

For each vSphere cluster defined for PKS, the following configurations are required to support production workloads:

  • The vSphere Distributed Resource Scheduler (DRS) is enabled. For more information, see Creating a DRS Cluster in the vSphere documentation.

  • The DRS custom automation level is set to Partially Automated or Fully Automated. For more information, see Set a Custom Automation Level for a Virtual Machine in the vSphere documentation.

  • vSphere high-availability (HA) is enabled. For more information, see Creating and Using vSphere HA Clusters in the vSphere documentation.

  • vSphere HA Admission Control (AC) is configured to support one ESXi host failure. For more information, see Configure Admission Control in the vSphere documentation.

    Specifically:

    • Host failure: Restart VMs
    • Admission Control: Host failures cluster tolerates = 1
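The settings listed above can also be applied programmatically. The sketch below is a minimal, hedged pyVmomi example; the vCenter address, credentials, and cluster name are placeholders, and production code should validate certificates and wait for the returned task to complete.

```python
# Minimal pyVmomi sketch: enable DRS, vSphere HA, and admission control that
# tolerates one ESXi host failure on an existing cluster.
# Hostnames, credentials, and the cluster name are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "PKS-Compute-AZ1")

spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(
    enabled=True, defaultVmBehavior="fullyAutomated")   # or "partiallyAutomated"
spec.dasConfig = vim.cluster.DasConfigInfo(
    enabled=True,                                        # vSphere HA
    hostMonitoring="enabled",                            # restart VMs on host failure
    admissionControlEnabled=True,
    admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
        failoverLevel=1))                                # host failures tolerated = 1

cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```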

RPD for PKS on vSphere with NSX-T

The recommended production deployment (RPD) topology represents the VMware-recommended configuration to run production workloads in PKS on vSphere with NSX-T.

Note: The RPD differs depending on whether you are using vSAN or not.

RPD for PKS with vSAN

The RPD for PKS with vSAN storage requires 12 ESXi hosts. The diagram below shows the topology for this deployment.

RPD for PKS with vSAN

The following subsections describe configuration details for the RPD with vSAN topology.

Management/Edge Cluster

The RPD with vSAN topology includes a Management/Edge Cluster with the following characteristics:

  • Collapsed Management/Edge Cluster with three ESXi hosts.
  • Each ESXi host runs one NSX-T Controller. The NSX-T Control Plane has three NSX-T Controllers total.
  • Two NSX-T Edge Nodes are deployed across two different ESXi hosts.

Compute Clusters

The RPD with vSAN topology includes three Compute Clusters with the following characteristics:

  • Each Compute cluster has three ESXi hosts and is bound to a distinct availability zone (AZ) defined in BOSH Director.
    • Compute cluster1 (AZ1) with three ESXi hosts.
    • Compute cluster2 (AZ2) with three ESXi hosts.
    • Compute cluster3 (AZ3) with three ESXi hosts.
  • Each Compute cluster runs one instance of a PKS-provisioned Kubernetes cluster with three master nodes per cluster and a per-plan number of worker nodes.

Storage (vSAN)

The RPD with vSAN topology requires the following storage configuration:

  • Each Compute Cluster is backed by a vSAN datastore.
  • An external shared datastore (for example, NFS or iSCSI) must be provided to store Kubernetes pod persistent volumes (PVs).
  • Three ESXi hosts are required per Compute cluster because of the vSAN cluster requirements. For data protection, vSAN creates two copies of the data and requires one witness.

For more information on using vSAN with PKS, see PersistentVolume Storage Options on vSphere.

Future Growth

The RPD with vSAN topology can be scaled as follows to accommodate future growth requirements:

  • The collapsed Management/Edge Cluster can be expanded to include up to 64 ESXi hosts.
  • Each Compute Cluster can be expanded to include up to 64 ESXi hosts.

RPD for PKS without vSAN

The RPD for PKS without vSAN storage requires nine ESXi hosts. The diagram below shows the topology for this deployment.

RPD for PKS without vSAN

The following subsections describe configuration details for the RPD of PKS without vSAN.

Management/Edge Cluster

The RPD without vSAN includes a Management/Edge Cluster with the following characteristics:

  • Collapsed Management/Edge Cluster with three ESXi hosts.
  • Each ESXi host runs one NSX-T Controller. The NSX-T Control Plane has three NSX-T Controllers total.
  • Two NSX-T Edge Nodes are deployed across two different ESXi hosts.

Compute Clusters

The RPD without vSAN topology includes three Compute Clusters with the following characteristics:

  • Each Compute cluster has two ESXi hosts and is bound to a distinct availability zone (AZ) defined in BOSH Director.
    • Compute cluster1 (AZ1) with two ESXi hosts.
    • Compute cluster2 (AZ2) with two ESXi hosts.
    • Compute cluster3 (AZ3) with two ESXi hosts.
  • Each Compute cluster runs one instance of a PKS-provisioned Kubernetes cluster with three master nodes per cluster and a per-plan number of worker nodes.

Storage (non-vSAN)

The RPD without vSAN topology requires the following storage configuration:

  • All Compute Clusters are connected to the same shared datastore, which is used for persistent VM disks for PKS components and persistent volumes (PVs) for Kubernetes pods.
  • All datastores can be collapsed into a single datastore, if needed.
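Shared datastore visibility can be verified through the vSphere API. The following pyVmomi sketch reports which datastores are visible to every host in the compute clusters; the connection details and the "PKS-Compute" name prefix are assumptions for illustration.

```python
# Sketch: confirm every host in the PKS compute clusters can see a common shared
# datastore. Connection details and the "PKS-Compute" prefix are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
common = None
for cluster in view.view:
    if not cluster.name.startswith("PKS-Compute"):
        continue
    for host in cluster.host:
        visible = {ds.name for ds in host.datastore}
        common = visible if common is None else common & visible

print("Datastores visible to every compute host:", sorted(common or []))
```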

Future Growth

The RPD without vSAN topology can be scaled as follows to accommodate future growth requirements:

  • The collapsed Management/Edge Cluster can be expanded to include up to 64 ESXi hosts.
  • Each Compute Cluster can be expanded to include up to 64 ESXi hosts.

MPD for PKS on vSphere with NSX-T

The minimum production deployment (MPD) topology represents the baseline requirements for running PKS on vSphere with NSX-T.

Note: The MPD topology for PKS applies to both vSAN and non-vSAN environments.

The diagram below shows the topology for this deployment.

MPD for PKS

The following subsections describe configuration details for an MPD of PKS.

MPD Topology

The MPD topology for PKS requires the following minimum configuration:

  • A single collapsed Management/Edge/Compute cluster with three ESXi hosts in total.
  • Each ESXi host runs one NSX-T Controller. The NSX-T Control Plane has three NSX-T Controllers in total.
  • Each ESXi host runs one Kubernetes master node. Each Kubernetes cluster has three master nodes in total.
  • Two NSX-T edge nodes are deployed across two different ESXi hosts.
  • The shared datastore (NFS or iSCSI, for instance) or vSAN datastore is used for persistent VM disks for PKS components and Persistent Volumes (PVs) for Kubernetes pods.
  • The collapsed Management/Edge/Compute cluster can be expanded to include up to 64 ESXi hosts.

Note: For an MPD deployment, each ESXi host must have four physical network interface controllers (PNICs). In addition, while a PKS deployment requires a minimum of three nodes, PKS upgrades require four ESXi hosts to ensure full survivability of the NSX Manager appliance.
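Physical NIC counts can be checked through the vSphere API. The sketch below is a hedged pyVmomi example that reports the number of physical NICs on each ESXi host; the connection details are placeholders.

```python
# Sketch: report the number of physical NICs (PNICs) on each ESXi host, to check
# the four-PNIC requirement noted above. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    pnics = host.config.network.pnic            # physical NICs reported by the host
    status = "OK" if len(pnics) >= 4 else "NEEDS ATTENTION"
    print(f"{host.name}: {len(pnics)} PNICs ({status})")
```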

MPD Configuration Requirements

When configuring vSphere for an MPD topology for PKS, keep in mind the following requirements:

  • When deploying the NSX-T Controller to each ESXi host, create a vSphere distributed resource scheduler (DRS) anti-affinity rule of type “separate virtual machines” for each of the three NSX-T Controllers.
  • When deploying the NSX-T Edge Nodes across two different ESXi hosts, create a DRS anti-affinity rule of type “separate virtual machines” for both Edge Node VMs.
  • After deploying the Kubernetes cluster, you must manually make sure each master node is deployed to a different ESXi host by tuning the DRS anti-affinity rule of type “separate virtual machines.”

For more information on defining DRS anti-affinity rules, see Virtual Machine Storage DRS Rules in the vSphere documentation.
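For reference, the following pyVmomi sketch creates a "separate virtual machines" (anti-affinity) rule for the NSX-T Controller VMs. The cluster name, connection details, and the "nsx-controller" VM name prefix are assumptions for illustration; the same pattern applies to the Edge Node pair and to the Kubernetes master nodes.

```python
# Sketch: create a DRS anti-affinity ("separate virtual machines") rule for the
# NSX-T Controller VMs. Names, credentials, and the "nsx-controller" VM prefix
# are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "PKS-Collapsed")

controllers = [vm for host in cluster.host for vm in host.vm
               if vm.name.startswith("nsx-controller")]   # assumed VM naming

rule = vim.cluster.AntiAffinityRuleSpec(
    name="nsxt-controller-separation", enabled=True, vm=controllers)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```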

MPD Considerations

When planning an MPD topology for PKS, keep in mind the following:

  • Leverage vSphere resource pools to allocate proper hardware resources for the PKS Management Plane components and tune reservation and resource limits accordingly.
  • There is no fault tolerance for the Kubernetes cluster because PKS Availability Zones are not fully leveraged with this topology.
  • At a minimum, the PKS AZ should be mapped to a vSphere Resource Pool.

For more information, see Creating the PKS Management Plane and Creating the PKS Compute Plane.
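As one illustration of the resource pool recommendation above, the sketch below creates a child resource pool with CPU and memory reservations under the collapsed cluster. The reservation values, pool name, and connection details are placeholders to be tuned to your hardware.

```python
# Sketch: create a resource pool with reservations for the PKS Management Plane
# VMs. Reservation values, names, and credentials are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "PKS-Collapsed")

spec = vim.ResourceConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(
        reservation=16000, limit=-1, expandableReservation=True,   # MHz
        shares=vim.SharesInfo(level="normal", shares=0)),
    memoryAllocation=vim.ResourceAllocationInfo(
        reservation=65536, limit=-1, expandableReservation=True,   # MB (64 GB)
        shares=vim.SharesInfo(level="normal", shares=0)))

mgmt_pool = cluster.resourcePool.CreateResourcePool(name="pks-management", spec=spec)
```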

VM Inventory and Sizes

The following tables list the VMs and their sizes for deployments of PKS on vSphere with NSX-T.

Control Plane VMs and Sizes

The following table lists the resource requirements for each VM in the PKS infrastructure and control plane.

VM                 | CPU | Memory (GB) | Disk Space (GB)
vCenter Appliance  | 4   | 16          | 290
NSX-T Manager      | 4   | 16          | 140
NSX-T Controller 1 | 4   | 16          | 120
NSX-T Controller 2 | 4   | 16          | 120
NSX-T Controller 3 | 4   | 16          | 120
Ops Manager        | 1   | 8           | 160
BOSH Director      | 2   | 8           | 103
PKS Control Plane  | 2   | 8           | 29 *
Harbor Registry    | 2   | 8           | 167
TOTAL              | 27  | 112         | 1.25 TB

Storage Requirements for Large Numbers of Pods

If you expect the cluster workload to run a large number of pods continuously, increase the size of the persistent disk storage allocated to the Pivotal Container Service VM as follows:

Number of Pods | Storage (Persistent Disk) Requirement *
1,000 pods     | 20 GB
5,000 pods     | 100 GB
10,000 pods    | 200 GB
50,000 pods    | 1,000 GB

NSX-T Edge Node VMs and Sizes

The following table lists the resource requirements for each VM in the Edge Cluster.

VM                | CPU (Intel CPU only) | Memory (GB) | Disk Space (GB)
NSX-T Edge Node 1 | 8                    | 16          | 120
NSX-T Edge Node 2 | 8                    | 16          | 120
TOTAL             | 16                   | 32          | 0.25 TB

Note: NSX-T Edge Nodes must be deployed on Intel-based hardware.

Kubernetes Cluster Nodes VMs and Sizes

The following table lists sizing information for Kubernetes cluster node VMs. The size and resource consumption of these VMs are configurable in the Plans section of the PKS tile.

VM           | CPU     | Memory (GB) | Ephemeral Disk Space | Persistent Disk Space
Master Nodes | 1 to 16 | 1 to 64     | 8 to 256 GB          | 1 GB to 32 TB
Worker Nodes | 1 to 16 | 1 to 64     | 8 to 256 GB          | 1 GB to 32 TB

For illustrative purposes, the following table shows sizing information for two example Kubernetes clusters. Each cluster has three master nodes and five worker nodes.

VM                      | CPU per Node | Memory (GB) per Node | Ephemeral Disk Space per Node | Persistent Disk Space per Node
Master Nodes (6 total)  | 2            | 8                    | 64 GB                         | 128 GB
Worker Nodes (10 total) | 4            | 16                   | 64 GB                         | 256 GB
TOTAL                   | 52           | 208                  | 1.0 TB                        | 3.4 TB
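The totals follow from multiplying the per-node figures by the node counts. A quick back-of-the-envelope check, using the example values above:

```python
# Back-of-the-envelope check of the example cluster totals above.
masters, workers = 6, 10                        # 2 clusters x (3 masters + 5 workers)
cpu       = masters * 2   + workers * 4         # 52 vCPU
memory    = masters * 8   + workers * 16        # 208 GB
ephemeral = (masters + workers) * 64            # 1,024 GB (~1.0 TB)
persist   = masters * 128 + workers * 256       # 3,328 GB
print(cpu, memory, ephemeral, persist)
```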

Hardware Requirements

The following tables list the hardware requirements for the RPD and MPD topologies for PKS on vSphere with NSX-T.

RPD Hardware Requirements

The following table lists the hardware requirements for the RPD with vSAN topology.

Cluster                 | Number of Hosts | Total Cores per Host (with HT) | Memory per Host (GB) | NICs per Host | Shared Datastore
Management/Edge Cluster | 3               | 16                             | 98                   | 2x 10GbE      | 1.5 TB
Compute cluster1 (AZ1)  | 3               | 6                              | 48                   | 2x 10GbE      | 192 GB
Compute cluster2 (AZ2)  | 3               | 6                              | 48                   | 2x 10GbE      | 192 GB
Compute cluster3 (AZ3)  | 3               | 6                              | 48                   | 2x 10GbE      | 192 GB

Note: The Total Cores per Host values assume the use of hyper-threading (HT).

The following table lists the hardware requirements for the RPD without vSAN topology.

Cluster                 | Number of Hosts | Total Cores per Host (with HT) | Memory per Host (GB) | NICs per Host | Shared Datastore
Management/Edge Cluster | 3               | 16                             | 98                   | 2x 10GbE      | 1.5 TB
Compute cluster1 (AZ1)  | 2               | 10                             | 70                   | 2x 10GbE      | 192 GB
Compute cluster2 (AZ2)  | 2               | 10                             | 70                   | 2x 10GbE      | 192 GB
Compute cluster3 (AZ3)  | 2               | 10                             | 70                   | 2x 10GbE      | 192 GB

Note: The Total Cores per Host values assume the use of hyper-threading (HT).

MPD Hardware Requirements

The following table lists the hardware requirements for the MPD topology with a single (collapsed) cluster for all Management, Edge, and Compute nodes.

Cluster           | Number of Hosts | Total Cores per Host (with HT) | Memory per Host (GB) | NICs per Host | Shared Datastore
Collapsed Cluster | 3               | 32                             | 236                  | 2x 10GbE      | 5.9 TB

Adding Hardware Capacity

To add hardware capacity to your PKS environment on vSphere, do the following:

1. Add one or more ESXi hosts to the vSphere compute cluster. For more information, see the VMware vSphere documentation.
2. Prepare each newly added ESXi host so that it becomes an ESXi transport node for NSX-T. For more information, see Prepare ESXi Servers for the PKS Compute Cluster.
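If you automate step 1, a minimal pyVmomi sketch follows. The host name, credentials, and cluster name are placeholders; production code should supply the host's SSL thumbprint if vCenter requires verification, and step 2 is then completed in NSX-T Manager as described in the linked documentation.

```python
# Sketch for step 1: add a new ESXi host to an existing PKS compute cluster.
# Hostnames, credentials, and the cluster name are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "PKS-Compute-AZ1")

connect_spec = vim.host.ConnectSpec(
    hostName="esxi-new.example.com",
    userName="root",
    password="password",
    force=False)          # supply sslThumbprint here if vCenter requires verification

cluster.AddHost_Task(spec=connect_spec, asConnected=True)
# Step 2 (preparing the host as an NSX-T transport node) is performed in NSX-T
# Manager; see "Prepare ESXi Servers for the PKS Compute Cluster."
```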

