Preparing to Deploy PKS on vSphere with NSX-T
Before you install PKS on vSphere with NSX-T integration, you must prepare your NSX-T environment. Complete all of the steps listed in the order presented to manually create the NSX-T environment for PKS.
- vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
- The BOSH Director VM must be able to communicate with vCenter and the NSX Manager.
- The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
- Each PKS-provisioned Kubernetes cluster deploys the NSX-T Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
In addition, the NSX-T Container Plugin (NCP) runs as a BOSH-managed process on the Kubernetes master node. In a multi-master PKS deployment, the NCP process runs on all master nodes. However, the process is active only on one master node. If the NCP process on an active master is unresponsive, BOSH activates another NCP process. Refer to the NCP documentation for more information.
Before you install PKS on vSphere with NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.
Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation.
VTEP CIDRs: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs.
PKS MANAGEMENT CIDR: This small network is used to access PKS management components such as Ops Manager, BOSH Director, the PKS Service VM, and the Harbor Registry VM (if deployed). For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the No-NAT deployment topologies, this is a corporate routable /28 subnet. For the NAT deployment topology, this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the PKS management components.
PKS LB CIDR: This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Creating NSX-T Objects for PKS, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the PKS tile.
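The subnet sizing above can be sanity-checked with Python's standard ipaddress module. This is just local arithmetic for planning purposes, not part of the NSX-T configuration:

```python
import ipaddress

def usable_hosts(cidr):
    """Usable IPs: total addresses minus the network and broadcast addresses."""
    return ipaddress.ip_network(cidr).num_addresses - 2

print(usable_hosts("192.168.1.0/24"))  # VTEP example: 254
print(usable_hosts("10.172.1.0/28"))   # PKS management example: 14
print(usable_hosts("10.172.2.0/24"))   # PKS LB example: 254
```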
When you install PKS on NSX-T, you are required to specify the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the PKS tile. These IDs map to the two IP blocks you must configure in NSX-T: the Pods IP Block for Kubernetes pods, and the Node IP Block for Kubernetes nodes (VMs). For more information, see the Networking section of Installing PKS on vSphere with NSX-T Integration.
Each time a Kubernetes namespace is created, a subnet from the Pods IP Block is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace. When a Kubernetes cluster is deployed by PKS, 3 namespaces are created by default. Operators often create additional namespaces to facilitate cluster use. As a result, when creating the Pods IP Block, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see Create NSX Network Objects below.
Note: By default, Pods IP Block is a block of non-routable, private IP addresses. After you deploy PKS, you can define a network profile that specifies a routable IP block for your pods. The routable IP block overrides the default non-routable Pods IP Block when a Kubernetes cluster is deployed using that network profile. For more information, see Routable Pods in Using Network Profiles (NSX-T Only).
Each Kubernetes cluster deployed by PKS owns a /24 subnet. To deploy multiple Kubernetes clusters, configure a Nodes IP Block larger than /24 and enter its ID in the Networking pane of the PKS tile. The recommended size is /16. For more information, see Create NSX Network Objects below.
Note: You can use a smaller nodes block size for no-NAT environments with a limited number of routable subnets. For example, /20 allows up to 16 Kubernetes clusters to be created.
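The recommended /16 size follows from counting how many /24 subnets NSX-T can carve from a block. The sketch below illustrates the arithmetic; the block addresses are placeholders, not required values:

```python
import ipaddress

def subnet_count(block, new_prefix=24):
    """Number of /24 subnets that could be allocated from the given IP block."""
    return len(list(ipaddress.ip_network(block).subnets(new_prefix=new_prefix)))

print(subnet_count("172.16.0.0/16"))  # /16 block: 256 /24 subnets
print(subnet_count("10.0.0.0/20"))    # /20 nodes block: 16 /24 subnets, so 16 clusters
```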
The PKS Management Plane must not use the 172.17.0.0/16 subnet. This restriction applies to all virtual machines (VMs) deployed during the PKS installation process, including the PKS control plane, Ops Manager, BOSH Director, and Harbor Registry.
In addition, do not use any of the IP blocks listed below for Kubernetes master or worker node VMs, or for Kubernetes pods. If you create Kubernetes clusters with any of the blocks listed below, the Kubernetes worker nodes cannot reach Harbor or internal Kubernetes services.
The Docker daemon on the Kubernetes worker node uses the subnet in the following CIDR range. Do not use IP addresses in the following CIDR range:
If PKS is deployed with Harbor, Harbor uses the following CIDR ranges for its internal Docker bridges. Do not use IP addresses in the following CIDR range:
Each Kubernetes cluster uses the following subnet for Kubernetes services. Do not use the following IP block for the Nodes IP Block:
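A quick overlap check can confirm that a candidate Nodes or Pods IP Block avoids a reserved range. The candidate blocks below are placeholders; 172.17.0.0/16 is the restricted range noted above:

```python
import ipaddress

def overlaps(candidate, reserved):
    """True if the candidate IP block overlaps the reserved CIDR range."""
    return ipaddress.ip_network(candidate).overlaps(ipaddress.ip_network(reserved))

reserved_range = "172.17.0.0/16"
print(overlaps("172.17.10.0/24", reserved_range))  # True: do not use this block
print(overlaps("172.16.0.0/16", reserved_range))   # False: clear of this range
```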
PKS supports active/standby Edge Node failover and requires at least two Edge Nodes. In addition, PKS requires the Edge Node Large VM (8 vCPU, 16 GB of RAM, and 120 GB of storage). By default, the load balancer provisioned for PKS is the small size. You can customize the load balancer size after deploying PKS by using Network Profiles.
The VIB repository service provides access to native libraries for NSX Transport Nodes. For instructions on enabling VIB, see Enable VIB Repository Service on NSX Manager.
Create a Tunnel Endpoint IP Pool (TEP IP Pool) within the usable range of the VTEP CIDR created in [Plan Network CIDRs](#plan-cidrs). This IP pool is used for NSX Transport Nodes. For more information, see NSX Edge Networking Setup. For instructions, see Create TEP IP Pool.
Configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure. For instructions, see Configure Edge HA.
Note: If the T0 Router is not properly configured for HA, failover to the standby Edge Node will not occur. See [Configure Edge HA](nsxt-deploy.html#configure-edge-ha) for instructions.
An NSX Transport Node allows NSX Nodes to exchange traffic for virtual networks. ESXi hosts dedicated to the PKS Compute Cluster must be prepared as transport nodes. For instructions, see Prepare Compute Cluster ESXi Hosts.
Note: The Transport Nodes must be placed on free host NICs not already used by other vSwitches on the ESXi host. Use the VTEP IP pool that allows ESXi hosts to route and communicate with each other, as well as with other Edge Transport Nodes.
Prepare the vSphere and NSX-T infrastructure for the PKS Management Plane where the PKS, Ops Manager, BOSH Director, and Harbor Registry VMs are deployed. This includes a vSphere resource pool for PKS management components, NSX Tier-1 (T1) Logical Switch, and a Tier-1 Logical Router and Port. For instructions, see Prepare Management Plane.
As part of preparing the Management Plane, create the following NAT rules on the Tier-0 (T0) router:

| Rule Type | Traffic |
| --------- | ------- |
| DNAT | External -> Ops Manager |
| DNAT | External -> Harbor (optional) |
| SNAT | PKS Management Plane -> vCenter and NSX-T Manager |
| SNAT | PKS Management Plane -> DNS |
| SNAT | PKS Management Plane -> NTP |
| SNAT | PKS Management Plane -> LDAP/AD |
| SNAT | PKS Management Plane -> ESXi |
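As a sketch, the NAT rules above can be expressed as payloads for the NSX-T Manager API, which in NSX-T 2.x accepts rules at POST /api/v1/logical-routers/&lt;t0-router-id&gt;/nat/rules. The field names assume that API's NAT rule schema, and all IP addresses below are placeholders for your environment:

```python
# Sketch of NAT rule payloads for the NSX-T Manager API; field names assume
# the NSX-T 2.x NatRule schema, and all addresses are placeholders.

def snat_rule(source_cidr, translated_ip):
    """SNAT payload: traffic from source_cidr leaves via translated_ip."""
    return {
        "action": "SNAT",
        "match_source_network": source_cidr,
        "translated_network": translated_ip,
        "enabled": True,
    }

def dnat_rule(external_ip, internal_ip):
    """DNAT payload: traffic to external_ip is forwarded to internal_ip."""
    return {
        "action": "DNAT",
        "match_destination_network": external_ip,
        "translated_network": internal_ip,
        "enabled": True,
    }

# SNAT: PKS Management Plane subnet out to vCenter and NSX-T Manager.
print(snat_rule("10.172.1.0/28", "10.40.206.10"))
# DNAT: a routable IP in to Ops Manager on the management network.
print(dnat_rule("10.40.206.20", "10.172.1.2"))
```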
Create Resource Pools for AZ-1 and AZ-2, which map to the Availability Zones you will create when you configure BOSH Director and reference when you install the PKS tile. In addition, create SNAT rules on the T0 router:
- One for K8s Master Nodes (hosting NCP) to reach the NSX-T Manager
- One for Kubernetes Master Node Access to LDAP/AD (optional)
For instructions, see Prepare Compute Plane.
Deploy Ops Manager 2.3.2+ on the NSX-T Management Plane network. For instructions, see Deploy Ops Manager on vSphere with NSX-T.
Generate the CA Cert for the NSX Manager and import the certificate to NSX Manager. For instructions, see Generate the NSX Manager CA Cert.
Create availability zones (AZs) that map to the Management and Compute resource pools in vSphere, and the Management and Control networks in NSX-T. For instructions, see Configure BOSH Director for vSphere with NSX-T.
Generate the NSX Manager Super User Principal Identity Certificate and register it with the NSX Manager using the NSX API. For instructions, see Generate the NSX Manager PI Cert.
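As a rough sketch, the certificate-generation step might look like the following with openssl. The file names, common name, and validity period are placeholder choices, and the registration call (shown as a comment) targets the trust-management principal-identities endpoint of the NSX API with a placeholder host:

```shell
# Generate a self-signed certificate and private key for the NSX-T superuser
# principal identity. Names and validity period are placeholder choices.
openssl req -newkey rsa:2048 -nodes -x509 -days 365 \
  -subj "/CN=pks-nsx-t-superuser" \
  -keyout pks-nsx-t-superuser.key \
  -out pks-nsx-t-superuser.crt

# Registration is then done against the NSX API, for example (placeholder host):
# curl -k -u admin -X POST https://NSX-MANAGER/api/v1/trust-management/principal-identities \
#   -H 'Content-Type: application/json' -d @principal-identity.json
```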
Create IP blocks for the node networks and the pod networks. The recommended size for each IP block is /16; /24 subnets (256 addresses each) are carved from these blocks as clusters and namespaces are created. For more information, see Plan IP Blocks as well as Reserved IP Blocks.
In addition, create a Floating IP Pool from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services.
These objects are required to configure the PKS tile for NSX-T networking. For instructions, see Creating NSX-T Objects for PKS.
At this point your NSX-T environment is prepared for PKS installation using the PKS tile in Ops Manager. For instructions, see Installing PKS on vSphere with NSX-T.
The VMware Harbor Registry is recommended for PKS. Install Harbor in the NSX Management Plane with other PKS components (PKS API, Ops Manager, and BOSH). For instructions, see Installing Harbor Registry on vSphere with NSX-T in the PKS Harbor documentation.
If you are using the NAT deployment topology for PKS, create a DNAT rule that maps the private Harbor IP address to a routable IP address from the floating IP pool on the PKS management network. See Create DNAT Rule.