Planning, Preparing, and Configuring NSX-T for PKS

Before you install PKS on vSphere with NSX-T integration, you must prepare your NSX-T environment. Complete all of the steps listed in the order presented to manually create the NSX-T environment for PKS.

Step 1: Plan Network Topology, Subnets, and IP Blocks

Plan NSX-T Deployment Topology

Review vSphere with NSX-T Version Requirements and Hardware Requirements for PKS on vSphere with NSX-T.

Review the Deployment Topologies for PKS on vSphere with NSX-T, and the NSX-T Data Center documentation, to ensure that your chosen network topology enables the following communications:

  • vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
  • The BOSH Director VM must be able to communicate with vCenter and the NSX Manager.
  • The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
  • Each PKS-provisioned Kubernetes cluster deploys the NSX-T Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.

In addition, the NSX-T Container Plugin (NCP) runs as a BOSH-managed process on the Kubernetes master node. In a multi-master PKS deployment, the NCP process runs on all master nodes. However, the process is active only on one master node. If the NCP process on an active master is unresponsive, BOSH activates another NCP process. Refer to the NCP documentation for more information.

Plan Network CIDRs

Before you install PKS on vSphere with NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.

Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation.

  • VTEP CIDRs: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs.

  • PKS MANAGEMENT CIDR: This small network is used to access PKS management components such as Ops Manager, BOSH Director, the PKS Service VM, and the Harbor Registry VM (if deployed). For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the No-NAT deployment topologies, this is a routable /28 subnet on the corporate network. For the NAT deployment topology, this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the PKS management components.

  • PKS LB CIDR: This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Creating NSX-T Objects for PKS, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the PKS tile. A sizing sketch for these example CIDRs follows this list.
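
The sketch below is a quick way to sanity-check these sizes using only Python's standard ipaddress module. The CIDR values are the examples from the list above; substitute your own planned ranges.

```python
# Sanity-check planned CIDR sizes with the Python standard library.
# The example CIDRs below match the ones discussed above; replace them
# with your own planned ranges.
import ipaddress

planned = {
    "VTEP CIDR": "192.168.1.0/24",           # Host/Edge Transport Node tunnel endpoints
    "PKS MANAGEMENT CIDR": "10.172.1.0/28",  # Ops Manager, BOSH, PKS Service VM, Harbor
    "PKS LB CIDR": "10.172.2.0/24",          # Floating IP pool for load balancers/VIPs
}

for name, cidr in planned.items():
    net = ipaddress.ip_network(cidr)
    # num_addresses counts every address in the block; subtract the network
    # and broadcast addresses to get usable hosts.
    usable = net.num_addresses - 2
    print(f"{name}: {cidr} -> {usable} usable host IPs")
```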

Plan IP Blocks

When you install PKS on NSX-T, you are required to specify the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the PKS tile. These IDs map to the two IP blocks you must configure in NSX-T: the Pods IP Block for Kubernetes pods, and the Node IP Block for Kubernetes nodes (VMs). For more information, see the Networking section of Installing PKS on vSphere with NSX-T Integration.

Required IP Blocks for NSX-T

Pods IP Block

Each time a Kubernetes namespace is created, a subnet from the Pods IP Block is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace. When a Kubernetes cluster is deployed by PKS, by default 3 namespaces are created. Often additional namespaces will be created by operators to facilitate cluster use. As a result, when creating the Pods IP Block, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see Creating NSX-T Objects for PKS.
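
To see how namespace growth consumes the Pods IP Block, here is a minimal sketch that counts the /24 namespace subnets available in a block. The block CIDR, cluster count, and per-cluster namespace count are illustrative assumptions.

```python
# Rough capacity check for a Pods IP Block: NSX-T carves one /24 out of the
# block for every Kubernetes namespace. Block size and namespace counts
# below are examples only.
import ipaddress

def namespace_capacity(block_cidr: str) -> int:
    """Number of /24 namespace subnets that fit in the Pods IP Block."""
    return len(list(ipaddress.ip_network(block_cidr).subnets(new_prefix=24)))

pods_block = "172.16.0.0/16"           # example non-routable Pods IP Block
capacity = namespace_capacity(pods_block)

clusters = 20                          # assumed number of clusters
namespaces_per_cluster = 3 + 5         # 3 default namespaces + 5 user namespaces (assumed)
needed = clusters * namespaces_per_cluster

print(f"{pods_block} supports {capacity} namespaces")   # 256 for a /16
print(f"{clusters} clusters x {namespaces_per_cluster} namespaces = {needed} subnets needed")
```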

Note: By default, Pods IP Block is a block of non-routable, private IP addresses. After you deploy PKS, you can define a network profile that specifies a routable IP block for your pods. The routable IP block overrides the default non-routable Pods IP Block when a Kubernetes cluster is deployed using that network profile. For more information, see Routable Pods in Using Network Profiles (NSX-T Only).

Nodes IP Block

Each Kubernetes cluster deployed by PKS owns a /24 subnet. To deploy multiple Kubernetes clusters, the Nodes IP Block you specify in the Networking pane of the PKS tile must be larger than /24. The recommended size is /16; see the sizing sketch after the note below. For more information, see Creating NSX-T Objects for PKS.

Note: You can use a smaller nodes block size for no-NAT environments with a limited number of routable subnets. For example, /20 allows up to 16 Kubernetes clusters to be created.
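
As a rough check on Nodes IP Block sizing, the minimal sketch below computes the smallest block prefix that provides one /24 subnet per planned cluster. The cluster counts are examples; the results match the guidance above (a /20 covers 16 clusters, a /16 covers 256).

```python
# Given a target number of Kubernetes clusters, compute the smallest
# Nodes IP Block prefix that provides one /24 subnet per cluster.
import math

def nodes_block_prefix(num_clusters: int) -> int:
    # Each cluster consumes one /24; 2**(24 - prefix) /24 subnets fit in the block.
    bits_needed = math.ceil(math.log2(num_clusters)) if num_clusters > 1 else 0
    return 24 - bits_needed

for clusters in (4, 16, 100, 256):
    prefix = nodes_block_prefix(clusters)
    print(f"{clusters} clusters -> Nodes IP Block of at least /{prefix}")
# 16 clusters -> /20, 256 clusters -> /16, matching the guidance above.
```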

Reserved IP Blocks

The PKS Management Plane must not use the 172.17.0.0/16 subnet. This restriction applies to all virtual machines (VMs) deployed during the PKS installation process, including the PKS control plane, Ops Manager, BOSH Director, and Harbor Registry.

In addition, do not use any of the IP blocks listed below for Kubernetes master or worker node VMs, or for Kubernetes pods. If you create Kubernetes clusters with any of the blocks listed below, the Kubernetes worker nodes cannot reach Harbor or internal Kubernetes services.

The Docker daemon on each Kubernetes worker node uses subnets in the following CIDR ranges. Do not use IP addresses in these ranges:

  • 172.17.0.1/16
  • 172.18.0.1/16
  • 172.19.0.1/16
  • 172.20.0.1/16
  • 172.21.0.1/16
  • 172.22.0.1/16

If PKS is deployed with Harbor, Harbor uses the following CIDR ranges for its internal Docker bridges. Do not use IP addresses in these ranges:

  • 172.18.0.0/16
  • 172.19.0.0/16
  • 172.20.0.0/16
  • 172.21.0.0/16
  • 172.22.0.0/16

Each Kubernetes cluster uses the following subnet for Kubernetes services. Do not use this block for the Nodes IP Block (an overlap-check sketch for all of these reserved ranges follows below):

  • 10.100.200.0/24
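
To confirm that a planned Nodes or Pods IP Block avoids all of the reserved ranges above, here is a minimal overlap check using Python's ipaddress module. The candidate CIDRs are example values.

```python
# Check planned IP blocks against the reserved ranges listed above
# (the Docker/Harbor bridge ranges are written in network form here).
import ipaddress

RESERVED = [
    "172.17.0.0/16", "172.18.0.0/16", "172.19.0.0/16",
    "172.20.0.0/16", "172.21.0.0/16", "172.22.0.0/16",
    "10.100.200.0/24",
]

def conflicts(candidate_cidr: str):
    """Return the reserved ranges that overlap the candidate block."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return [r for r in RESERVED if candidate.overlaps(ipaddress.ip_network(r))]

# Example candidate blocks -- replace with your planned Nodes/Pods IP Blocks.
for cidr in ("172.16.0.0/16", "172.20.10.0/24", "10.100.200.0/28"):
    bad = conflicts(cidr)
    print(f"{cidr}: {'OK' if not bad else 'overlaps ' + ', '.join(bad)}")
```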

Step 2: Deploy NSX Manager

Deploy the NSX Manager Unified Appliance. For instructions, see Deploy the NSX Manager.

Step 3: Deploy NSX Controllers

Deploy one or more NSX Controllers. You must deploy at least one NSX Controller for PKS; three NSX Controllers are recommended. For instructions, see Deploy NSX Controllers.

Step 4: Create NSX Clusters

Create NSX Clusters for the Management Plane and Control Plane. For instructions, see Create NSX Clusters.

Step 5: Deploy NSX Edge Nodes

Deploy two or more NSX Edge Nodes. Edge Nodes for PKS run load balancers for PKS API traffic, load balancer services for Kubernetes pods, and ingress controllers for Kubernetes pods. For instructions, see Deploy NSX Edge Nodes.

PKS supports active/standby Edge Node failover and requires at least two Edge Nodes. In addition, PKS requires the large Edge Node VM (8 vCPU, 16 GB of RAM, and 120 GB of storage) or the bare metal Edge Node. The default size of the load balancer provisioned for PKS is small. You can customize this after deploying PKS using Network Profiles.

The table below lists the maximum number of load balancers per Edge Node form factor.

Edge Node Type  | Small LBs (max) | Medium LBs (max) | Large LBs (max) | Supported by PKS
Edge VM Small   | 0               | 0                | 0               | No
Edge VM Medium  | 1               | 0                | 0               | No
Edge VM Large   | 40              | 4                | 0               | Yes
Edge Bare Metal | 750             | 100              | 7               | Yes

Keep in mind the following requirements for NSX Edge Nodes with PKS:

  • PKS requires the NSX-T Edge Node large VM (8 vCPU and 16 GB of RAM) or the bare metal Edge Node. For more information, see Hardware requirements for PKS on vSphere with NSX-T.
  • The default load balancer deployed by NSX-T for a PKS-provisioned Kubernetes cluster is the small load balancer. The size of the load balancer can be customized using Network Profiles.
  • Edge Node VMs can only be deployed on Intel-based ESXi hosts.
  • The large load balancer requires a bare metal Edge Node.
  • For high availability, Edge Nodes are deployed as pairs within an Edge Cluster. The minimum number of Edge Nodes per Edge Cluster is 2; the maximum is 10. PKS supports active/standby mode only, and the standby LB does not serve traffic while the active LB is running. To determine the maximum number of load balancers per Edge Cluster, multiply the maximum number of LBs for the Edge Node type by the number of Edge Nodes and divide by 2. For example, with 10 Edge VM Large nodes in an Edge Cluster, you can have up to 200 small LB instances (40 x 10 / 2) or up to 20 medium LB instances (4 x 10 / 2), as shown in the sketch after this list.
  • PKS deploys a virtual server for each load balancer instance: one virtual server per service of type LoadBalancer, two global virtual servers for ingress resources (HTTP and HTTPS), and one global virtual server for the PKS API. For more information, see Defining Network Profiles.
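
The per-cluster capacity arithmetic from the list above, as a small sketch. The per-node maximums come from the table earlier in this step, and the Edge Node count is an example.

```python
# Maximum load balancer instances per Edge Cluster in active/standby mode:
# (max LBs per Edge Node) x (number of Edge Nodes) / 2, because each standby
# consumes capacity without serving traffic.
MAX_LB_PER_NODE = {
    # Edge Node form factor: {LB size: max per node}, from the table above.
    "edge-vm-large":   {"small": 40,  "medium": 4,   "large": 0},
    "edge-bare-metal": {"small": 750, "medium": 100, "large": 7},
}

def max_lbs_per_cluster(form_factor: str, lb_size: str, edge_nodes: int) -> int:
    return MAX_LB_PER_NODE[form_factor][lb_size] * edge_nodes // 2

# Example: 10 large Edge Node VMs in one Edge Cluster.
print(max_lbs_per_cluster("edge-vm-large", "small", 10))   # 200
print(max_lbs_per_cluster("edge-vm-large", "medium", 10))  # 20
```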

Step 6: Register NSX Edge Nodes

Register NSX Edge Nodes with the NSX Manager. For instructions, see Register NSX Edge Nodes.

Step 7: Enable VIB Repository Service

The VIB repository service provides access to native libraries for NSX Transport Nodes. This service must be enabled before you proceed with deploying NSX-T. For instructions, see Enable VIB Repository Service on NSX Manager.

Step 8: Create TEP IP Pool

Create a Tunnel Endpoint IP Pool (TEP IP Pool) within the usable range of the VTEP CIDR defined in Plan Network CIDRs. The TEP IP Pool is used for NSX Transport Nodes. For instructions, see Create TEP IP Pool.
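
If you prefer to script this step, the pool can also be created through the NSX-T Manager REST API. The sketch below uses Python's requests library against the NSX-T 2.x endpoint /api/v1/pools/ip-pools; the manager address, credentials, pool name, and IP range are placeholder assumptions, so verify the request against the API guide for your NSX-T version.

```python
# Minimal sketch: create a TEP IP pool through the NSX-T Manager API
# (NSX-T 2.x /api/v1/pools/ip-pools endpoint). All names, addresses, and
# credentials below are placeholders -- adjust for your environment.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "NSX_ADMIN_PASSWORD")        # placeholder credentials

tep_pool = {
    "display_name": "TEP-IP-POOL",
    "subnets": [{
        "cidr": "192.168.1.0/24",             # VTEP CIDR planned earlier (example)
        "allocation_ranges": [{"start": "192.168.1.10", "end": "192.168.1.200"}],
        "gateway_ip": "192.168.1.1",
    }],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/pools/ip-pools",
    json=tep_pool,
    auth=AUTH,
    verify=False,   # only if NSX Manager still uses a self-signed certificate
)
resp.raise_for_status()
print("Created IP pool with ID:", resp.json()["id"])
```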

Step 9: Create Overlay Transport Zone

Create an NSX Overlay Transport Zone (TZ-Overlay) for PKS Control Plane services and Kubernetes Cluster deployment overlay networks. For instructions, see Create Overlay TZ.

Step 10: Create VLAN Transport Zone

Create an NSX VLAN Transport Zone (TZ-VLAN) for NSX Edge uplinks (ingress/egress) for PKS-managed Kubernetes clusters. For instructions, see Create VLAN TZ.

Step 11: Create Uplink Profile for Edge Nodes

Create an NSX Uplink Profile for the NSX Edge Nodes to be used with PKS. For instructions, see Create Uplink Profile for Edge Nodes.

Step 12: Create Transport Edge Nodes

Create NSX Edge Transport Nodes, which allow the Edge Nodes to exchange virtual network traffic with other NSX nodes. For instructions, see Create Transport Edge Nodes.

Step 13: Create Edge Cluster

Create an NSX Edge Cluster and add each NSX Edge Transport Node to the Edge Cluster. For instructions, see Create Edge Cluster.

Step 14: Create T0 Logical Router for PKS

NSX Tier-0 Logical Routers are used to route data between the NSX-T virtual network and the physical network. For instructions, see Create T0 Router.

Step 15: Configure NSX Edge for High Availability (HA)

Configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure. For instructions, see Configure Edge HA.

Note: If the T0 Router is not configured for HA as described in Configure Edge Nodes for HA, failover to the standby Edge Node will not occur.

[Figure: NSX Edge High Availability]

Step 16: Prepare ESXi Hosts for PKS Compute Plane

An NSX Transport Node allows NSX Nodes to exchange traffic for virtual networks. ESXi hosts dedicated to the PKS Compute Cluster must be prepared as transport nodes. For instructions, see Prepare Compute Cluster ESXi Hosts.

Note: The Transport Nodes must be placed on free host NICs that are not already used by other vSwitches on the ESXi host. Use the TEP IP pool so that the ESXi hosts can route to and communicate with each other and with the Edge Transport Nodes.

Step 17: Create NSX-T Objects for PKS Management Plane

Prepare the vSphere and NSX-T infrastructure for the PKS Management Plane, where the PKS API, Ops Manager, BOSH Director, and Harbor Registry VMs are deployed. This includes a vSphere resource pool for PKS management components, an NSX Logical Switch, and an NSX Tier-1 (T1) Logical Router and Port. For instructions, see Prepare PKS Management Plane.

If you are using the NAT deployment topology, create the following NAT rules on the T0 Router. For instructions, see Prepare PKS Management Plane. A sketch of creating one of these rules through the NSX-T API follows the table.

Type | For
DNAT | External > Ops Manager
DNAT | External > Harbor (optional)
SNAT | PKS Management Plane > vCenter and NSX-T Manager
SNAT | PKS Management Plane > DNS
SNAT | PKS Management Plane > NTP
SNAT | PKS Management Plane > LDAP/AD (optional)
SNAT | PKS Management Plane > ESXi
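
Each rule can be created in the NSX Manager UI or through the NSX-T Manager REST API. Below is a minimal sketch of one SNAT rule (PKS Management Plane egress) using Python's requests library against the NSX-T 2.x endpoint POST /api/v1/logical-routers/<router-id>/nat/rules. The router ID, networks, translated IP, and credentials are placeholder assumptions; you can scope the rule further with a destination match if needed, and should verify the fields against your version's API guide.

```python
# Minimal sketch: add one SNAT rule to the T0 router through the NSX-T 2.x
# Manager API. Router ID, networks, translated IP, and credentials are placeholders.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"            # placeholder
AUTH = ("admin", "NSX_ADMIN_PASSWORD")                 # placeholder credentials
T0_ROUTER_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # T0 router UUID from NSX Manager

snat_rule = {
    "action": "SNAT",
    "match_source_network": "10.172.1.0/28",   # PKS Management Plane CIDR (example)
    "translated_network": "10.40.14.1",        # routable egress IP (example)
    "enabled": True,
    "display_name": "pks-mgmt-snat",
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-routers/{T0_ROUTER_ID}/nat/rules",
    json=snat_rule,
    auth=AUTH,
    verify=False,   # only if NSX Manager uses a self-signed certificate
)
resp.raise_for_status()
print("Created NAT rule:", resp.json()["id"])
```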

Step 18: Create NSX-T Objects for PKS Compute Plane

Create Resource Pools for AZ-1 and AZ-2, which map to the Availability Zones you will create when you configure BOSH Director and reference when you install the PKS tile. In addition, create SNAT rules on the T0 router:

  • One for the Kubernetes master nodes (which host NCP) to reach the NSX-T Manager
  • One for the Kubernetes master nodes to access LDAP/AD (optional)

For instructions, see Prepare Compute Plane.

Step 19: Deploy Ops Manager in the NSX-T Environment

Deploy Ops Manager 2.3.2+ on the NSX-T Management Plane network. For instructions, see Deploy Ops Manager on vSphere with NSX-T.

Step 20: Generate NSX Manager Certificate

Generate the CA Cert for the NSX Manager and import the certificate to NSX Manager. For instructions, see Generate the NSX Manager CA Cert.

Step 21: Configure BOSH Director for vSphere with NSX-T

Create BOSH availability zones (AZs) that map to the Management and Compute resource pools in vSphere, and the Management and Control plane networks in NSX-T. For instructions, see Configure BOSH Director for vSphere with NSX-T.

Step 22: Generate NSX Manager Principal Identity Certificate

Generate the NSX Manager Super User Principal Identity Certificate and register it with the NSX Manager using the NSX API. For instructions, see Generate the NSX Manager PI Cert.

Step 23: Create NSX-T Objects for PKS

Create IP blocks for the node networks and the pod networks. Both blocks should be sized /16, which provides 256 /24 subnets each. See Plan IP Blocks and Reserved IP Blocks for details.

In addition, create a Floating IP Pool from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services.

These network objects are required to configure the PKS tile for NSX-T networking. For instructions, see Create NSX-T Objects for PKS.
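
If you script this step, the NSX-T 2.x Manager API exposes IP blocks at /api/v1/pools/ip-blocks (the Floating IP Pool uses the /api/v1/pools/ip-pools endpoint sketched in Step 8). The sketch below creates the Nodes and Pods IP Blocks with Python's requests library; the names, CIDRs, and credentials are placeholder assumptions, and the returned IDs are the values you later enter in the PKS tile's Networking pane.

```python
# Minimal sketch: create the Nodes and Pods IP Blocks through the NSX-T 2.x
# Manager API (POST /api/v1/pools/ip-blocks). Names, CIDRs, and credentials
# are placeholders; record the returned IDs for the PKS tile.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "NSX_ADMIN_PASSWORD")        # placeholder credentials

ip_blocks = [
    {"display_name": "PKS-NODES-IP-BLOCK", "cidr": "172.23.0.0/16"},  # example
    {"display_name": "PKS-PODS-IP-BLOCK",  "cidr": "172.16.0.0/16"},  # example
]

for block in ip_blocks:
    resp = requests.post(
        f"{NSX_MANAGER}/api/v1/pools/ip-blocks",
        json=block,
        auth=AUTH,
        verify=False,   # only if NSX Manager uses a self-signed certificate
    )
    resp.raise_for_status()
    print(f"{block['display_name']} ID: {resp.json()['id']}")
```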

Step 24: Install PKS on vSphere with NSX-T

At this point your NSX-T environment is prepared for PKS installation using the PKS tile in Ops Manager. For instructions, see Installing PKS on vSphere with NSX-T.

Step 25: Install Harbor Registry for PKS

The VMware Harbor Registry is recommended for PKS. Install Harbor in the NSX Management Plane with other PKS components (PKS API, Ops Manager, and BOSH). For instructions, see Installing Harbor Registry on vSphere with NSX-T in the PKS Harbor documentation.

If you are using the NAT deployment topology for PKS, create a DNAT rule that maps the private Harbor IP address to a routable IP address from the floating IP pool on the PKS management network. See Create DNAT Rule.

Step 26: Perform Post-Installation NSX-T Configurations as Necessary

Once PKS is installed, you may want to perform additional NSX-T configurations to support customization of Kubernetes clusters at deployment time, such as:

  • Configuring an HTTP proxy for outgoing HTTP/S traffic from NCP, PKS, BOSH, and Ops Manager to vSphere infrastructure components (vCenter and NSX Manager)
  • Defining Network Profiles to customize NSX-T networking objects, such as load balancer size, custom Pods IP Block, routable Pods IP Block, configurable CIDR range for the Pods IP Block, custom Floating IP block, and more.
  • Configuring Multiple Tier-0 Routers to support customer/tenant isolation
