Preparing NSX-T Before Deploying PKS

Before you install Pivotal Container Service (PKS) on vSphere with NSX-T integration, you must prepare your NSX-T environment.

In addition to fulfilling the prerequisites specified in vSphere with NSX-T Prerequisites and Resource Requirements, follow the steps below.

Step 1: Plan for Network Subnets and IP Blocks

Before you install PKS on vSphere with NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.

Plan Network CIDRs

Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation.

  • VTEP CIDRs: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs. This is used when creating the ip-pool-vteps in Step 3.
  • PKS MANAGEMENT CIDR: This small network is used to access PKS management components such as Ops Manager and the PKS Service VM. For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the No-NAT deployment topologies, this is a routable /28 subnet on your corporate network. For the NAT deployment topology, this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the PKS management components.
  • PKS LB CIDR: This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Create NSX Network Objects, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the PKS tile.
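
The usable-host arithmetic above is easy to double-check. The following minimal sketch uses only Python's standard ipaddress module to print the usable host count for the example CIDRs in this list; the CIDR values are the examples above, not required values.

import ipaddress

# Example CIDRs from this section; substitute your own planned ranges.
example_cidrs = {
    "VTEP CIDR":           "192.168.1.0/24",
    "PKS MANAGEMENT CIDR": "10.172.1.0/28",
    "PKS LB CIDR":         "10.172.2.0/24",
}

for name, cidr in example_cidrs.items():
    net = ipaddress.ip_network(cidr)
    # num_addresses includes the network and broadcast addresses,
    # so subtract 2 to get the usable host count.
    print(f"{name}: {cidr} -> {net.num_addresses - 2} usable IPs")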

Refer to the instructions in the VMware NSX-T documentation to ensure that your network topology enables the following communications:

  • vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
  • The Ops Manager Director VM must be able to communicate with vCenter and the NSX Manager.
  • The Ops Manager Director VM must be able to communicate with all nodes in all Kubernetes clusters.
  • Each PKS-deployed Kubernetes cluster deploys an NCP pod that must be able to communicate with the NSX Manager.

    Note: Starting with PKS v1.1.5, NCP runs as a BOSH-managed process. See NSX-T Architectural Changes in the PKS v1.1.5 release notes for details.

Plan IP Blocks

You must plan IP blocks for the pods and nodes that are created when PKS creates a Kubernetes cluster. IP block sizes must be a multiple of 256 (for example, /24). You must allocate subnets for the IP blocks before configuring the PKS tile. For more information, see Step 3.1: Create NSX Network Objects below.

Each Kubernetes cluster owns a /24 subnet. To deploy multiple Kubernetes clusters, set the Nodes IP Block ID in the Networking pane of the PKS tile to a block larger than /24. The recommended size is /16.

Note: You can use a smaller nodes block size for no-NAT environments with a limited number of routable subnets. For example, /20 allows up to 16 Kubernetes clusters to be created.

You configure the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the PKS tile. For more information, see Networking in Installing PKS on vSphere with NSX-T.
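
The cluster capacity of a Nodes IP Block follows directly from how many /24 subnets it contains. The following sketch uses the standard ipaddress module to illustrate; the block CIDRs are hypothetical examples, not recommended values.

import ipaddress

# Hypothetical Nodes IP Blocks; each Kubernetes cluster consumes one /24.
for block in ("10.1.0.0/16", "10.2.0.0/20", "10.3.0.0/24"):
    count = len(list(ipaddress.ip_network(block).subnets(new_prefix=24)))
    print(f"{block}: holds {count} /24 node subnets (clusters)")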

Reserved IP Blocks

Do not use any of the IP blocks listed in this section for pods or nodes. If you create Kubernetes clusters with any of the blocks listed below, the Kubernetes worker nodes cannot reach Harbor or internal Kubernetes services.

The Docker daemon on the Kubernetes worker node uses the following CIDR range. Do not use IP addresses in this range:

  • 172.17.0.0/16

If PKS is deployed with Harbor, Harbor uses the following CIDR ranges for its internal Docker bridges. Do not use IP addresses in these ranges:

  • 172.18.0.0/16
  • 172.19.0.0/16
  • 172.20.0.0/16
  • 172.21.0.0/16
  • 172.22.0.0/16

Each Kubernetes cluster uses the following subnet for Kubernetes services. Do not use this IP block for the Nodes IP Block:

  • 10.100.200.0/24
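
Before you commit to a Pods or Nodes IP Block, it can help to confirm that it does not overlap any of the reserved ranges above. This is a minimal sketch using the standard ipaddress module; the candidate blocks are hypothetical placeholders.

import ipaddress

# Reserved ranges listed in this section.
RESERVED = [
    "172.17.0.0/16",    # Docker daemon bridge on worker nodes
    "172.18.0.0/16",    # Harbor internal Docker bridges
    "172.19.0.0/16",
    "172.20.0.0/16",
    "172.21.0.0/16",
    "172.22.0.0/16",
    "10.100.200.0/24",  # Kubernetes services subnet
]

def overlaps_reserved(candidate):
    net = ipaddress.ip_network(candidate)
    return [r for r in RESERVED if net.overlaps(ipaddress.ip_network(r))]

print(overlaps_reserved("172.20.0.0/14"))   # hypothetical block -> conflicts
print(overlaps_reserved("172.31.0.0/16"))   # hypothetical block -> no conflicts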

Step 2: Deploy NSX-T

Deploy NSX-T according to the instructions in the VMware NSX-T documentation.

Note: In general, accept default settings unless instructed otherwise.

  1. Deploy the NSX Manager. For more information, see NSX Manager Installation in the VMware NSX-T documentation.
  2. Deploy NSX Controllers. For more information, see NSX Controller Installation and Clustering in the VMware NSX-T documentation.
  3. Join the NSX Controllers to the NSX Manager. For more information, see Join NSX Controllers with the NSX Manager in the VMware NSX-T documentation.
  4. Initialize the Control Cluster. For more information, see Initialize the Control Cluster to Create a Control Cluster Master in the VMware NSX-T documentation.
  5. Add your ESXi hosts to the NSX-T Fabric. For more information, see Add a Hypervisor Host to the NSX-T Fabric in the VMware NSX-T documentation. Each host must have at least one free nic/vmnic not already used by other vSwitches on the ESXi host for use with NSX Host Transport Nodes.
  6. Deploy NSX Edge VMs. We recommend at least two VMs. For more information, see NSX Edge Installation in the VMware NSX-T documentation. Each deployed NSX Edge VM requires free resources in your vSphere environment to provide 8 vCPU, 16 GB of RAM, and 120 GB of storage. When deploying, you must connect the vNICs of the NSX Edge VMs to an appropriate PortGroup for your environment by completing the following steps:
    1. Connect the first Edge interface to your environment’s PortGroup/VLAN where your Edge Management IP can route and communicate with the NSX Manager.
    2. Connect the second Edge interface to your environment’s PortGroup/VLAN where your GENEVE VTEPs can route and communicate with each other. Your VTEP CIDR should be routable to this PortGroup.
    3. Connect the third Edge interface to your environment’s PortGroup/VLAN where your T0 uplink interface is located.
    4. Join the NSX Edge VMs to the NSX-T Fabric. For more information, see Join NSX Edge with the Management Plane in the VMware NSX-T documentation.
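
After completing these steps, you can confirm that all ESXi hosts and Edge VMs are registered with the NSX-T management plane. The following is a minimal sketch against the NSX-T 2.x REST API; the Manager address and credentials are placeholders for your environment.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# List every node in the NSX-T Fabric (ESXi hosts and Edge VMs).
resp = session.get(f"{NSX}/api/v1/fabric/nodes")
resp.raise_for_status()
for node in resp.json()["results"]:
    print(node["resource_type"], node["display_name"])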

Step 3: Create the NSX-T Objects Required for PKS

Create the NSX-T objects (network objects, logical switches, NSX Edge, and logical routers) needed for PKS deployment according to the instructions in the VMware NSX-T documentation.

3.1: Create NSX Network Objects

  1. Create two NSX IP pools. For more information, see Create an IP Pool for Tunnel Endpoint IP Addresses in the VMware NSX-T documentation. Configuration details for the NSX IP pools:
    • One NSX IP pool for GENEVE Tunnel Endpoints ip-pool-vteps, within the usable range of the VTEP CIDR created in Step 1, to be used with NSX Transport Nodes that you create later in this section
    • One NSX IP pool for NSX Load Balancing VIPs ip-pool-vips, within the usable range of the PKS LB CIDR created in Step 1, to be used with the T0 Logical Router that you create later in this section
  2. Create two NSX Transport Zones (TZs). For more information, see Create Transport Zones in the VMware NSX-T documentation. Configuration details for the NSX TZs:
    • One NSX TZ for PKS control plane Services and Kubernetes Cluster deployment overlay networks named tz-overlay and the associated N-VDS hs-overlay. Select Standard.
    • One NSX TZ for NSX Edge uplinks (ingress/egress) for PKS Kubernetes clusters named tz-vlan and the associated N-VDS hs-vlan. Select Standard.
  3. If the default uplink profile is not applicable in your deployment, create your own NSX uplink host profile. For more information, see Create an Uplink Profile in the VMware NSX-T documentation.
  4. Create NSX Host Transport Nodes. For more information, see Create a Host Transport Node in the VMware NSX-T documentation. Configuration details:
    • For each host in the NSX-T Fabric, create a node named tnode-host-NUMBER. For example, if you have three hosts in the NSX-T Fabric, create three nodes named tnode-host-1, tnode-host-2, and tnode-host-3.
    • Add the tz-overlay NSX Transport Zone to each NSX Host Transport Node.

      Note: The Transport Nodes must be placed on free host NICs not already used by other vSwitches on the ESXi host. Use the ip-pool-vteps IP pool so that they can route and communicate with each other, and with the Edge Transport Nodes, to build GENEVE tunnels.

  5. Create NSX IP blocks. We recommend that you use separate NSX IP blocks for the node networks and the pod networks. The subnets for both nodes and pods should have a size of 256 (/24). However, if you plan to deploy multiple Kubernetes clusters, the Nodes IP Block should be /16. For more information about planning IP blocks, see the Plan IP Blocks section above. For more information about creating NSX IP blocks in NSX Manager, see Manage IP Blocks in the VMware NSX-T documentation. Configuration details:
    • One NSX IP Block named node-network-ip-block. PKS uses this block to assign address space to Kubernetes master and worker nodes when new clusters are deployed or a cluster increases its scale.
    • One NSX IP Block named pod-network-ip-block. The NSX-T Container Plug-in (NCP) uses this block to assign address space to Kubernetes pods through the Container Networking Interface (CNI).
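
If you prefer to script this step instead of using the NSX Manager UI, the objects in 3.1 can also be created through the NSX-T 2.x REST API. The following sketch assumes that API; the Manager address, credentials, allocation ranges, and IP block CIDRs are placeholders based on the examples in Step 1 and should be replaced with your own values.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# IP pool for GENEVE tunnel endpoints, carved from the VTEP CIDR in Step 1.
session.post(f"{NSX}/api/v1/pools/ip-pools", json={
    "display_name": "ip-pool-vteps",
    "subnets": [{"cidr": "192.168.1.0/24",
                 "allocation_ranges": [{"start": "192.168.1.10",
                                        "end": "192.168.1.200"}]}]})

# IP pool for load balancer VIPs, carved from the PKS LB CIDR in Step 1.
session.post(f"{NSX}/api/v1/pools/ip-pools", json={
    "display_name": "ip-pool-vips",
    "subnets": [{"cidr": "10.172.2.0/24",
                 "allocation_ranges": [{"start": "10.172.2.10",
                                        "end": "10.172.2.200"}]}]})

# Overlay and VLAN transport zones with their associated N-VDS names.
for name, nvds, ttype in (("tz-overlay", "hs-overlay", "OVERLAY"),
                          ("tz-vlan", "hs-vlan", "VLAN")):
    session.post(f"{NSX}/api/v1/transport-zones", json={
        "display_name": name,
        "host_switch_name": nvds,
        "transport_type": ttype})

# IP blocks for the node and pod networks (CIDRs are placeholders).
for name, cidr in (("node-network-ip-block", "10.1.0.0/16"),
                   ("pod-network-ip-block", "10.2.0.0/16")):
    session.post(f"{NSX}/api/v1/pools/ip-blocks", json={
        "display_name": name, "cidr": cidr})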

3.2: Create Logical Switches

  1. Create the NSX Logical Switches required for PKS. For more information, see Create a Logical Switch in the VMware NSX-T documentation.
  2. Attach your first NSX Logical Switch to the tz-vlan NSX Transport Zone.
  3. Attach your second and third NSX Logical Switches to the tz-overlay NSX Transport Zone.

    Note: PKS v1.0 required you to manually create the ls-pks-service logical switch for the PKS service network. With PKS v1.1, the service network and switch are created for you by NSX-T. When you install PKS for the first time, you are prompted to specify the service network. Specify the management network in this field. For more information, see the Assign AZs and Networks section of the NSX-T installation documentation.
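
The logical switches can likewise be created through the NSX-T 2.x REST API. The sketch below shows only the two switches referenced elsewhere in this guide (ls-pks-uplink and ls-pks-mgmt); the Manager address, credentials, and uplink VLAN ID are placeholders.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

def transport_zone_id(name):
    # Look up a transport zone ID by its display name.
    zones = session.get(f"{NSX}/api/v1/transport-zones").json()["results"]
    return next(z["id"] for z in zones if z["display_name"] == name)

# Uplink switch on the VLAN transport zone (VLAN ID is environment-specific).
session.post(f"{NSX}/api/v1/logical-switches", json={
    "display_name": "ls-pks-uplink",
    "transport_zone_id": transport_zone_id("tz-vlan"),
    "vlan": 100,
    "admin_state": "UP"})

# Management switch on the overlay transport zone.
session.post(f"{NSX}/api/v1/logical-switches", json={
    "display_name": "ls-pks-mgmt",
    "transport_zone_id": transport_zone_id("tz-overlay"),
    "replication_mode": "MTEP",
    "admin_state": "UP"})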

3.3: Create NSX Edge Objects

  1. Create NSX Edge Transport Nodes. For more information, see Create an NSX Edge Transport Node in the VMware NSX-T documentation.
  2. Add both tz-vlan and tz-overlay NSX Transport Zones to the NSX Edge Transport Nodes. Controller Connectivity and Manager Connectivity should be UP.
  3. Refer to the MAC addresses of the Edge VM interfaces you deployed when connecting the N-VDS switches on your virtual NSX Edges:
    1. Connect the hs-overlay N-VDS to the vNIC (fp-eth#) that matches the MAC address of the second NIC from your deployed Edge VM.
    2. Connect the hs-vlan N-VDS to the vNIC (fp-eth#) that matches the MAC address of the third NIC from your deployed Edge VM.
  4. Create an NSX Edge cluster named edge-cluster-pks. For more information, see Create an NSX Edge Cluster in the VMware NSX-T documentation.
  5. Add the NSX Edge Transport Nodes to the cluster.
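
Once the Edge Transport Nodes exist, the edge-cluster-pks cluster can also be created via the NSX-T 2.x REST API, as in the following sketch. The Edge node display names (edge-TN1 and edge-TN2) match the HA example later in this topic and are placeholders for your own node names.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# Resolve the transport-node IDs of the Edge nodes by display name.
nodes = session.get(f"{NSX}/api/v1/transport-nodes").json()["results"]
edge_ids = [n["id"] for n in nodes
            if n["display_name"] in ("edge-TN1", "edge-TN2")]

# Create the Edge cluster with both Edge Transport Nodes as members.
session.post(f"{NSX}/api/v1/edge-clusters", json={
    "display_name": "edge-cluster-pks",
    "members": [{"transport_node_id": node_id} for node_id in edge_ids]})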

3.4: Create Logical Routers

Create T0 Logical Router for PKS

T0 routers are edge routers that route traffic between non-NSX-T networks (such as your physical network) and the NSX-T network. PKS currently supports only a single T0 router per instance.

  1. Create a Tier-0 (T0) logical router named t0-pks. For more information, see Create a Tier-0 Logical Router in the VMware NSX-T documentation. Configuration details:

    • Select edge-cluster-pks for the cluster.
    • Set High Availability Mode to Active-Standby. NAT rules are applied on the T0 by NCP. If the router is not set to Active-Standby, it does not support NAT rule configuration.
  2. Attach the T0 logical router to the ls-pks-uplink logical switch you created previously. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch in the VMware NSX-T documentation. Create a logical router port for ls-pks-uplink and assign an IP address and CIDR that your environment uses to route to all PKS assigned IP pools and IP blocks.

  3. Configure T0 routing to the rest of your environment using the appropriate routing protocol for your environment or by using static routes. For more information, see Tier-0 Logical Router in the VMware NSX-T documentation. The CIDR used in ip-pool-vips must route to the IP you just assigned to your t0 uplink interface.
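
The T0 creation and a default static route can be scripted as well. The following is a sketch against the NSX-T 2.x REST API; the next-hop address is a placeholder for your physical router, and attaching the uplink router port to ls-pks-uplink is still done as described in step 2 above.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# Find the edge cluster created in 3.3.
clusters = session.get(f"{NSX}/api/v1/edge-clusters").json()["results"]
edge_cluster_id = next(c["id"] for c in clusters
                       if c["display_name"] == "edge-cluster-pks")

# Active-Standby is required so that NCP can apply NAT rules on this T0.
t0 = session.post(f"{NSX}/api/v1/logical-routers", json={
    "display_name": "t0-pks",
    "router_type": "TIER0",
    "edge_cluster_id": edge_cluster_id,
    "high_availability_mode": "ACTIVE_STANDBY"}).json()

# Default static route toward the physical next hop (placeholder address).
session.post(f"{NSX}/api/v1/logical-routers/{t0['id']}/routing/static-routes",
             json={"network": "0.0.0.0/0",
                   "next_hops": [{"ip_address": "10.0.0.1"}]})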

(Optional) Configure NSX Edge for High Availability (HA)

You can configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure.

NSX Edge High Availability

To configure NSX Edge for HA, complete the following steps:

Note: All IP addresses must belong to the same subnet.

Step 1: On the T0 router, create a second uplink attached to the second Edge transport node:

Setting          First Uplink      Second Uplink
IP Address/Mask  uplink_1_ip       uplink_2_ip
URPF Mode        None (optional)   None (optional)
Transport Node   edge-TN1          edge-TN2
LS               uplink-LS1        uplink-LS1

Step 2: On the T0 router, create the HA VIP:

Setting        HA VIP
VIP address    [ha_vip_ip]
Uplink ports   uplink-1 and uplink-2

The HA VIP becomes the official IP for the T0 router uplink. External router devices peering with the T0 router must use this IP address.

Step 3: On the physical router, configure the next hop to point to the HA VIP address.

Step 4: You can verify your setup by running the following commands:

nsx-edge-n> get high-availability channels
nsx-edge-n> get high-availability channels stats
nsx-edge-n> get logical-router
nsx-edge-n> get logical-router ROUTER-UUID high-availability status

Create T1 Logical Router for PKS Management VMs

  1. Create a Tier-1 (T1) logical router for PKS management VMs named t1-pks-mgmt. For more information, see Create a Tier-1 Logical Router in the VMware NSX-T documentation. Configuration details:
    • Link to the t0-pks logical router you created in a previous step.
    • Select edge-cluster-pks for the cluster.

      Note: Skip this step if you are deploying the No-NAT with Virtual Switch topology. This Logical Router is required for the NAT deployment topology and No-NAT with Logical Switch deployment topology.

  2. Create a logical router port for ls-pks-mgmt and assign the following CIDR block: 10.172.1.0/28. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch in the VMware NSX-T documentation.
  3. Configure route advertisement on the T1 as follows. For more information, see Configure Route Advertisement on a Tier-1 Logical Router in the VMware NSX-T documentation. Configuration details:
    • Enable Status.
    • Enable Advertise All NSX Connected Routes.
    • Enable Advertise All NAT Routes.
    • Enable Advertise All LB VIP Routes.
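
Creating t1-pks-mgmt and enabling route advertisement can also be scripted. The sketch below assumes the NSX-T 2.x REST API and its AdvertisementConfig schema; linking the T1 to t0-pks and creating the ls-pks-mgmt router port are still performed as described in steps 1 and 2 above.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# Find the edge cluster created in 3.3.
clusters = session.get(f"{NSX}/api/v1/edge-clusters").json()["results"]
edge_cluster_id = next(c["id"] for c in clusters
                       if c["display_name"] == "edge-cluster-pks")

# Create the Tier-1 router for the PKS management VMs.
t1 = session.post(f"{NSX}/api/v1/logical-routers", json={
    "display_name": "t1-pks-mgmt",
    "router_type": "TIER1",
    "edge_cluster_id": edge_cluster_id}).json()

# Route advertisement is a singleton per router: read it, enable the flags
# listed in step 3, and PUT it back (the payload carries its own _revision).
adv_url = f"{NSX}/api/v1/logical-routers/{t1['id']}/routing/advertisement"
adv = session.get(adv_url).json()
adv.update({"enabled": True,
            "advertise_nsx_connected_routes": True,
            "advertise_nat_routes": True,
            "advertise_lb_vip": True})
session.put(adv_url, json=adv)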

Configure NAT Rules for PKS Management VMs

Note: This step applies to the NAT Topology only. Skip this step for No-NAT with Virtual Switch (VSS/VDS) Topology and No-NAT with Logical Switch (NSX-T) Topology.

Create the following NAT rules for the Mgmt T0. For more information, see Tier-0 NAT in the VMware NSX-T documentation. Configuration details:

Type   For
DNAT   External -> Ops Manager
DNAT   External -> Pivotal Container Service
SNAT   Ops Manager & BOSH Director -> DNS
SNAT   Ops Manager & BOSH Director -> NTP
SNAT   Ops Manager & BOSH Director -> vCenter
SNAT   Ops Manager & BOSH Director -> ESXi
SNAT   Ops Manager & BOSH Director -> NSX-T Manager

The Destination NAT (DNAT) rule on the T0 maps an external IP address from the PKS MANAGEMENT CIDR to the IP where you deploy Ops Manager on the ls-pks-mgmt logical switch. For example, create a DNAT rule that maps 10.172.1.2 to 172.31.0.2, where 172.31.0.2 is the IP address you assign to Ops Manager when it is connected to ls-pks-mgmt. Later, you create another DNAT rule to map an external IP address from the PKS MANAGEMENT CIDR to the PKS endpoint.

The Source NAT (SNAT) rule on the T0 allows the PKS Management VMs to communicate with your vCenter and NSX Manager environments. For example, create an SNAT rule that maps 172.31.0.0/24 to 10.172.1.1, where 10.172.1.1 is a routable IP address from your PKS MANAGEMENT CIDR. For more information, see Configure Source NAT on a Tier-1 Router in the VMware NSX-T documentation.
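
The example DNAT and SNAT rules above can be expressed against the NSX-T 2.x REST API as in the following sketch. The Manager address and credentials are placeholders, and the IP addresses are the examples from this section, not fixed values.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
session = requests.Session()
session.auth = ("admin", "PASSWORD")      # placeholder credentials
session.verify = False                    # use a CA bundle in production

# Find the t0-pks router and its NAT rules collection.
routers = session.get(f"{NSX}/api/v1/logical-routers").json()["results"]
t0_id = next(r["id"] for r in routers if r["display_name"] == "t0-pks")
nat_url = f"{NSX}/api/v1/logical-routers/{t0_id}/nat/rules"

# DNAT: external 10.172.1.2 (PKS MANAGEMENT CIDR) -> Ops Manager at 172.31.0.2.
session.post(nat_url, json={"action": "DNAT",
                            "match_destination_network": "10.172.1.2",
                            "translated_network": "172.31.0.2"})

# SNAT: management subnet 172.31.0.0/24 -> routable 10.172.1.1. Limit the
# destination to your vCenter/NSX Manager subnets, per the note below.
session.post(nat_url, json={"action": "SNAT",
                            "match_source_network": "172.31.0.0/24",
                            "translated_network": "10.172.1.1"})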

Note: Ops Manager and BOSH must use the NFCP protocol to reach the ESXi hosts to which they upload stemcells. Specifically, Ops Manager & BOSH Director -> ESXi.

Note: Limit the Destination CIDR for the SNAT rules to the subnets that contain your vCenter and NSX Manager IP addresses.

