Network Planning for Installing Enterprise PKS with NSX-T


Warning: VMware Enterprise PKS v1.6 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.

Before you install VMware Enterprise PKS on vSphere with NSX-T integration, you must plan the environment as described in this topic.

Prerequisites

Familiarize yourself with the following VMware documentation:

Familiarize yourself with the following related documentation:

Review the following Enterprise PKS documentation:

Understand Component Interactions

Enterprise PKS on vSphere with NSX-T requires the following component interactions; a reachability-check sketch follows the list:

  • vCenter, NSX-T Manager Nodes, NSX-T Edge Nodes, and ESXi hosts must be able to communicate with each other.
  • The BOSH Director VM must be able to communicate with vCenter and the NSX-T Management Cluster.
  • The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
  • Each Enterprise PKS-provisioned Kubernetes cluster deploys the NSX-T Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
  • NCP runs as a BOSH-managed process on the Kubernetes master node. In a multi-master deployment, the NCP process runs on all master nodes, but is active only on one master node. If the NCP process on an active master is unresponsive, BOSH activates another NCP process.
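As a quick sanity check on these interactions, the minimal sketch below probes TCP reachability from a management host such as the BOSH Director VM. All hostnames, addresses, and ports are illustrative placeholders, not values from this documentation; substitute the endpoints from your own environment (vCenter and NSX-T Manager typically listen on 443).

```python
# Sketch: probe TCP reachability of required components from a management host.
import socket

ENDPOINTS = [
    ("vcenter.example.com", 443),      # vCenter (placeholder hostname)
    ("nsx-manager.example.com", 443),  # NSX-T Management Cluster VIP (placeholder)
    ("10.172.1.4", 22),                # example Kubernetes node address (placeholder)
]

def is_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in ENDPOINTS:
    status = "ok" if is_reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```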

Plan Deployment Topology

Review the Deployment Topologies for Enterprise PKS on vSphere with NSX-T. The most common deployment topology is the NAT topology. Decide which deployment topology you will implement, and plan accordingly.

Plan Network CIDRs

Before you install Enterprise PKS on vSphere with NSX-T, plan the CIDRs and IP blocks that your deployment will use.

Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation. A sizing sketch follows the list.

  • VTEP CIDRs: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs.

  • PKS MANAGEMENT CIDR: This small network is used to access Enterprise PKS management components such as Ops Manager, BOSH Director, the PKS Service VM, and the Harbor Registry VM (if deployed). For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the No-NAT deployment topologies, this is a corporate routable /28 subnet. For the NAT deployment topology, this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the Enterprise PKS management components.

  • PKS LB CIDR: This network provides the load balancing address space for each Kubernetes cluster created by Enterprise PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, a CIDR of 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Creating NSX-T Objects for Enterprise PKS, or when services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the Enterprise PKS tile.
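The host counts quoted above can be verified with Python's standard ipaddress module. This is a minimal sketch using the example CIDRs from the list; substitute your own ranges.

```python
# Sanity-check CIDR sizing for the example networks above.
import ipaddress

for cidr in ("192.168.1.0/24", "10.172.1.0/28", "10.172.2.0/24"):
    net = ipaddress.ip_network(cidr)
    # num_addresses counts every address in the block; subtracting the
    # network and broadcast addresses gives the usable host count.
    print(f"{cidr}: {net.num_addresses} total, {net.num_addresses - 2} usable")
```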

Plan IP Blocks

When you install Enterprise PKS on NSX-T, you are required to specify the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the Enterprise PKS tile. These IDs map to the two IP blocks you must configure in NSX-T: the Pods IP Block for Kubernetes pods, and the Nodes IP Block for Kubernetes nodes (VMs). For more information, see the Networking section of Installing Enterprise PKS on vSphere with NSX-T Integration.

Required IP Blocks for NSX-T

Pods IP Block

Each time a Kubernetes namespace is created, a /24 subnet is allocated from the Pods IP Block, which means a maximum of 256 pods can be created per namespace. When Enterprise PKS deploys a Kubernetes cluster, 3 namespaces are created by default, and operators often create additional namespaces to facilitate cluster use. As a result, when creating the Pods IP Block, you must use a CIDR range larger than /24 so that NSX-T has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see Creating NSX-T Objects for Enterprise PKS.
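To see how far a Pods IP Block stretches, the sketch below carves a hypothetical /16 block into the /24 per-namespace subnets described above. The 172.16.0.0/16 base address is illustrative only.

```python
# How many /24 pod subnets (one per Kubernetes namespace) fit in a /16 block.
import ipaddress

pods_block = ipaddress.ip_network("172.16.0.0/16")
namespace_subnets = list(pods_block.subnets(new_prefix=24))
print(f"{pods_block} yields {len(namespace_subnets)} namespace subnets")  # 256
print("first allocation:", namespace_subnets[0])                          # 172.16.0.0/24
```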


Note: By default, Pods IP Block is a block of non-routable, private IP addresses. After you deploy Enterprise PKS, you can define a network profile that specifies a routable IP block for your pods. The routable IP block overrides the default non-routable Pods IP Block when a Kubernetes cluster is deployed using that network profile. For more information, see Routable Pods in Using Network Profiles (NSX-T Only).

Nodes IP Block

Each Kubernetes cluster deployed by Enterprise PKS owns a /24 subnet from the Nodes IP Block. To deploy multiple Kubernetes clusters, create a Nodes IP Block larger than /24 and enter its ID in the Networking pane of the Enterprise PKS tile. The recommended size is /16. For more information, see Creating NSX-T Objects for Enterprise PKS.


Note: You can use a smaller Nodes IP Block in no-NAT environments with a limited number of routable subnets. For example, a /20 block allows up to 16 Kubernetes clusters to be created.
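Because each cluster consumes one /24, the maximum cluster count follows directly from the block's prefix length. A quick sketch (the 10.1.0.0 base address is illustrative):

```python
# Each cluster takes one /24, so a /N Nodes IP Block supports 2**(24 - N) clusters.
import ipaddress

for prefix in (16, 20):
    block = ipaddress.ip_network(f"10.1.0.0/{prefix}")
    print(f"{block} supports up to {2 ** (24 - prefix)} Kubernetes clusters")
```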

Reserved IP Blocks

Enterprise PKS reserves several CIDR blocks and IP addresses for internal use. When deploying Enterprise PKS, do not use these CIDR blocks or IP addresses.

  • Each Kubernetes cluster uses the 10.100.200.0/24 subnet for Kubernetes services. Do not use this IP range for the Nodes IP Block.
  • Docker is installed on each Enterprise PKS worker node, and its bridge network is assigned the 172.17.0.0/16 range. Do not use this CIDR range for any Enterprise PKS component, including Ops Manager, BOSH Director, the PKS API VM, the PKS DB VM, and the Harbor Registry VM.
  • The Enterprise PKS Management Console runs the Docker daemon and reserves for it the 172.18.0.0/16 subnet and the gateway 172.18.0.1. Do not use this CIDR range or IP address unless you customize them during OVA configuration.
  • The Harbor Registry reserves IP blocks in the 172.20.0.0/16 range. Do not use this CIDR range unless you change it in the Harbor tile configuration.

Note: The Enterprise PKS Management Console VM also reserves an unused docker0 interface on 172.17.0.0/16. This interface cannot be customized.

The table below summarizes the IP blocks reserved by Enterprise PKS. Do not use them when deploying Enterprise PKS components; the sketch after the table checks planned CIDRs for overlaps.

IP Address Block | Reserved For   | Component              | Customizable
---------------- | -------------- | ---------------------- | ------------------------------
10.100.200.0/24  | Kubernetes services | Nodes IP Block    | No
172.17.0.0/16    | Docker daemon  | Worker node VM         | No
172.17.0.0/16    | Docker daemon  | Management Console VM  | No (unused)
172.18.0.0/16    | Docker daemon  | Management Console VM  | Yes (OVA configuration)
172.18.0.1       | Docker gateway | Management Console VM  | Yes (OVA configuration)
172.20.0.0/16    | Docker daemon  | Harbor Registry VM     | Yes (Harbor tile > Networking)
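A minimal sketch for checking a deployment plan against these reserved blocks. The planned networks below are hypothetical examples; substitute your own values.

```python
# Flag any planned CIDR that overlaps a reserved block from the table above.
import ipaddress

RESERVED = [ipaddress.ip_network(c) for c in
            ("10.100.200.0/24", "172.17.0.0/16", "172.18.0.0/16", "172.20.0.0/16")]

planned = {
    "PKS MANAGEMENT CIDR": "10.172.1.0/28",  # placeholder values
    "PKS LB CIDR": "10.172.2.0/24",
    "Nodes IP Block": "10.1.0.0/16",
    "Pods IP Block": "172.16.0.0/16",
}

for name, cidr in planned.items():
    net = ipaddress.ip_network(cidr)
    clashes = [str(r) for r in RESERVED if net.overlaps(r)]
    print(f"{name} ({cidr}):", "OVERLAPS " + ", ".join(clashes) if clashes else "ok")
```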

Gather Other Required IP Addresses

To install Enterprise PKS on vSphere with NSX-T, gather the following information; a planning-template sketch follows the list:

  • Subnet name where you will install Enterprise PKS
  • VLAN ID for the subnet
  • CIDR for the subnet
  • Netmask for the subnet
  • Gateway for the subnet
  • DNS server for the subnet
  • NTP server for the subnet
  • IP address and CIDR you plan to use for the NSX-T Tier-0 Router
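The sketch below captures these values in one place and runs two basic consistency checks. Every value shown is a placeholder, not a recommendation.

```python
# Record the required values and verify the gateway and netmask match the CIDR.
import ipaddress

plan = {
    "subnet_name": "pks-mgmt",        # placeholder
    "vlan_id": 100,                   # placeholder
    "cidr": "10.172.1.0/28",
    "netmask": "255.255.255.240",
    "gateway": "10.172.1.1",
    "dns": "10.172.40.1",
    "ntp": "10.172.40.2",
    "t0_router": "10.172.3.1/24",     # NSX-T Tier-0 Router IP and CIDR
}

net = ipaddress.ip_network(plan["cidr"])
assert ipaddress.ip_address(plan["gateway"]) in net, "gateway outside the subnet CIDR"
assert str(net.netmask) == plan["netmask"], "netmask does not match the CIDR prefix"
print("management subnet plan is internally consistent")
```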

Please send any feedback you have to pks-feedback@pivotal.io.