Network Planning for Installing Enterprise PKS with NSX-T
Warning: Pivotal Container Service (PKS) v1.5 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
Before you install Enterprise Pivotal Container Service (Enterprise PKS) on vSphere with NSX-T integration, you must plan the environment as described in this topic.
Familiarize yourself with the following VMware documentation:
- vSphere, vCenter, vSAN, and ESXi documentation
- NSX-T Data Center documentation
- NSX Container Plugin (NCP) documentation
Review the following Enterprise PKS documentation:
- vSphere with NSX-T Version Requirements
- Hardware Requirements for Enterprise PKS on vSphere with NSX-T
- Firewall Ports and Protocols Requirements for vSphere with NSX-T
- Network Objects Created by NSX-T for Enterprise PKS
Enterprise PKS on vSphere with NSX-T requires the following component interactions:
- vCenter, NSX-T Manager Nodes, NSX-T Edge Nodes, and ESXi hosts must be able to communicate with each other.
- The BOSH Director VM must be able to communicate with vCenter and the NSX-T Management Cluster.
- The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
- Each Enterprise PKS-provisioned Kubernetes cluster deploys the NSX-T Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
- NCP runs as a BOSH-managed process on the Kubernetes master node. In a multi-master deployment, the NCP process runs on all master nodes, but is active only on one master node. If the NCP process on an active master is unresponsive, BOSH activates another NCP process.
Review the Deployment Topologies for Enterprise PKS on vSphere with NSX-T. The most common deployment topology is the NAT topology. Decide which deployment topology you will implement, and plan accordingly.
Before you install Enterprise PKS on vSphere with NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.
Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware NSX-T documentation.
VTEP CIDRs: One or more of these networks host the GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 provides 254 usable IPs.
PKS MANAGEMENT CIDR: This small network is used to access Enterprise PKS management components such as Ops Manager, BOSH Director, the PKS Service VM, and the Harbor Registry VM (if deployed). For example, a CIDR of 10.172.1.0/28 provides 14 usable IPs. For the No-NAT deployment topologies, this is a routable /28 subnet on the corporate network. For the NAT deployment topology, this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the Enterprise PKS management components.
PKS LB CIDR: This network provides the load balancing address space for each Kubernetes cluster created by Enterprise PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, 10.172.2.0/24 provides 254 usable IPs. This network is used when creating the ip-pool-vips described in Creating NSX-T Objects for Enterprise PKS, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the Enterprise PKS tile.
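As a quick sanity check on these sizes, the Python standard library ipaddress module can compute the usable host count for each example CIDR above. The addresses here are the illustrative ones from this topic, not required values; substitute the ranges you plan to use:

```python
import ipaddress

# Example CIDRs from this topic; replace with your planned ranges.
networks = {
    "VTEP": "192.168.1.0/24",
    "PKS MANAGEMENT": "10.172.1.0/28",
    "PKS LB": "10.172.2.0/24",
}

for name, cidr in networks.items():
    net = ipaddress.ip_network(cidr)
    # Usable hosts exclude the network and broadcast addresses.
    print(f"{name}: {cidr} -> {net.num_addresses - 2} usable IPs")
```

Running this prints 254 usable IPs for each /24 and 14 for the /28, matching the figures above.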
When you install Enterprise PKS on NSX-T, you are required to specify the Pods IP Block ID and Nodes IP Block ID in the Networking pane of the Enterprise PKS tile. These IDs map to the two IP blocks you must configure in NSX-T: the Pods IP Block for Kubernetes pods, and the Nodes IP Block for Kubernetes nodes (VMs). For more information, see the Networking section of Installing Enterprise PKS on vSphere with NSX-T Integration.
Each time a Kubernetes namespace is created, a subnet from the Pods IP Block is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace. When Enterprise PKS deploys a Kubernetes cluster, three namespaces are created by default, and operators often create additional namespaces to facilitate cluster use. As a result, when creating the Pods IP Block, you must use a CIDR range larger than /24 so that NSX-T has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see Creating NSX-T Objects for Enterprise PKS.
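Because NSX-T carves one /24 out of the Pods IP Block per namespace, the block's prefix length directly caps the total number of namespaces across all clusters. A minimal sketch, assuming a hypothetical 172.16.0.0/16 block:

```python
import ipaddress

# Hypothetical non-routable Pods IP Block; /16 is the recommended size.
pods_block = ipaddress.ip_network("172.16.0.0/16")

# NSX-T allocates one /24 subnet per Kubernetes namespace.
namespaces = 2 ** (24 - pods_block.prefixlen)
print(namespaces)  # 256 namespaces before the block is exhausted
```

A /24 block, by contrast, would yield exactly one namespace, which is why a larger range is required.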
Note: By default, Pods IP Block is a block of non-routable, private IP addresses. After you deploy Enterprise PKS, you can define a network profile that specifies a routable IP block for your pods. The routable IP block overrides the default non-routable Pods IP Block when a Kubernetes cluster is deployed using that network profile. For more information, see Routable Pods in Using Network Profiles (NSX-T Only).
Each Kubernetes cluster deployed by Enterprise PKS owns a /24 subnet. To deploy multiple Kubernetes clusters, the Nodes IP Block that you specify in the Nodes IP Block ID field in the Networking pane of the Enterprise PKS tile must be larger than /24. The recommended size is /16. For more information, see Creating NSX-T Objects for Enterprise PKS.
Note: You can use a smaller Nodes IP Block size for no-NAT environments with a limited number of routable subnets. For example, a /20 block allows up to 16 Kubernetes clusters to be created.
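The same arithmetic applies to the Nodes IP Block: each cluster consumes one /24, so the block size caps the cluster count. A sketch using a hypothetical /20 block, as in the note above:

```python
import ipaddress

# Hypothetical routable Nodes IP Block for a no-NAT environment.
nodes_block = ipaddress.ip_network("10.40.0.0/20")

# Each Enterprise PKS cluster takes one /24 subnet from this block.
max_clusters = 2 ** (24 - nodes_block.prefixlen)
print(max_clusters)  # 16 clusters
```

With the recommended /16 block, the same calculation yields 256 clusters.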
The Enterprise PKS Management Plane must not use the 172.17.0.0/16 subnet. This restriction applies to all virtual machines (VMs) deployed during the Enterprise PKS installation process, including the PKS control plane, Ops Manager, BOSH Director, and Harbor Registry.
In addition, do not use any of the IP blocks listed below for Kubernetes master or worker node VMs, or for Kubernetes pods. If you create Kubernetes clusters with any of the blocks listed below, the Kubernetes worker nodes cannot reach Harbor or internal Kubernetes services.
The Docker daemon on each Kubernetes worker node uses the following CIDR range. Do not use IP addresses in this range:
If Enterprise PKS is deployed with Harbor v1.9.3 or v1.9.4, also do not use IP addresses in the following CIDR ranges. Harbor v1.9.3 and v1.9.4 use these for internal Docker bridges:
Each Kubernetes cluster uses the following subnet for Kubernetes services. Do not use this IP block for the Nodes IP Block:
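One way to guard against the reserved ranges above is to check a candidate IP block for overlap before entering it in the tile. A minimal sketch seeded only with the 172.17.0.0/16 Docker daemon subnet named earlier; extend the reserved list with the Harbor and Kubernetes services ranges that apply to your deployment:

```python
import ipaddress

# Ranges a candidate block must not overlap. 172.17.0.0/16 is used by the
# Docker daemon on worker nodes; add the other reserved ranges listed above.
reserved = [ipaddress.ip_network("172.17.0.0/16")]

def is_safe(cidr: str) -> bool:
    """Return True if the candidate block avoids all reserved ranges."""
    candidate = ipaddress.ip_network(cidr)
    return not any(candidate.overlaps(r) for r in reserved)

print(is_safe("172.17.4.0/24"))  # False: inside the Docker daemon subnet
print(is_safe("10.40.0.0/20"))   # True
```

This catches partial overlaps as well as exact matches, since ipaddress.ip_network.overlaps is true whenever the two networks share any address.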
To install Enterprise PKS on vSphere with NSX-T, you will need to know the following:
- Subnet name where you will install Enterprise PKS
- VLAN ID for the subnet
- CIDR for the subnet
- Netmask for the subnet
- Gateway for the subnet
- DNS server for the subnet
- NTP server for the subnet
- IP address and CIDR you plan to use for the NSX-T Tier-0 Router