Prerequisites for a Bring Your Own Topology Deployment to NSX-T Data Center

Note: As of v1.8, Enterprise PKS has been renamed to VMware Tanzu Kubernetes Grid Integrated Edition. Some screenshots in this documentation do not yet reflect the change.

A bring your own topology (BYOT) environment is an NSX-T Data Center instance that you have fully configured yourself for use with Tanzu Kubernetes Grid Integrated Edition, for example an NSX-T Data Center instance that you used in a previous deployment of Tanzu Kubernetes Grid Integrated Edition. The following objects must be in place before you start a production deployment; a verification sketch follows the list.

  • 3 NSX Manager Nodes deployed
  • NSX Management Cluster formed
  • Virtual IP address assigned for Management Cluster or load balancer
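
You can confirm that the NSX Management Cluster is formed and stable by querying the NSX Manager REST API. The following is a minimal sketch using the Python requests library; the manager address nsx.example.com and the admin credentials are placeholders, and the response fields follow the NSX-T Manager API call GET /api/v1/cluster/status, which may vary slightly between NSX-T versions.

    # Minimal sketch: confirm the NSX Management Cluster is formed and stable.
    # nsx.example.com and the credentials below are placeholders.
    import requests

    NSX_MANAGER = "https://nsx.example.com"   # management cluster VIP or load balancer address
    AUTH = ("admin", "changeme")              # replace with real credentials

    resp = requests.get(f"{NSX_MANAGER}/api/v1/cluster/status", auth=AUTH, verify=False)
    resp.raise_for_status()

    mgmt = resp.json().get("mgmt_cluster_status", {})
    print("Management cluster status:", mgmt.get("status"))
    print("Online manager nodes:", len(mgmt.get("online_nodes", [])))

In production, replace verify=False with the path to the CA bundle that signs the NSX Manager certificate.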

For information about the supported versions of NSX-T Data Center, see the release notes.

General Requirements

  • An active/active Tier-0 Router created.
  • A logical switch on an NSX-T Virtual Distributed Switch (N-VDS) prepared for use by the TKGI management plane. The switch must be connected either directly to the Tier-0 router, or to a Tier-1 router that is itself directly connected to the Tier-0 router.
  • Edge Cluster with at least 2 NSX-T Data Center Edge Nodes deployed in active/standby mode, with connectivity to an uplink network configured.
  • Overlay Transport Zone created, with the edge nodes included.
  • VLAN Transport Zone created, with the edge nodes included.
  • MTU of all transport nodes and physical interfaces configured to 1600 or more.
  • If your NSX-T Data Center environment uses custom certificates, obtain the CA certificate for NSX Manager.
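
You can verify that the transport zones exist and that the Edge Cluster contains at least two Edge Nodes by listing them through the NSX Manager REST API. The following is a minimal sketch using the Python requests library; nsx.example.com and the credentials are placeholders, and the endpoints are the NSX-T Manager API calls GET /api/v1/transport-zones and GET /api/v1/edge-clusters.

    # Minimal sketch: list transport zones and edge clusters to confirm the
    # overlay and VLAN transport zones exist and the Edge Cluster has >= 2 members.
    # nsx.example.com and the credentials below are placeholders.
    import requests

    NSX_MANAGER = "https://nsx.example.com"
    AUTH = ("admin", "changeme")

    def get(path):
        resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    for tz in get("/api/v1/transport-zones").get("results", []):
        print(f"Transport zone {tz['display_name']}: {tz['transport_type']}")

    for ec in get("/api/v1/edge-clusters").get("results", []):
        print(f"Edge cluster {ec['display_name']}: {len(ec.get('members', []))} edge node(s)")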

Notes: Do not use the network on which you deploy the Tanzu Kubernetes Grid Integrated Edition Management Console VM as the network for the management plane when you deploy Tanzu Kubernetes Grid Integrated Edition. Using the same network for the management console VM and the management plane requires additional NSX-T Data Center configuration and is not recommended.

If NSX-T Data Center uses custom certificates and you do not provide the CA certificate for NSX Manager, Tanzu Kubernetes Grid Integrated Edition Management Console automatically generates one and registers it with NSX Manager. This can cause other services that are integrated with NSX Manager not to function correctly.

In BYOT mode, Tanzu Kubernetes Grid Integrated Edition Management Console automatically retrieves the Tier-0 HA mode from your NSX-T Data Center environment and creates NAT rules on the Tier-0 or Tier-1 router.
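
If you want to inspect the NAT rules that the management console created, you can list them on the relevant router through the NSX Manager REST API. The following is a minimal sketch using the Python requests library; nsx.example.com, the credentials, and the router ID are placeholders, and the endpoint is the NSX-T Manager API call GET /api/v1/logical-routers/<router-id>/nat/rules.

    # Minimal sketch: list NAT rules on a logical router.
    # nsx.example.com, the credentials, and ROUTER_ID below are placeholders.
    import requests

    NSX_MANAGER = "https://nsx.example.com"
    AUTH = ("admin", "changeme")
    ROUTER_ID = "replace-with-tier0-or-tier1-router-id"

    resp = requests.get(
        f"{NSX_MANAGER}/api/v1/logical-routers/{ROUTER_ID}/nat/rules",
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()
    for rule in resp.json().get("results", []):
        print(rule.get("action"), rule.get("match_source_network"),
              "->", rule.get("translated_network"))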

If you are deploying Tanzu Kubernetes Grid Integrated Edition in a multiple-tier0 topology, additional post-deployment configuration of the management console VM is required. For information, see Tanzu Kubernetes Grid Integrated Edition Management Console Cannot Retrieve Cluster Data in a Multi-Tier0 Topology in Troubleshooting the Management Console.

NSX-T Data Center Configuration Requirements

  • Virtual IP for the Tier-0 Router configured
  • Floating IP Pool configured
  • Pod IP Block ID created
  • Node IP Block ID created
  • Logical Switch configured for TKGI Management Plane
  • Tier-1 Router configured and connected to the Tier-0 Router
  • Routing for TKGI Floating IPs configured to point to the Tier-0 HA Virtual IP
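
When you configure Tanzu Kubernetes Grid Integrated Edition, you supply the IDs of the Pod IP Block, Node IP Block, and Floating IP Pool listed above. The following is a minimal sketch that retrieves those IDs through the NSX Manager REST API using the Python requests library; nsx.example.com and the credentials are placeholders, and the endpoints are the NSX-T Manager API calls GET /api/v1/pools/ip-blocks and GET /api/v1/pools/ip-pools.

    # Minimal sketch: list IP blocks and IP pools so you can record the IDs of
    # the Pod IP Block, Node IP Block, and Floating IP Pool.
    # nsx.example.com and the credentials below are placeholders.
    import requests

    NSX_MANAGER = "https://nsx.example.com"
    AUTH = ("admin", "changeme")

    def get(path):
        resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    for block in get("/api/v1/pools/ip-blocks").get("results", []):
        print(f"IP block {block['display_name']}: id={block['id']} cidr={block.get('cidr')}")

    for pool in get("/api/v1/pools/ip-pools").get("results", []):
        print(f"IP pool {pool['display_name']}: id={pool['id']}")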

Proof-of-Concept Deployments

The requirements above apply to production environments. For proof-of-concept deployments, a single NSX Manager node is sufficient, and the NSX Management Cluster and load balancer are optional.


Please send any feedback you have to pks-feedback@pivotal.io.