Prerequisites for a Bring Your Own Topology Deployment to NSX-T Data Center
A bring your own topology environment is an NSX-T Data Center instance that you have fully configured yourself for use with Enterprise PKS, for example an instance that you used in a previous Enterprise PKS deployment. The following objects must be in place before you start a production deployment.
- 3 NSX Manager Nodes deployed
- NSX Management Cluster formed
- Virtual IP address assigned to the NSX Management Cluster, or a load balancer configured in front of the cluster
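As a quick sanity check before proceeding, you can query the NSX-T management API for the state of the Management Cluster. This is a sketch only: the hostname and credentials below are placeholders for your environment, and `-k` skips certificate verification, which is acceptable only for ad hoc checks.

```shell
# Placeholder hostname and credentials -- substitute your own values.
NSX_MANAGER="nsx-manager.example.com"

# The cluster status endpoint reports the overall state of the
# NSX Management Cluster; for a production deployment you expect
# a stable cluster with three members.
curl -k -u 'admin:password' \
  "https://${NSX_MANAGER}/api/v1/cluster/status"
```

Inspect the JSON response for a stable cluster status and three cluster members before continuing.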
For information about the supported versions of NSX-T Data Center, see the release notes.
- An active/standby Tier-0 Router created
- Logical switch prepared under the Tier-0 Router on an NSX-T Virtual Distributed Switch (N-VDS), for use by the PKS management plane
- Edge Cluster with at least 2 NSX-T Data Center Edge Nodes deployed in active/standby mode, with connectivity to an uplink network configured
- Overlay Transport Zone created, with the edge nodes included
- VLAN Transport Zone created, with the edge nodes included
- MTU of all transport nodes and physical interfaces configured to 1600 or more
- If your NSX-T Data Center environment uses custom certificates, obtain the CA certificate for NSX Manager
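The transport zone, transport node, and certificate prerequisites above can be spot-checked from the command line. The following is a sketch, assuming placeholder hostname and credentials; the API calls are read-only.

```shell
# Placeholder hostname and credentials -- substitute your own values.
NSX_MANAGER="nsx-manager.example.com"

# Confirm that both an Overlay and a VLAN Transport Zone exist:
curl -k -u 'admin:password' \
  "https://${NSX_MANAGER}/api/v1/transport-zones"

# List transport nodes to verify that the Edge Nodes are included in
# the transport zones, and cross-check that MTU settings are 1600 or more:
curl -k -u 'admin:password' \
  "https://${NSX_MANAGER}/api/v1/transport-nodes"

# If NSX Manager uses a custom certificate, display the certificate
# chain it presents; the CA certificate in this chain is what you
# provide to Enterprise PKS:
openssl s_client -connect "${NSX_MANAGER}:443" -showcerts </dev/null
```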
Notes: Do not use the network on which you deploy the Enterprise PKS Management Console appliance VM as the network for the management plane when you deploy Enterprise PKS. Using the same network for the appliance VM and the management plane requires additional NSX-T Data Center configuration and is not recommended.
If NSX-T Data Center uses custom certificates and you do not provide the CA certificate for NSX Manager, Enterprise PKS Management Console automatically generates one and registers it with NSX Manager. This can cause other services that are integrated with NSX Manager not to function correctly.
In a bring your own topology deployment, Enterprise PKS Management Console automatically retrieves the Tier-0 HA mode from your NSX-T Data Center environment and creates the NAT rules on the Tier-0 or Tier-1 Router accordingly.
If you are deploying Enterprise PKS in a multiple-tier0 topology, additional post-deployment configuration of the management console VM is required. For information, see Enterprise PKS Management Console Cannot Retrieve Cluster Data in a Multi-Tier0 Topology in Troubleshooting Enterprise PKS Management Console.
- Virtual IP for the Tier-0 Router configured
- Floating IP Pool configured
- Pod IP Block created (you provide its ID during deployment)
- Node IP Block created (you provide its ID during deployment)
- Logical Switch configured for PKS Management Plane
- Tier-1 Router configured and connected to the Tier-0 Router
- Routing for PKS Floating IPs configured to point to the Tier-0 HA Virtual IP
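If the IP blocks and Floating IP Pool listed above do not yet exist, they can be created through the NSX-T management API. The following is a minimal sketch: the hostname, credentials, display names, and CIDR ranges are all placeholders to adapt to your environment.

```shell
# Placeholder hostname and credentials -- substitute your own values.
NSX_MANAGER="nsx-manager.example.com"

# Create a Pod IP Block; the "id" field in the response is the
# Pod IP Block ID you supply during deployment. (Repeat with a
# different display name and CIDR for the Node IP Block.)
curl -k -u 'admin:password' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "pks-pod-ip-block", "cidr": "172.16.0.0/16"}' \
  "https://${NSX_MANAGER}/api/v1/pools/ip-blocks"

# Create the Floating IP Pool from a routable subnet:
curl -k -u 'admin:password' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "pks-floating-ip-pool",
       "subnets": [{"cidr": "10.40.14.0/24",
                    "allocation_ranges": [{"start": "10.40.14.10",
                                           "end": "10.40.14.250"}]}]}' \
  "https://${NSX_MANAGER}/api/v1/pools/ip-pools"
```

Record the IDs returned by these calls; you enter them when you deploy Enterprise PKS.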
The requirements above apply to production environments. For proof-of-concept deployments, a single NSX Manager node is sufficient, and the NSX Management Cluster and load balancer are optional.