Installing and Configuring NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition

This topic provides instructions for installing and configuring NSX-T Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition on vSphere.

Prerequisites for Installing NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition

To perform a new installation of NSX-T Data Center for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.

  1. Read the Release Notes for the target TKGI version you are installing and verify NSX-T v3.0 support.

  2. Read the topics in the Preparing to Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Data Center section of the documentation.

Install the NSX-T Management Hosts

Create the NSX-T Management cluster by installing 3 NSX Manager appliances and configuring a VIP address.

NSX-T Management Cluster

Deploy NSX-T Manager 1

Deploy the NSX-T Manager OVA in vSphere. Download the OVA from the VMware software download site.

  1. Using the vSphere Client, right-click the vCenter cluster and select Deploy OVF Template.
  2. At the Select an OVF Template screen, browse to and select the NSX Unified Appliance OVA file.
  3. At the Select a name and folder screen, select the target Datacenter object.
  4. At the Select a compute resource screen, select the target vCenter cluster.
  5. Review the details.
  6. At the Configuration screen, select at least Medium for the configuration size.
  7. At the Select storage screen, choose Thin Provision and the desired datastore.
  8. For Network1, enter the VLAN management network, such as PG-MGMT-VLAN-1548.
  9. Enter strong passwords for all user types.
  10. Enter the hostname, such as nsx-manager-1.
  11. Enter the rolename, such as NSX Manager.
  12. Enter the Gateway IP address, such as 10.173.62.253.
  13. Enter a public IP address for the VM, such as 10.173.62.44.
  14. Enter the Netmask, such as 255.255.255.0.
  15. Enter the DNS server, such as 10.172.40.1.
  16. Enter the NTP server, such as 10.113.60.176.
  17. Enable the Enable SSH checkbox.
  18. Enable the Allow SSH root logins checkbox.
  19. Click Finish, and NSX-T Manager 1 starts deploying.
  20. Monitor the deployment using the Recent Tasks pane.
  21. When the deployment completes, select the VM and power it on.
  22. Access the NSX-T Manager 1 web console by navigating to the URL, such as: https://10.173.62.44/.
  23. Log in and verify the installation. Note the system message that a “3 node cluster” is recommended.

Add vCenter as the Compute Manager

A compute manager is required for NSX-T environments with multiple NSX-T Manager nodes. A compute manager is an application that manages resources such as hosts and VMs. For TKGI we use the vCenter Server as the compute manager.

Complete the following steps to add vCenter as the Compute Manager. For additional guidance, refer to the NSX-T documentation.

  1. In the NSX Management console, navigate to System > Appliances.
  2. Select Compute Managers.
  3. Click Add.
  4. Enter a Name, such as vCenter.
  5. Enter an IP address, such as 10.173.62.43.
  6. Enter the vCenter username, such as administrator@vsphere.local.
  7. Set the Enable Trust toggle to Yes.
  8. Click Add.
  9. Click Add again at the thumbprint warning.
  10. Verify that the Compute Manager is added and registered.
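If you script your installs, the same registration can be done through the NSX-T REST API. The sketch below builds the request body, validates it locally, and prints the curl call for review rather than executing it. The manager address, vCenter address, and credentials are placeholders, and the endpoint shown (POST /api/v1/fabric/compute-managers) should be checked against the API guide for your NSX-T version.

```shell
#!/bin/sh
# Sketch: register vCenter as a compute manager through the NSX-T API.
# All addresses and credentials below are placeholders.
NSX_MANAGER=10.173.62.44
VCENTER_IP=10.173.62.43
VCENTER_USER='administrator@vsphere.local'
VCENTER_PASSWORD='VCENTER-PASSWORD'   # placeholder

# Build the request body. The thumbprint is omitted in this sketch;
# confirm it as in the UI thumbprint warning step.
cat > compute-manager.json <<EOF
{
  "server": "${VCENTER_IP}",
  "origin_type": "vCenter",
  "display_name": "vCenter",
  "credential": {
    "credential_type": "UsernamePasswordLoginCredential",
    "username": "${VCENTER_USER}",
    "password": "${VCENTER_PASSWORD}"
  }
}
EOF

# Validate the JSON locally before sending anything.
python3 -m json.tool compute-manager.json > /dev/null && echo "payload: OK"

# Print the call for review; run it manually once the values are correct.
echo "curl -k -u admin -X POST -H 'Content-Type: application/json'" \
     "-d @compute-manager.json https://${NSX_MANAGER}/api/v1/fabric/compute-managers"
```

Reviewing the printed command before running it avoids registering the compute manager against the wrong vCenter.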

Deploy NSX-T Manager 2

Use the NSX-T Management Console to deploy an additional NSX-T Manager node as part of the NSX-T Management layer. For more information, refer to the NSX-T documentation.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Select Add NSX Appliance.
  3. Enter a hostname, such as nsx-manager-2.
  4. Enter the Management IP/netmask, such as 10.173.62.45/24.
  5. Enter the Gateway, such as 10.173.62.253.
  6. For the Node size, choose medium.
  7. For the Compute Manager, select vCenter.
  8. For the Compute Cluster, enter MANAGEMENT-cluster.
  9. For the Datastore, select the datastore, such as datastore2.
  10. For the Virtual Disk Format, select thin provision.
  11. For the Network, select the VLAN management network, such as PG-MGMT-VLAN-1548.
  12. Select Enable SSH.
  13. Select Enable root access.
  14. Enter a strong password.
  15. Click Install Appliance.
  16. Verify that the NSX-T Manager 2 appliance is added.

Deploy NSX-T Manager 3

Use the NSX-T Management Console to deploy a third NSX-T Manager node as part of the NSX-T Management layer. For more information, refer to the NSX-T documentation.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Select Add NSX Appliance.
  3. Enter a hostname, such as nsx-manager-3.
  4. Enter the Management IP/netmask, such as 10.173.62.46/24.
  5. Enter the Gateway, such as 10.173.62.253.
  6. For the Node size, choose medium.
  7. For the Compute Manager, select vCenter.
  8. For the Compute Cluster, enter MANAGEMENT-cluster.
  9. For the Datastore, select the datastore, such as datastore2.
  10. For the Virtual Disk Format, select thin provision.
  11. For the Network, select the VLAN management network, such as PG-MGMT-VLAN-1548.
  12. Select Enable SSH.
  13. Select Enable root access.
  14. Enter a strong password.
  15. Click Install Appliance.
  16. Verify that the NSX-T Manager 3 appliance is added.

Configure the NSX-T Management VIP

The NSX-T Management layer includes three NSX-T Manager nodes. To support a single access point, assign a virtual IP address (VIP) to the NSX-T Management layer. Once the VIP is assigned, all UI and API requests to NSX-T are redirected to the virtual IP address of the cluster, which is owned by the leader node. The leader node then forwards the requests to the other components of the appliance.

Using a VIP makes the NSX Management Cluster highly available. If you need to scale, an alternative to the VIP is to provision a load balancer for the NSX-T Management Cluster. Provisioning a load balancer requires that NSX-T be fully installed and configured. It is recommended that you configure the VIP now, then install a load balancer after NSX-T is installed and configured, and only if needed.

Complete the following instructions to create a VIP for the NSX Management Cluster. The IP address you use for the VIP must be part of the same subnet as the NSX-T Management nodes.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Click the Set Virtual IP button.
  3. Enter a Virtual IP address, such as 10.173.62.47.
  4. Verify that the VIP is added.
  5. Access the NSX-T Management console using the VIP, such as https://10.173.62.47/login.jsp.
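Setting the VIP can also be scripted. The following sketch prints the NSX-T API calls that set and read back the cluster VIP (the api-virtual-ip endpoint); the manager address, VIP, and admin credentials are placeholders, so review the printed commands before running them against your environment.

```shell
#!/bin/sh
# Sketch: set and read back the cluster VIP through the NSX-T API instead
# of the UI. Addresses are placeholders -- adjust before running.
NSX_MANAGER=10.173.62.44   # any one of the three manager nodes
VIP=10.173.62.47

# Print the calls for review; run them manually against your environment.
echo "curl -k -u admin -X POST \"https://${NSX_MANAGER}/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=${VIP}\""
echo "curl -k -u admin \"https://${NSX_MANAGER}/api/v1/cluster/api-virtual-ip\""
```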

Enable the NSX-T Manager Interface

The NSX Management Console provides two user interfaces: Policy and Manager. TKGI requires the Manager interface for configuring its networking and security objects. Do NOT use the Policy interface for TKGI objects.

  1. In the NSX-T Manager console, navigate to System > User Interface Settings.
  2. Click Edit.
  3. For the Toggle Visibility field, select Visible to all Users.
  4. For the Default Mode field, select Manager.
  5. Click Save.
  6. Refresh the NSX-T Manager Console and navigate to an area of the console that is not listed under System.
  7. In the upper-right area of the console, verify that the Manager option is enabled.

Add the NSX-T Manager License

If you do not add the proper NSX-T license, you will receive an error when you try to deploy an Edge Node VM.

  1. In the NSX-T Manager console, navigate to System > Licenses.
  2. Add the NSX Data Center Advanced (CPU) license.
  3. Verify that the license is added.

Generate and Register the NSX-T Management SSL Certificate and Private Key

An SSL certificate is automatically created for each NSX-T Manager. You can verify this by SSHing to one of the NSX-T Manager nodes and running the following command.

nsx-manager-1> get certificate cluster

You will see that the common name (CN) listed in the certificate is the hostname of the appliance, for example CN=nsx-manager-1. This means the cluster certificate is linked to a particular NSX-T Manager, in this case NSX-T Manager 1.

If you go to System > Certificates, you will see there is no certificate for the NSX-T Manager VIP. As such, you need to generate a new SSL certificate that uses the NSX-T Management VIP address so that the cluster certificate contains CN=VIP-ADDRESS.

Complete the following steps to generate and register an SSL certificate and private key that uses the VIP address. The following steps assume that you are working on a Linux host where OpenSSL is installed.

Generate the SSL Certificate and Private Key

  1. Create a certificate signing request file named nsx-cert.cnf and populate it with the contents below.

    [ req ]
    default_bits = 2048
    default_md = sha256
    prompt = no
    distinguished_name = req_distinguished_name
    x509_extensions = SAN
    
    [ req_distinguished_name ]
    countryName = US
    stateOrProvinceName = California
    localityName = CA
    organizationName = NSX
    commonName = NSX-VIP-FQDN     # REPLACE
    
    [ SAN ]
    subjectAltName = DNS:NSX-VIP-FQDN,IP:IP-ADDRESS   # REPLACE
    

    Where:

    • NSX-VIP-FQDN is your NSX VIP FQDN.
    • IP-ADDRESS is your IP address.
  2. Copy the nsx-cert.cnf file to a machine with openssl if yours doesn’t have it.

  3. Use OpenSSL to generate the SSL certificate and private key.

    openssl req -newkey rsa -nodes -days 1100 -x509 -config nsx-cert.cnf -keyout nsx.key -out nsx.crt
    
  4. Verify that you see the following:

    Generating a 2048 bit RSA private key
    ...............+++
    ................+++
    writing new private key to 'nsx.key'
    
  5. Verify certificate and key generation by running the ls command.

    You should see three files: the certificate signing request configuration, and the certificate and private key generated from it.

    nsx-cert.cnf  nsx.crt  nsx.key
    
  6. Run the following command to verify the certificate and private key.

    openssl x509 -in nsx.crt -text -noout
    

    You should see that the common name (CN) and Subject Alternative Name are both the VIP address. For example:

    Subject: C=US, ST=California, L=CA, O=NSX, CN= myvip.mydomain.com
    ...
    X509v3 extensions:
        X509v3 Subject Alternative Name:
            DNS:myvip.mydomain.com, IP Address:10.11.12.13
    
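The generation steps above can be consolidated into a single script. This is a sketch that assumes a Linux host with OpenSSL; NSX_VIP_FQDN and NSX_VIP_IP are placeholders for your VIP FQDN and address.

```shell
#!/bin/sh
# Consolidated sketch of the certificate generation steps above.
# NSX_VIP_FQDN and NSX_VIP_IP are placeholders.
set -e
NSX_VIP_FQDN=myvip.mydomain.com
NSX_VIP_IP=10.173.62.47

cat > nsx-cert.cnf <<EOF
[ req ]
default_bits = 2048
default_md = sha256
prompt = no
distinguished_name = req_distinguished_name
x509_extensions = SAN

[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = ${NSX_VIP_FQDN}

[ SAN ]
subjectAltName = DNS:${NSX_VIP_FQDN},IP:${NSX_VIP_IP}
EOF

# Generate a self-signed certificate and key valid for ~3 years.
openssl req -newkey rsa:2048 -nodes -days 1100 -x509 \
    -config nsx-cert.cnf -keyout nsx.key -out nsx.crt

# Confirm the CN and SAN contain the VIP before importing into NSX-T.
openssl x509 -in nsx.crt -noout -subject | grep "${NSX_VIP_FQDN}"
openssl x509 -in nsx.crt -noout -text | grep -A1 "Subject Alternative Name" | grep "${NSX_VIP_IP}"
```

With set -e, the script exits nonzero if either grep fails to find the VIP in the certificate, so a wrong CN or SAN is caught before import.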

Import the SSL Certificate and Private Key to the NSX-T Management Console

Import the certificate and private key to NSX-T by completing the following steps. These steps require populating the NSX-T Management Console fields with the certificate and private key. You can copy and paste the contents, or, if you save the nsx.crt and nsx.key files to your local machine, you can import them.

  1. In the NSX-T Management Console, navigate to the System > Certificates page.
  2. Click Import > Import Certificate. The Import Certificate screen is displayed.

    Note: Be sure to select Import Certificate and not Import CA Certificate.

  3. Enter a Name, such as CERT-NSX-T-VIP.
  4. Copy and paste the Certificate Contents from the nsx.crt file. Or, import the nsx.crt file by clicking Browse and selecting it.
  5. Copy and paste the Private Key from the nsx.key file. Or, import the nsx.key file by clicking Browse and selecting it.
  6. For the Service Certificate option, make sure to select No.
  7. Click Import.
  8. Verify that you see the certificate in the list of Certificates.

Register the SSL Certificate and Private Key with the NSX-T API Server

To register the imported VIP certificate with the NSX-T Management Cluster Certificate API, complete the following steps:

  1. In the NSX-T Management Console, navigate to the System > Certificates page.
  2. View the UUID of the certificate on the Certificates screen.
  3. Copy the UUID to the clipboard, such as 170a6d52-5c61-4fef-a9e0-09c6229fe833.
  4. Create the following environment variables. Replace the IP address with your VIP address and the UUID with the UUID of the imported certificate.

    export NSX_MANAGER_IP_ADDRESS=10.173.62.47
    export CERTIFICATE_ID=170a6d52-5c61-4fef-a9e0-09c6229fe833
    
  5. Post the certificate to the NSX-T Manager API.

    curl --insecure -u admin:'VMware1!VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=$CERTIFICATE_ID"
    {
      "certificate_id": "170a6d52-5c61-4fef-a9e0-09c6229fe833"
    }
    
  6. Verify by SSHing to one of the NSX-T Management nodes and running the following command.

    The certificate that is returned should match the generated one.

    nsx-manager-1> get certificate cluster
    
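One way to verify the match is to compare SHA-256 fingerprints rather than eyeballing PEM blocks. The sketch below fingerprints the local nsx.crt and prints an openssl s_client command you can run to fingerprint the certificate the VIP actually serves; the VIP address is a placeholder, and a throwaway certificate is generated only if nsx.crt is missing so the sketch stays runnable.

```shell
#!/bin/sh
# Sketch: verify the cluster is serving the imported certificate by
# comparing SHA-256 fingerprints. NSX_VIP is a placeholder.
set -e
NSX_VIP=10.173.62.47

# Fingerprint the certificate generated earlier. A throwaway self-signed
# certificate is created only if nsx.crt is absent, to keep this runnable.
[ -f nsx.crt ] || openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example-placeholder" -keyout nsx.key -out nsx.crt 2>/dev/null
openssl x509 -in nsx.crt -noout -fingerprint -sha256

# Fetch and fingerprint the certificate the VIP serves, then compare the
# two lines (run manually against your environment):
echo "openssl s_client -connect ${NSX_VIP}:443 </dev/null 2>/dev/null |" \
     "openssl x509 -noout -fingerprint -sha256"
```

If the two fingerprints match, the cluster certificate was replaced successfully.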

Create an IP Pool for VTEP

Tunnel endpoints (TEPs) are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and end the NSX-T encapsulation of overlay frames. The TEP addresses do not need to be routable, so you can use any IP addressing scheme you want. For more information, refer to the NSX-T Data Center documentation.

  1. In the NSX-T Management Console, select the Manager interface (upper right).
  2. Navigate to Networking > IP Address Pool.
  3. Click Add.
  4. Enter a Name, such as TEP-IP-POOL.
  5. Enter an IP range, such as 192.23.213.1 - 192.23.213.10.
  6. Enter a CIDR address, such as 192.23.213.0/24.
  7. Click Add.
  8. Verify that the pool is added.

Create Transport Zones

You need two transport zones for TKGI: an Overlay TZ for Transport Nodes and a VLAN TZ for Edge Nodes. You have two options: use the default transport zones or create custom ones.

By default, NSX-T v3.0 creates two transport zones for you: nsx-overlay-transportzone and nsx-vlan-transportzone. To use the default transport zones with TKGI, when you configure the NSX-T Edge Nodes you MUST use the exact N-VDS switch name defined on the default transport zones, which is nsxHostSwitch.

Alternatively, you can create custom transport zones for the Edge Nodes and use your own switch names, for example:

  • tz-overlay (switch name: switch-overlay), and
  • tz-vlan (switch name: switch-vlan)

The advantage of using the default transport zones is that you only need a single N-VDS for the Edge Nodes. The caveat is that you must specify the exact switch name, nsxHostSwitch. If you are going to use the default transport zones, skip to the next section. To create custom transport zones, follow the steps below.

Note: If you use the default transport zones, but do not use the exact name `nsxHostSwitch` when configuring the Edge Node N-VDS switch, you will receive the `pks-nsx-t-osb-proxy` BOSH error when you try to deploy TKGI. Refer to the NSX-T documentation for more information.

Create the Overlay TZ

  1. In the NSX-T Management Console, navigate to System > Fabric > Transport Zone.
  2. Click Add.
  3. Enter a Name, such as tz-overlay.
  4. Enter a switch name, such as switch-overlay.
  5. For the Traffic Type, select Overlay.
  6. Click Add.
  7. Verify that you see the newly created TZ named tz-overlay in the list.

Create the VLAN TZ

  1. In the NSX-T Management Console, navigate to System > Fabric > Transport Zone.
  2. Click Add.
  3. Enter a name, such as tz-vlan.
  4. Enter a switch name, such as switch-vlan.
  5. For the Traffic Type, select VLAN.
  6. Click Add.
  7. Verify that you see the newly created TZ named tz-vlan in the list.
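If you prefer to script the custom transport zones, they can be created through the NSX-T Manager API. This sketch builds one payload per transport zone, validates the JSON locally, and prints the curl calls for review; the manager address is a placeholder, and the endpoint (POST /api/v1/transport-zones) should be confirmed against your NSX-T 3.0 API guide.

```shell
#!/bin/sh
# Sketch: create the custom transport zones through the NSX-T Manager API.
# The manager address is a placeholder; calls are printed, not executed.
NSX_MANAGER=10.173.62.47

# name:switch-name:traffic-type triples matching the examples above
for tz in tz-overlay:switch-overlay:OVERLAY tz-vlan:switch-vlan:VLAN; do
  name=${tz%%:*}; rest=${tz#*:}; sw=${rest%%:*}; type=${rest#*:}
  cat > "${name}.json" <<EOF
{
  "display_name": "${name}",
  "host_switch_name": "${sw}",
  "transport_type": "${type}"
}
EOF
  # Validate the payload locally, then print the call for review.
  python3 -m json.tool "${name}.json" > /dev/null
  echo "curl -k -u admin -X POST -H 'Content-Type: application/json'" \
       "-d @${name}.json https://${NSX_MANAGER}/api/v1/transport-zones"
done
```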

Configure vSphere Networking for ESXi Hosts

In this section, you configure the vSphere networking and port groups for ESXi hosts (the vSwitch). If you have created separate vSphere clusters for Management and Compute, perform this operation on each ESXi host in the Management cluster. If you have not created separate vSphere clusters, perform this operation on each ESXi host in the cluster.

The following instructions describe how to configure a vSphere Virtual Standard vSwitch (VSS). For production environments, it is recommended that you configure a Virtual Distributed vSwitch (VDS). You configure the VDS from the vCenter Networking tab and then add the ESXi hosts to the VDS. The configuration settings for the VDS are similar to the VSS configuration described below. For instructions on configuring the VDS, see Create a vSphere Distributed Switch in the vSphere 7 documentation.

Refer to the Release Notes for details on TKGI support for vSphere 7 VDS for NSX-T transport node traffic.

Create vSwitch Port-Groups for Edge Nodes

Create vSwitch Port-Groups for the Edge Nodes on the ESXi hosts in the MANAGEMENT-cluster.

For each ESXi host in the MANAGEMENT-cluster, create the following vSwitch Port Groups:

  • EDGE-VTEP-PG: VLAN 3127
  • EDGE-UPLINK-PG: VLAN trunk (All (4095))

  1. Log in to the vCenter Server.
  2. Select the ESXi host in the MANAGEMENT-cluster.
  3. Select Configure > Virtual switches.
  4. Select Add Networking (upper right).
  5. Select the option Virtual Machine Port Group for a Standard Switch and click Next.
  6. Select the existing standard switch named vSwitch0 and click Next.
  7. Enter a Network Label, such as EDGE-VTEP-PG.
  8. Enter a VLAN ID, such as 3127.
  9. Click Finish.
  10. Verify that you see the newly created port group.
  11. Select Add Networking (upper right).
  12. Select the option Virtual Machine Port Group for a Standard Switch and click Next.
  13. Select the existing standard switch named vSwitch0 and click Next.
  14. Enter a Network Label, such as EDGE-UPLINK-PG.
  15. For the VLAN ID, select All (4095) from the drop-down.
  16. Click Finish.
  17. Verify that you see the newly created port group.

Set vSwitch0 with MTU at 9000

For each ESXi host in the MANAGEMENT-cluster, or each ESXi host in the vCenter cluster if you have not created separate Management and Compute clusters, you must enable jumbo frames on the virtual switch, that is, set vSwitch0 with MTU=9000. If you do not do this, overlay traffic will be dropped. The TEP interface for the NSX-T Edge Nodes must be connected to a port group that supports frames larger than 1600 bytes; the default MTU is 1500.

  1. Select the Virtual Switch on each ESXi host in the MANAGEMENT-cluster, or each host in the vCenter cluster.
  2. Click Edit.
  3. For the MTU (bytes) setting, enter 9000.
  4. Click OK to complete the operation.
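The same change can be made from the ESXi shell. This sketch assumes the standard esxcli networking namespace and is guarded so it only attempts the change on an actual ESXi host; run it on each host that carries overlay traffic.

```shell
#!/bin/sh
# Sketch: set vSwitch0 to jumbo MTU from the ESXi shell instead of the
# vSphere Client. Guarded so it only runs esxcli on an actual ESXi host.
if command -v esxcli >/dev/null 2>&1; then
  # Set the MTU and read the setting back to confirm.
  esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
  esxcli network vswitch standard list --vswitch-name=vSwitch0 | grep MTU
  STATUS=applied
else
  STATUS=skipped
  echo "esxcli not found: run this on each ESXi host in the MANAGEMENT-cluster"
fi
```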

Deploy NSX-T Edge Nodes

In this section you deploy two NSX-T Edge Nodes.

NSX Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers. See Load Balancers in Tanzu Kubernetes Grid Integrated Edition for more information.

In NSX-T, a load balancer is deployed on the Edge Nodes as a virtual server. The following virtual servers are required for Tanzu Kubernetes Grid Integrated Edition:

  • 1 TCP Layer 4 virtual server for each Kubernetes service of type:LoadBalancer
  • 2 Layer 7 global virtual servers for Kubernetes pod ingress resources (HTTP and HTTPS)
  • 1 global virtual server for the TKGI API

The number of virtual servers that can be run depends on the size of the load balancer which depends on the size of the Edge Node. Tanzu Kubernetes Grid Integrated Edition supports the medium and large VM Edge Node form factor, as well as the bare metal Edge Node. The default size of the load balancer deployed by NSX-T for a Kubernetes cluster is small. The size of the load balancer can be customized using Network Profiles.

For this installation, we use the Large VM form factor for the Edge Node. See VMware Configuration Maximums for more information.

Install Edge Node 1

Deploy the Edge Node 1 VM using the NSX-T Manager interface.

  1. From your browser, log in with admin privileges to NSX Manager at https://NSX-MANAGER-IP-ADDRESS.

  2. In NSX Manager, go to System > Fabric > Nodes > Edge Transport Nodes.

  3. Click Add Edge VM.

  4. Configure the Edge VM as follows:

    • Name: edge-node-1
    • Host name/FQDN: edge-node-1.lab.com
    • Form Factor: Large
  5. Configure Credentials as follows:

    • CLI User Name: admin
    • CLI Password: Enter a strong password for the admin user that complies with the NSX-T requirements.
    • Enable SSH Login: Yes
    • System Root Password: Enter a strong password for the root user that complies with the NSX-T requirements.
    • Enable Root SSH Login: Yes
    • Audit Credentials: Enter an audit user name and password.
  6. Configure the deployment as follows:

    • Compute Manager: vCenter
    • Cluster: MANAGEMENT-Cluster
    • Datastore: Select the datastore
  7. Configure the node settings as follows:

    • IP Assignment: Static
    • Management IP: 10.173.62.49/24, for example
    • Default Gateway: 10.173.62.253, for example
    • Management Interface: PG-MGMT-VLAN-1548, for example

Configure the N-VDS Switch

The next step is to configure the N-VDS switch and transport zones for the NSX Edge Node.

How you do this differs depending on whether you are using the default or custom transport zones.

  • If you are using the default transport zones, configure a single switch, use the nsxHostSwitch name, and specify both default transport zones, nsx-overlay-transportzone and nsx-vlan-transportzone, in the same N-VDS configuration.
  • If you are using custom transport zones, configure two N-VDS switches and specify the custom transport zone names and the corresponding custom switch name for each.

N-VDS Configuration for Default Transport Zones

Configure the NSX switch for the Edge Node as follows:

  • Edge Switch Name: nsxHostSwitch
  • Transport Zone: Select both of the default transport zones: nsx-overlay-transportzone and nsx-vlan-transportzone
  • Uplink Profile: nsx-edge-single-nic-uplink-profile
  • IP Assignment: Use IP Pool
  • IP Pool: TEP-IP-POOL
  • Uplinks: uplink-1 / EDGE-VTEP-PG

N-VDS Configuration for Custom Transport Zones

Configure the first NSX switch for the Edge Node as follows:

  • Edge Switch Name: switch-overlay (use the exact switch name that was configured for the custom transport zone tz-overlay)
  • Transport Zone: tz-overlay (example custom transport zone)
  • Uplink Profile: nsx-edge-single-nic-uplink-profile
  • IP Assignment: Use IP Pool
  • IP Pool: TEP-IP-POOL
  • Uplinks: uplink-1 / EDGE-VTEP-PG

Configure the second NSX switch for the Edge Node as follows:

  • Click Add Switch (at the top of the dialog)
  • Edge Switch Name: switch-vlan (use the exact switch name that was configured for the custom transport zone tz-vlan)
  • Transport Zone: tz-vlan (example custom transport zone)
  • Uplink Profile: nsx-edge-single-nic-uplink-profile
  • Uplinks: uplink-1 / EDGE-UPLINK-PG

Complete the Edge Node 1 Installation

  1. Click Finish to complete the configuration. The installation begins.

  2. In vCenter, use the Recent Tasks panel at the bottom of the page to verify that you see the Edge Node 1 VM being deployed.

  3. Once the process completes, you should see the Edge Node 1 deployed successfully in NSX-T Manager.

  4. Click the N-VDS link and verify that you see both switches.

  5. In vCenter verify that the Edge Node is created.

Install Edge Node 2

Repeat the same operation for Edge Node 2, and for each additional NSX Edge Node pair you intend to use for Tanzu Kubernetes Grid Integrated Edition.

  1. Install nsx-edge-2 following the same procedure as nsx-edge-1.

    • name: edge-node-2
    • hostname/FQDN: edge-node-2.lab.com, for example
    • Form Factor: Large
    • IP Assignment: Static
    • IP: 10.173.62.58/24, for example
    • GW: 10.173.62.253, for example
    • Management Interface: PG-MGMT-VLAN-1548
    • Edge Switch 1: Name:
      • nsxHostSwitch (if you are using the default TZs)
      • switch-overlay (if you are using a custom TZ)
    • Edge Switch 1: Transport Zone:
      • nsx-overlay-transportzone and nsx-vlan-transportzone (if you are using the default TZs)
      • tz-overlay (if you are using a custom TZ)
    • Uplink Profile: nsx-edge-single-nic-uplink-profile
    • IP Assignment: Use IP Pool
    • IP Pool: TEP-IP-POOL
    • Uplinks: uplink-1 / EDGE-VTEP-PG
    • Edge Switch 2: (only required if you are using a custom TZ)
      • Name: switch-vlan (use the same switch name that was configured for tz-vlan)
      • Transport Zone: tz-vlan
      • Uplink Profile: nsx-edge-single-nic-uplink-profile
      • Uplinks: uplink-1 / EDGE-UPLINK-PG
  2. Once done, you should be able to see both Edge Nodes in NSX Manager.

Create an Uplink Profile for ESXi Host Transport Nodes

To configure the TEP, we used the default profile named nsx-default-uplink-hostswitch-profile. However, because the TEP is on VLAN 3127, you must modify the uplink profile for the ESXi Transport Node (TN). NSX-T does not allow you to edit the settings of the default uplink profile, so create a new one.

  1. Go to System > Fabric > Profiles > Uplink Profiles.

  2. Click Add.

  3. Configure the New Uplink Profile as follows:

    • Name: nsx-esxi-uplink-hostswitch-profile
    • Teaming Policy: Failover Order
    • Active Uplinks: uplink-1
    • Transport vLAN: 3127
  4. Click Add.

  5. Verify that the Uplink Profile is created.

Deploy ESXi Host Transport Nodes Using VDS

Deploy each ESXi host in the COMPUTE-cluster as an ESXi host transport node (TN) in NSX-T. If you have not created a separate COMPUTE-cluster for ESXi hosts, deploy each ESXi host in the vSphere cluster as a host transport node in NSX-T.

  1. Go to System > Fabric > Nodes > Host Transport Nodes.

  2. Expand the Compute Manager and select the ESXi host in the COMPUTE-cluster, or each ESXi host in the vSphere cluster.

  3. Click Configure NSX.

  4. In the Host Details tab, enter a name, such as 10.172.210.57.

  5. In the Configure NSX tab, configure the transport node as follows:

    • Type: VDS (do not select the N-VDS option)
    • Name: switch-overlay (you must use the same switch name that was configured for tz-overlay transport zone)
    • Transport Zone: tz-overlay
    • NIOC Profile: nsx-default-nioc-hostswitch-profile
    • Uplink Profile: nsx-esxi-uplink-hostswitch-profile
    • LLDP Profile: LLDP [Send Packet Disabled]
    • IP Assignment: Use IP Pool
    • IP Pool: TEP-IP-POOL
    • Teaming Policy Switch Mapping
      • Uplinks: uplink-1
      • Physical NICs: vmnic1
  6. Click Finish.

  7. Verify that the host TN is configured.

Verify TEP to TEP Connectivity

To avoid overlay communication failures caused by an MTU mismatch, test TEP-to-TEP connectivity and verify that it is working. The tests below use a 1572-byte payload with the don't-fragment bit set, which exercises the full 1600-byte frame size once the ICMP and IP headers are added.

  1. SSH to edge-node-1 and get the local TEP IP address, such as 192.23.213.1. Use the command get vteps to get the IP.

  2. SSH to edge-node-2 and get the local TEP IP address, such as 192.23.213.2. Use the command get vteps to get the IP.

  3. SSH to the ESXi host and get the TEP IP address, such as 192.23.213.3. Use the command esxcfg-vmknic -l to get the IP. The interface will be vmk10 and the NetStack will be vxlan.

  4. From each ESXi transport node, test the connections to each NSX-T Edge Node, for example:

    # vmkping ++netstack=vxlan 192.23.213.1 -d -s 1572 -I vmk10: OK
    # vmkping ++netstack=vxlan 192.23.213.2 -d -s 1572 -I vmk10: OK
    
  5. Test the connection from NSX-T Edge Node 1 and Edge Node 2 to the ESXi TN, such as 192.23.213.3:

    > vrf 0
    > ping 192.23.213.3 size 1572 dfbit enable: OK
    
  6. Test the connection from NSX-T Edge Node 1 to NSX-T Edge Node 2:

    > vrf 0
    > ping 192.23.213.2 size 1572 dfbit enable: OK
    

Create NSX-T Edge Cluster

  1. Go to System > Fabric > Nodes > Edge Clusters.

  2. Click Add.

    • Enter a name, such as edge-cluster-1.
    • Add members, including edge-node-1 and edge-node-2.
  3. Click Add.

  4. Verify.

Create an Uplink Logical Switch for the Tier-0 Router

  1. At upper-right, select the Manager tab.

  2. Go to Networking > Logical Switches.

  3. Click Add.

  4. Configure the new logical switch as follows:

    • Name: LS-T0-uplink
    • Transport Zone: tz-vlan
    • VLAN: 1548
  5. Click Add.

  6. Verify.

Create Tier-0 Router

  1. Select Networking from the Manager tab.

  2. Select Tier-0 Logical Router.

  3. Click Add.

  4. Configure the new Tier-0 Router as follows:

    • Name: T0-router
    • Edge Cluster: edge-cluster-1
    • HA mode: Active-Standby
    • Failover mode: Non-Preemptive
  5. Click Save and verify.

  6. Select the T0 router.

  7. Select Configuration > Router Ports.

  8. Click Add.

  9. Configure a new router port as follows:

    • Name: T0-uplink-1
    • Type: uplink
    • Transport Node: edge-node-1
    • Logical Switch: LS-T0-uplink
    • Logical Switch Port: Attach to a new switch port
    • Subnet: 10.173.62.50 / 24
  10. Click Add and verify.

  11. Select the T0 router.

  12. Select Configuration > Router Ports.

  13. Add a second uplink by creating a second router port for edge-node-2:

    • Name: T0-uplink-2
    • Type: uplink
    • Transport Node: edge-node-2
    • Logical Switch: LS-T0-uplink
    • Logical Switch Port: Attach to a new switch port
    • Subnet: 10.173.62.51 / 24
  14. Once completed, verify that you have two connected router ports.

Configure and Test the Tier-0 Router

Create an HA VIP for the T0 router, and a default route for the T0 router. Then test the T0 router.

  1. Select the Tier-0 Router you created.

  2. Select Configuration > HA VIP.

  3. Click Add.

  4. Configure the HA VIP as follows:

    • VIP address: 10.173.62.52/24, for example
    • Uplink ports: T0-uplink-1 and T0-uplink-2
  5. Click Add and verify.

  6. Select Routing > Static Routes.

  7. Click Add.

    • Network: 0.0.0.0/0
    • Next Hop: 10.173.62.253
  8. Click Add and verify.

  9. Verify the Tier 0 router by making sure the T0 uplinks and HA VIP are reachable from your laptop.

For example:

> ping 10.173.62.50
PING 10.173.62.50 (10.173.62.50): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.50: icmp_seq=1 ttl=58 time=71.741 ms
64 bytes from 10.173.62.50: icmp_seq=0 ttl=58 time=1074.679 ms

> ping 10.173.62.51
PING 10.173.62.51 (10.173.62.51): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.51: icmp_seq=0 ttl=58 time=1156.627 ms
64 bytes from 10.173.62.51: icmp_seq=1 ttl=58 time=151.413 ms

> ping 10.173.62.52
PING 10.173.62.52 (10.173.62.52): 56 data bytes
64 bytes from 10.173.62.52: icmp_seq=0 ttl=58 time=6.864 ms
64 bytes from 10.173.62.52: icmp_seq=1 ttl=58 time=7.776 ms

Create IP Blocks and Pool for Compute Plane

TKGI requires a Floating IP Pool for NSX-T load balancer assignment and the following two IP blocks for Kubernetes pods and nodes:

  • PKS-POD-IP-BLOCK: 172.16.0.0/16
  • PKS-NODE-IP-BLOCK: 172.23.0.0/16

  1. In the Manager interface, go to Networking > IP Address Pools > IP Block.

  2. Click Add.

  3. Configure the Pod IP Block as follows:

    • Name: PKS-POD-IP-BLOCK
    • CIDR: 172.16.0.0/16
  4. Click Add and verify.

  5. Repeat the same operation for the Node IP Block.

    • Name: PKS-NODE-IP-BLOCK
    • CIDR: 172.23.0.0/16
  6. Click Add and verify.

  7. Select IP Pools tab.

  8. Click Add.

  9. Configure the IP pool as follows:

    • Name: PKS-FLOATING-IP-POOL
    • IP ranges: 10.173.62.111 - 10.173.62.150
    • CIDR: 10.173.62.0/24
  10. Click Add and verify.
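The blocks and pool can also be created through the NSX-T Manager API. The sketch below writes the payloads, validates them locally, and prints the curl calls for review; the manager address is a placeholder, and the endpoints (POST /api/v1/pools/ip-blocks and POST /api/v1/pools/ip-pools) should be checked against your API guide.

```shell
#!/bin/sh
# Sketch: create the TKGI IP blocks and floating pool through the NSX-T
# Manager API. The manager address is a placeholder; calls are printed,
# not executed.
NSX_MANAGER=10.173.62.47

cat > pks-pod-ip-block.json <<EOF
{ "display_name": "PKS-POD-IP-BLOCK", "cidr": "172.16.0.0/16" }
EOF
cat > pks-node-ip-block.json <<EOF
{ "display_name": "PKS-NODE-IP-BLOCK", "cidr": "172.23.0.0/16" }
EOF
cat > pks-floating-ip-pool.json <<EOF
{
  "display_name": "PKS-FLOATING-IP-POOL",
  "subnets": [
    {
      "cidr": "10.173.62.0/24",
      "allocation_ranges": [
        { "start": "10.173.62.111", "end": "10.173.62.150" }
      ]
    }
  ]
}
EOF

for f in pks-pod-ip-block pks-node-ip-block; do
  python3 -m json.tool "${f}.json" > /dev/null
  echo "curl -k -u admin -X POST -H 'Content-Type: application/json'" \
       "-d @${f}.json https://${NSX_MANAGER}/api/v1/pools/ip-blocks"
done
python3 -m json.tool pks-floating-ip-pool.json > /dev/null
echo "curl -k -u admin -X POST -H 'Content-Type: application/json'" \
     "-d @pks-floating-ip-pool.json https://${NSX_MANAGER}/api/v1/pools/ip-pools"
```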

Create Management Plane

Networking for the TKGI Management Plane consists of a Tier-1 Router and Switch with NAT Rules for the Management Plane VMs.

Create Tier-1 Router and Switch

Create Tier-1 Logical Switch and Router for TKGI Management Plane VMs. Complete the configuration by enabling Route Advertisement on the T1 router.

  1. In the NSX Management console, navigate to Networking > Logical Switches.

  2. Click Add.

  3. Create the LS for TKGI Management plane VMs:

    • Name: LS-PKS-MGMT
    • Transport Zone: tz-overlay
  4. Click Add and verify creation of the logical switch.

  5. Go to Networking > Tier-1 Logical Router.

  6. Click Add.

  7. Configure the Tier-1 logical router as follows:

    • Name: T1-PKS-MGMT
    • To router: T0-router
    • Edge Cluster: edge-cluster-1
    • Edge Cluster Members: edge-node-1 and edge-node-2
  8. Click Add and verify.

  9. Select the T1 router and go to Configuration > Router Ports.

  10. Click Add.

  11. Configure the T1 router port as follows:

    • Name: T1-PKS-MGMT-port
    • Logical Switch: LS-PKS-MGMT
    • Subnet: 10.1.1.1/24
  12. Click Add and verify.

  13. Select the Routing tab.

  14. Click Edit and configure route advertisement as follows:

    • Status: Enabled
    • Advertise All Connected Routes: Yes
  15. Click Save and verify.
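
The same objects can also be created programmatically through the NSX-T Manager REST API instead of the UI. As a hedged sketch, the snippet below only builds the JSON payload for a POST to /api/v1/logical-switches; the transport zone UUID is a placeholder you would look up in your environment (System > Fabric > Transport Zones, or GET /api/v1/transport-zones), and the field values mirror the UI settings above.

```python
import json

# Hypothetical transport zone UUID; replace with the ID of tz-overlay
# from your environment.
TZ_OVERLAY_ID = "00000000-0000-0000-0000-000000000000"

# Payload for POST /api/v1/logical-switches (NSX-T Manager API),
# mirroring the LS-PKS-MGMT settings configured in the UI steps above.
ls_payload = {
    "display_name": "LS-PKS-MGMT",
    "transport_zone_id": TZ_OVERLAY_ID,
    "admin_state": "UP",
    "replication_mode": "MTEP",  # default replication mode for overlay switches
}

print(json.dumps(ls_payload, indent=2))
```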

Create NAT Rules

You need to create the following NAT rules on the Tier-0 router for the TKGI Management Plane VMs.

  • DNAT: 10.173.62.220 (for example) to access Ops Manager
  • DNAT: 10.173.62.221 (for example) to access Harbor
  • SNAT: 10.173.62.222 (for example) for all TKGI management plane VM traffic destined to the outside world
  1. In the NSX Management console, navigate to Networking > NAT.

  2. In the Logical Router field, select the T0-router you defined for TKGI.

  3. Click Add.

  4. Configure the Ops Manager DNAT rule as follows:

    • Priority: 1000
    • Action: DNAT
    • Protocol: Any Protocol
    • Destination IP: 10.173.62.220, for example
    • Translated IP: 10.1.1.2, for example
  5. Click Add and verify.

  6. Add a second DNAT rule for Harbor by repeating the same operation.

    • Priority: 1000
    • Action: DNAT
    • Protocol: Any Protocol
    • Destination IP: 10.173.62.221, for example
    • Translated IP: 10.1.1.6, for example
  7. Verify the creation of the DNAT rules.

  8. Create the SNAT rule for the management plane traffic as follows:

    • Priority: 9024
    • Action: SNAT
    • Protocol: Any Protocol
    • Source IP: 10.1.1.0/24, for example
    • Translated IP: 10.173.62.222, for example
  9. Verify the creation of the SNAT rule.
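
As a quick consistency check on the example addresses above: each DNAT translated IP must live on the T1 management subnet (10.1.1.1/24 on the router port, i.e. 10.1.1.0/24), and the SNAT translated IP must be a routable address outside that subnet. A minimal sketch with Python's standard ipaddress module, using the example values from this topic:

```python
import ipaddress

mgmt_subnet = ipaddress.ip_network("10.1.1.0/24")  # LS-PKS-MGMT subnet

# DNAT rules: routable destination IP -> translated IP on the management subnet.
dnat_rules = {
    "ops-manager": ("10.173.62.220", "10.1.1.2"),
    "harbor": ("10.173.62.221", "10.1.1.6"),
}
for name, (dest, translated) in dnat_rules.items():
    assert ipaddress.ip_address(translated) in mgmt_subnet, name
    assert ipaddress.ip_address(dest) not in mgmt_subnet, name

# SNAT rule: management subnet -> routable translated IP.
snat_source = ipaddress.ip_network("10.1.1.0/24")
snat_translated = ipaddress.ip_address("10.173.62.222")
assert snat_source.subnet_of(mgmt_subnet)
assert snat_translated not in mgmt_subnet

print("NAT rule addressing is consistent")
```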

Configure the NSX-T Password Interval (Optional)

The default NSX-T password expiration interval is 90 days. After this period, the passwords expire on all NSX-T Manager Nodes and all NSX-T Edge Nodes. To avoid this, you can extend or remove the password expiration interval, or change the password if needed.

Note: For existing Tanzu Kubernetes Grid Integrated Edition deployments, anytime the NSX-T password is changed you must update the BOSH and PKS tiles with the new passwords. See Adding Infrastructure Password Changes to the Tanzu Kubernetes Grid Integrated Edition Tile for more information.

Update the NSX-T Manager Password and Password Interval

To update the NSX Manager password, perform the following actions on one of the NSX Manager nodes. The changes will be propagated to all NSX Manager nodes.

SSH into the NSX Manager Node

To manage user password expiration, you use the CLI on one of the NSX Manager nodes.

To access an NSX Manager node from a Unix host, use the command ssh USERNAME@IP_ADDRESS_OF_NSX_MANAGER.

For example:

ssh admin@10.196.188.22

On Windows, use PuTTY and provide the IP address of the NSX Manager. Enter the user name and password that you defined during the installation of NSX-T.

Retrieve the Password Expiration Interval

To retrieve the password expiration interval, use the following command:

get user USERNAME password-expiration

For example:

NSX CLI (Manager, Policy, Controller 3.0.0.0.0.15946739). Press ? for command list or enter: help
nsx-mgr-1> get user admin password-expiration
Password expires 90 days after last change

Update the Admin Password

To update the user password, use the following command:

set user USERNAME password NEW-PASSWORD old-password OLD-PASSWORD

For example:

set user admin password my-new-pwd old-password my-old-pwd

Set the Admin Password Expiration Interval

To set the password expiration interval, use the following command:

set user USERNAME password-expiration PASSWORD-EXPIRATION

For example, the following command sets the password expiration interval to 120 days:

set user admin password-expiration 120

Remove the Admin Password Expiration Interval

To remove password expiration, use the following command:

clear user USERNAME password-expiration

For example:

clear user admin password-expiration

To verify:

nsx-mgr-1> clear user admin password-expiration
nsx-mgr-1> get user admin password-expiration
Password expiration not configured for this user

Update the Password for NSX Edge Nodes

To update the NSX Edge Node password, perform the following actions on each NSX Edge Node.

Note: Unlike the NSX-T Manager nodes, you must update the password or password interval on each Edge Node.

Enable SSH

SSH is disabled on the Edge Node by default. Enable it from the Edge Node console in vSphere:

start service ssh
set service ssh start-on-boot

SSH to the NSX Edge Node

For example:

ssh admin@10.196.188.25

Get the Password Expiration Interval for the Edge Node

For example:

nsx-edge> get user admin password-expiration
Password expires 90 days after last change

Update the User Password for the Edge Node

For example:

nsx-edge> set user admin password my-new-pwd old-password my-old-pwd

Set the Password Expiration Interval

For example, the following command sets the password expiration interval to 120 days:

nsx-edge> set user admin password-expiration 120

Remove the Password Expiration Interval

For example:

NSX CLI (Edge 3.0.0.0.0.15946012). Press ? for command list or enter: help
nsx-edge-2> get user admin password-expiration
Password expires 90 days after last change. Current password will expire in 7 days.

nsx-edge-2> clear user admin password-expiration
nsx-edge-2> get user admin password-expiration
Password expiration not configured for this user

Next Steps

Once you have completed the installation of NSX-T v3.0, return to the TKGI installation workflow and proceed with the next phase of the process. See Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Using Ops Manager.


Please send any feedback you have to pks-feedback@pivotal.io.