Pivotal Container Service v1.2

Deploying NSX-T for PKS


To deploy NSX-T for PKS, complete the following set of procedures, in the order presented.

Before you begin this procedure, ensure that you have successfully completed all preceding steps for installing PKS on vSphere with NSX-T, including:

Step 1: Deploy NSX Manager

The NSX Manager is provided as an OVA file named NSX Unified Appliance that you import into your vSphere environment and configure.

Complete either of the following procedures to deploy the NSX Manager appliance:

To verify deployment of the NSX Manager:

  1. Power on the NSX Manager VM.
  2. Ping the NSX Manager VM. Get the IP address for the NSX Manager from the Summary tab in vCenter. Verify that you can ping the host. For example, run ping 10.196.188.21.
  3. SSH to the VM. Use the IP address for the NSX Manager to remotely connect using SSH. From Unix hosts, use the command ssh admin@IP_ADDRESS_OF_NSX_MANAGER. For example, run ssh admin@10.196.188.21. On Windows, use PuTTY and provide the IP address. Enter the CLI user name and password that you defined during OVA import.
  4. Review NSX CLI usage. Once you are logged into the NSX Manager VM, enter ? to view the command usage and options for the NSX CLI.
  5. Connect to the NSX Manager web interface using a supported browser at the URL https://IP_ADDRESS_OF_NSX_MANAGER. For example, https://10.16.176.10.
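
For reference, a minimal verification session from a Unix host might look like the following (using the example IP address above; substitute your NSX Manager address):

    ping 10.196.188.21
    ssh admin@10.196.188.21
    nsx-manager> get version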

Step 2: Deploy NSX Controllers

The NSX Controller provides communications for NSX-T components.

You must deploy at least one NSX Controller for PKS. Three NSX Controllers are recommended.

Complete either of the following procedures to deploy an NSX Controller:

To verify deployment of the NSX Controller:

  1. Power on the NSX Controller VM.
  2. Ping the NSX Controller VM. Get the IP address for the NSX Controller from the Summary tab in vCenter. Make sure you use a routable IP. If necessary click View all X IP addresses to reveal the proper IP address. Verify that you can ping the Controller host. For example, run ping 10.196.188.22.
  3. SSH to the VM. Use the IP address for the NSX Controller to remotely connect using SSH. From Unix hosts, use the command ssh admin@IP_ADDRESS_OF_NSX_CONTROLLER. For example, run ssh admin@10.196.188.22. On Windows, use PuTTY and provide the IP address. Enter the CLI admin user name and password that you defined during installation.
  4. Review NSX CLI usage. After you are logged into the NSX Controller VM, enter ? to view the command usage and options for the NSX CLI.

Note: Repeat the deployment and verification procedure for each NSX Controller you intend to use for PKS.

Step 3: Create NSX Clusters (Management and Control)

In this section you create NSX Clusters for the PKS Management Plane and Control Plane.

  1. Complete this procedure to create the NSX Management Cluster: Join NSX Controllers with the NSX Manager.
  2. Complete this procedure to create the NSX Control Cluster: Initialize Control Cluster.
  3. If you are deploying more than one NSX Controller, complete this procedure: Join Additional NSX Controllers with the Cluster Master.
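
For reference, the linked procedures are performed with the NSX CLI. A rough sketch of the command flow follows; exact syntax can vary by NSX-T version, and the thumbprint and secret values are placeholders. On the NSX Manager, record the API certificate thumbprint, then join each NSX Controller to the management plane:

    nsx-manager> get certificate api thumbprint
    NSX-CONTROLLER-1> join management-plane IP_ADDRESS_OF_NSX_MANAGER username admin thumbprint MANAGER_THUMBPRINT

On the first NSX Controller, set a shared secret and initialize the control cluster:

    NSX-CONTROLLER-1> set control-cluster security-model shared-secret secret SHARED_SECRET
    NSX-CONTROLLER-1> initialize control-cluster

On each additional NSX Controller, set the same shared secret and record its thumbprint; then join it from the cluster master and activate it on the new controller:

    NSX-CONTROLLER-2> set control-cluster security-model shared-secret secret SHARED_SECRET
    NSX-CONTROLLER-2> get control-cluster certificate thumbprint
    NSX-CONTROLLER-1> join control-cluster IP_ADDRESS_OF_NSX_CONTROLLER_2 thumbprint CONTROLLER_2_THUMBPRINT
    NSX-CONTROLLER-2> activate control-cluster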

To verify the creation of NSX Clusters:

  1. Verify that the NSX Controller is Connected to the NSX Manager:

    NSX-CONTROLLER-1> get managers
    
  2. Verify that the status of the Control Cluster is active:

    NSX-CONTROLLER-1> get control-cluster status
    
  3. Verify that the Management Cluster is STABLE:

    NSX-MGR-1-1-0> get management-cluster status
    
  4. Verify the configuration of the NSX Clusters.

    • Connect to the NSX Manager web interface using a supported browser at the URL https://IP_ADDRESS_OF_NSX_MANAGER. For example, https://10.16.176.10.
    • Log in using your admin credentials.
    • Select Dashboard > System > Overview.
    • Confirm that the status of the NSX Manager and each NSX Controller is green.

Step 4: Deploy NSX Edge Nodes

Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for PKS run load balancers for PKS API traffic, Kubernetes pod LB services, and pod ingress controllers.

PKS supports active/standby Edge Node failover and requires at least two Edge Nodes. In addition, PKS requires the Edge Node Large VM (8 vCPU, 16 GB of RAM, and 120 GB of storage). The Small and Medium VMs are not suitable for use with PKS. See Edge Node Requirements in the VMware documentation for details.

For information about load balancers, see Scaling Load Balancer Resources in the VMware documentation.

Complete either of the following procedures to deploy an NSX Edge Node:

When deploying the Edge Node, be sure to connect the vNICs of the NSX Edge VMs to an appropriate PortGroup for your environment:

  • Network 0: For management purposes. Connect the first Edge interface to your environment’s PortGroup/VLAN where your Edge Management IP can route and communicate with the NSX Manager.
  • Network 1: For TEP (Tunnel End Point). Connect the second Edge interface to your environment’s PortGroup/VLAN where your GENEVE VTEPs can route and communicate with each other. Your VTEP CIDR should be routable to this PortGroup.
  • Network 2: For uplink connectivity to external physical router. Connect the third Edge interface to your environment’s PortGroup/VLAN where your T0 uplink interface is located.
  • Network 3: Unused (select any port group)

For example:
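
(The PortGroup names below are placeholders; substitute the PortGroups defined in your environment.)

  • Network 0: PG-MGMT
  • Network 1: PG-TEP
  • Network 2: PG-EDGE-UPLINK
  • Network 3: PG-MGMT (unused; any PortGroup)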

To verify Edge Node deployment:

  1. Power on the Edge Node VM.

  2. Ping the Edge VM. Get the IP address for the NSX Edge Node from the Summary tab in vCenter. Verify that you can ping the host by running ping IP_ADDRESS_OF_NSX_EDGE_NODE. For example, run ping 10.196.188.21.

  3. SSH to the Edge VM. Use the IP address for the NSX Edge Node to remotely connect using SSH. From Unix hosts, use the command ssh admin@IP_ADDRESS_OF_NSX_EDGE_NODE. For example, run ssh admin@10.196.188.21. On Windows, use PuTTY and provide the IP address. Enter the CLI admin user name and password that you defined in the Customize template > Application section.

  4. Review NSX CLI usage. After you are logged into the NSX Edge Node VM, enter ? to view the command usage and options for the NSX CLI.

Note: Repeat the deployment and verification process for each NSX Edge Node you intend to use for PKS.

Step 5: Register NSX Edge Nodes with NSX Manager

To register an Edge Node with NSX Manager, complete this procedure: Join NSX Edge with the Management Plane.
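
Registration is likewise done from the NSX CLI. As a minimal sketch (syntax may vary by NSX-T version), record the API certificate thumbprint on the NSX Manager, then join the Edge Node to the management plane:

    nsx-manager> get certificate api thumbprint
    nsx-edge-1> join management-plane IP_ADDRESS_OF_NSX_MANAGER username admin thumbprint MANAGER_THUMBPRINT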

To verify Edge Node registration with NSX Manager:

  1. SSH to the Edge Node and run the following command. Verify that the Status is Connected:

    nsx-edge-1> get managers
    
  2. In the NSX Manager Web UI, go to Fabric > Nodes > Edges. You should see each registered Edge Node.

Note: Repeat this procedure for each NSX Edge Node you are deploying for PKS.

Step 6: Enable Repository Service on NSX Manager

To enable VIB installation from the NSX Manager repository, the repository service must be enabled in NSX Manager.

  1. SSH into NSX Manager by using the command ssh admin@IP_ADDRESS_OF_NSX_MANAGER (Unix) or PuTTY (Windows).

  2. Run the following command:

    nsx-manager> set service install-upgrade enable
    
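To confirm that the service is now running, you can query it from the same session (a quick sanity check; output formatting varies by NSX-T version):

    nsx-manager> get service install-upgrade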

Step 7: Create TEP IP Pool

To create the TEP IP Pool, complete this procedure: Create an IP Pool for Tunnel Endpoint IP Addresses.

When creating the TEP IP Pool, refer to the following example:
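
The pool name TEP-ESXi-POOL is referenced later when you create the Edge and ESXi Transport Nodes; the addresses below are placeholders for a routable TEP subnet in your environment:

  • Name: TEP-ESXi-POOL
  • IP Ranges: 192.168.213.10 - 192.168.213.100
  • Gateway: 192.168.213.1
  • CIDR: 192.168.213.0/24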

To verify TEP IP Pool configuration:

  1. In NSX Manager, select Inventory > Groups > IP Pools.

  2. Verify that the TEP IP Pool you created is present.

Step 8: Create Overlay Transport Zone

Create an Overlay Transport Zone (TZ-Overlay) for PKS control plane services and Kubernetes clusters associated with VDS hostswitch1.

To create TZ-Overlay, complete this procedure: Create Transport Zones.

When creating the TZ-Overlay for PKS, refer to the following example:
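
The names below match those used elsewhere in this document; adjust them for your environment:

  • Name: TZ-Overlay
  • N-VDS Name: hostswitch1
  • Traffic Type: Overlay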

To verify TZ-Overlay creation:

  1. In NSX Manager select Fabric > Transport Zones.

  2. Verify that you see the TZ-Overlay transport zone you created:

Step 9: Create VLAN Transport Zone

Create the VLAN Transport Zone (TZ-VLAN) for NSX Edge Node uplinks (ingress/egress) for PKS Kubernetes clusters associated with VDS hostswitch2.

To create TZ-VLAN, complete this procedure: Create Transport Zones.

When creating the TZ-VLAN for PKS, refer to the following example:
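
As with the Overlay Transport Zone, the names below are the ones used elsewhere in this document:

  • Name: TZ-VLAN
  • N-VDS Name: hostswitch2
  • Traffic Type: VLAN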

To verify TZ-VLAN creation:

  1. In NSX Manager select Fabric > Transport Zones.

  2. Verify that you see the TZ-VLAN transport zone:

Step 10: Create Uplink Profile for Edge Nodes

To create an Uplink Profile, complete this procedure: Create an Uplink Profile.

When creating the Uplink Profile for PKS, refer to the following example:
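
The profile name edge-uplink-profile matches the name used later when configuring the Edge Transport Nodes; the teaming, VLAN, and MTU values are placeholders to adapt to your uplink network (the overlay requires an MTU of at least 1600):

  • Name: edge-uplink-profile
  • Teaming Policy: Failover Order
  • Active Uplinks: uplink-1
  • Transport VLAN: 0
  • MTU: 1600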

To verify Uplink Profile creation:

  1. In NSX Manager select Fabric > Profiles > Uplink Profiles.

  2. Verify that you see the Edge Node uplink profile you created:

Step 11: Create Edge Transport Nodes

Create NSX Edge Transport Nodes which allow Edge Nodes to exchange virtual network traffic with other NSX nodes.

Be sure to add both the VLAN and OVERLAY NSX Transport Zones to the NSX Edge Transport Nodes and confirm NSX Controller and Manager connectivity. Use the MAC addresses of the Edge VM interfaces to deploy the virtual NSX Edges:

  • Connect the OVERLAY N-VDS to the vNIC (fp-eth#) that matches the MAC address of the second NIC from your deployed Edge VM.
  • Connect the VLAN N-VDS to the vNIC (fp-eth#) that matches the MAC address of the third NIC from your deployed Edge VM.

To create an Edge Transport Node for PKS:

  1. Log in to NSX Manager (https://IP_ADDRESS_OF_NSX_MANAGER).
  2. Go to Fabric > Nodes > Edges.
  3. Select an Edge Node.
  4. Click Actions > Configure as Transport Node.
  5. In the General tab, enter a name and select both Transport Zones: TZ-Overlay (Overlay) and TZ-VLAN (VLAN).
  6. Select the Host Switches tab.
  7. Configure the first transport node switch. For example:
    • Edge Switch Name: hostswitch1
    • Uplink Profile: edge-uplink-profile
    • IP Assignment: Use IP Pool
    • IP Pool: TEP-ESXi-POOL
    • Virtual NICs: fp-eth0 (corresponds to Edge VM vnic1, the second vNIC)
  8. Click Add Host Switch.
  9. Configure the second transport node switch. For example:
    • Edge Switch Name: hostswitch2
    • Uplink Profile: edge-uplink-profile
    • Virtual NICs: fp-eth1 (corresponds to Edge VM vnic2, the third vNIC)

      Note: Repeat this procedure for the second Edge Transport Node (Edge-TN2), as well as additional Edge Node pairs you deploy for PKS.

To verify the creation of Edge Transport Nodes:

  1. In NSX Manager, select Fabric > Nodes > Edges.
  2. Verify that Controller Connectivity and Manager Connectivity are UP for both Edge Nodes.
  3. In NSX Manager, select Fabric > Nodes > Transport Nodes.
  4. Verify that the configuration state is Success.
  5. SSH to each NSX Edge VM and verify that the Edge Transport Node is “connected” to the Controller.

    nsx-edge-1> get controllers
    

Step 12: Create Edge Cluster

Create an NSX Edge Cluster and add each Edge Transport Node to the Edge Cluster by completing this procedure: Create an NSX Edge Cluster.

When creating the Edge Cluster for PKS, refer to the following example:
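
The example below reuses the cluster and Edge Transport Node names referenced elsewhere in this document; the Edge Cluster Profile shown is the NSX-T default profile and is an assumption you can replace:

  • Edge Cluster Name: edge-cluster-pks
  • Edge Cluster Profile: nsx-default-edge-high-availability-profile
  • Transport Nodes (Members): Edge-TN1, Edge-TN2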

To verify Edge Cluster creation:

  1. In NSX Manager, select Fabric > Nodes > Edge Clusters.
  2. Verify that you see the new Edge Cluster.
  3. Select Edge Cluster > Related > Transport Nodes.
  4. Verify that all Edge Transport Nodes are members of the Edge Cluster.
  5. SSH to NSX Edge Node 1 and run the following commands to verify proper connectivity.

    nsx-edge-1> get vteps
    nsx-edge-1> get host-switches
    nsx-edge-1> get edge-cluster status
    nsx-edge-1> get controller sessions
    
  6. SSH to NSX Edge Node 2 and repeat the above commands to verify proper connectivity.

  7. Verify Edge-TN1 to Edge-TN2 connectivity (TEP to TEP).

    nsx-edge-1> get logical-router
    nsx-edge-1> vrf 0
    nsx-edge-1(vrf)> ping IP-ADDRESS-EDGE-2
    

Step 13: Create T0 Logical Router

Create a Tier-0 Logical Router for PKS. The Tier-0 Logical Router is used to route data between the physical network and the NSX-T-defined virtual network.

To create a Tier-0 (T0) logical router:

  1. Define a T0 logical switch with an ingress/egress uplink port. Attach the T0 LS to the VLAN Transport Zone.
  2. Create a logical router port and assign to it a routable CIDR block, for example 10.172.1.0/28, that your environment uses to route to all PKS assigned IP pools and IP blocks.
  3. Connect the T0 router to the uplink VLAN logical switch.
  4. Attach the T0 router to the Edge Cluster and set HA mode to Active-Standby. NAT rules are applied on the T0 by NCP. If the T0 router is not set in Active-Standby mode, the router does not support NAT rule configuration.
  5. Lastly, configure T0 routing to the rest of your environment using the appropriate routing protocol for your environment or by using static routes.

Create VLAN Logical Switch (LS)

  1. In NSX Manager, go to Switching > Switches.
  2. Click Add and create a VLAN logical switch (LS), using values like those in the example after this list.
  3. Click Save and verify that you see the new LS.
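
The switch name below matches the uplink logical switch (uplink-LS1) referenced later when you create the T0 router port and HA uplinks; the VLAN ID is a placeholder for your uplink VLAN:

  • Name: uplink-LS1
  • Transport Zone: TZ-VLAN
  • VLAN: 0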

Create T0 Router Instance

  1. In NSX Manager, go to Routing > Routers.
  2. Click Add and select the Tier-0 Router option.
  3. Create new T0 router as follows:
    • Name: Enter a name for the T0 router, such as T0-LR or t0-pks.
    • Edge Cluster: Select the Edge Cluster (for example, edgecluster1 or edge-cluster-pks).
    • High Availability Mode: Select Active-Standby (required).
  4. Click Save and verify that you see the new T0 Router instance.

Note: Be sure to select Active-Standby. NAT rules are applied on the T0 router by NCP. If the T0 router is not set to Active-Standby, NCP cannot create NAT rules on it.

Create T0 Router Port

  1. In NSX Manager, go to Routing > Routers.
  2. Select the T0 Router you just created.
  3. Select Configuration > Router Ports.
  4. Select the T0 Router and click Add.
  5. Create new T0 router port. Attach the T0 router port to the uplink logical switch you created (uplink-LS1, for example). Assign an IP address and CIDR that your environment uses to route to all PKS assigned IP pools and IP blocks. For example:
    • Name: Uplink1
    • Type: Uplink
    • Transport Node: edge-TN1
    • Logical Switch: uplink-LS1
    • Logical Switch Port: uplink1-port
    • IP Address/mask: 10.40.206.24/25 (for example)
  6. Click Save and verify that you see the new port interface:

Define Default Static Route

Configure T0 routing to the rest of your environment using the appropriate routing protocol (if you are using no-NAT-mode), or using static routes (if you are using NAT-mode). The following example uses static routes for the T0 router. The CIDR used must route to the IP you just assigned to your T0 uplink interface.

  1. Go to Routing > Routers and select the T0 Router.
  2. Select Routing > Static Routes and click Add.
  3. Create a new static route for the T0 router. For example:
    • Network: 0.0.0.0/0
    • Next Hop: 10.40.206.125 (for example)
    • Admin Distance: 1
    • Logical Router Port: Uplink1
  4. Click Save and verify that you see the newly created static route.

Verify T0 Router Creation

The T0 router uplink IP should be reachable from the corporate network. From your local laptop or workstation, ping the uplink IP address. For example:

PING 10.40.206.24 (10.40.206.24): 56 data bytes
64 bytes from 10.40.206.24: icmp_seq=0 ttl=53 time=33.738 ms
64 bytes from 10.40.206.24: icmp_seq=1 ttl=53 time=36.965 ms

Step 14: Configure Edge Nodes for HA

Configure high-availability (HA) for NSX Edge Nodes. If the T0 Router is not correctly configured for HA, failover to the standby Edge Node will not occur.

Proper configuration requires two new uplinks on the T0 router: one attached to Edge TN1, and the other attached to Edge TN2. In addition, you need to create a VIP that is the IP address used for the T0 uplink defined when the T0 Router was created.

Create Uplink1 for Edge-TN1

On the T0 router, create the Uplink1 router port and attach it to Edge TN1. For example:

  • IP Address/Mask: 10.40.206.10/25
  • URPF Mode: None (optional)
  • Transport Node: edge-TN1
  • Logical Switch: uplink-LS1

Create Uplink2 for Edge-TN2

On the T0 router, create the Uplink2 router port and attach it to Edge TN2. For example:

  • IP Address/Mask: 10.40.206.9/25
  • URPF Mode: None (optional)
  • Transport Node: edge-TN2
  • Logical Switch: uplink-LS1

Create HA VIP

Create an HA virtual IP (VIP) address. Once created, the HA VIP becomes the official IP address for the T0 router uplink. External router devices peering with the T0 router must use this IP address.

Note: The IP addresses for uplink-1, uplink-2, and the HA VIP must belong to the same subnet.

  1. On the T0 router, create the HA VIP. For example:
    • VIP Address: 10.40.206.24/25
    • Uplinks Ports: Uplink-1 and Uplink-2
  2. Verify creation of the HA VIP.

Create Static Route for HA

  1. On the T0 router, create a static default route so that the next hop points to the HA VIP address. For example:
    • Network: 0.0.0.0/0
    • Next Hop: 10.40.206.125
    • Logical Router Port: empty
  2. Using vCenter, disconnect any unused vNIC interface in each Edge Node VM, because an unused interface can cause duplicate packets. For example, if Network adapter 4 is not being used, disconnect it.

Note: Disconnect unused vNICs to prevent the duplication of traffic from two vNICs connected to same VLAN. This can occur when you configure HA for an active/standby Edge Node pair.

Verify Edge Node HA

  1. The T0 router should display both Edge TNs in active/standby pairing.
  2. Run the following commands to verify HA channels:

    nsx-edge-n-1> get high-availability channels
    nsx-edge-n-1> get high-availability channels stats
    nsx-edge-n-1> get logical-router
    nsx-edge-n-1> get logical-router ROUTER-UUID high-availability status
    

Step 15: Prepare ESXi Servers for the PKS Compute Cluster

For each ESXi host in the NSX-T Fabric to be used for PKS Compute purposes, create an associated transport node. For example, if you have three ESXi hosts in the NSX-T Fabric, create three nodes named tnode-host-1, tnode-host-2, and tnode-host-3. Add the Overlay Transport Zone to each ESXi Host Transport Node.

Prepare each ESXi server dedicated to the PKS Compute Cluster as a Transport Node. These instructions assume that the ESXi hypervisor is installed and vmk0 is configured on each participating host. In addition, each ESXi host must have at least one free NIC/vmnic for use with NSX Host Transport Nodes that is not already in use by other vSwitches on the host. Make sure that vmnic1 (the second physical interface) of the ESXi host is not used; NSX takes ownership of it (the opaque NSX vSwitch uses it as an uplink). For more information, see Add a Hypervisor Host to the NSX-T Fabric in the VMware NSX-T documentation.
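
Before adding a host to the fabric, you can confirm from the ESXi shell that vmnic1 exists and is not claimed as an uplink by an existing vSwitch (standard ESXi commands; your NIC numbering may differ):

    [root@ESXi-1:~] esxcli network nic list
    [root@ESXi-1:~] esxcfg-vswitch -l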

Add ESXi Host to NSX-T Fabric

Complete the following operation for each ESXi host to be used by the PKS Compute Cluster.

  1. Go to Fabric > Nodes > Hosts.
  2. Click Add and create a new host. For example:
    • IP Address: 10.115.40.72
    • OS: ESXi
    • Username: root
    • Password: PASSWORD
  3. After clicking Save, click Yes if an invalid-thumbprint message appears.
  4. NSX installs VIBs on the ESXi host. In a few moments, you should see the newly defined host. Deployment status should show NSX Installed and Manager Connectivity should show Up.

Create Transport Node

  1. In NSX Manager, go to Fabric > Nodes > Transport Nodes.
  2. Click Add and create a new Transport Node. For example:
    • Name: ESXi-COMP-1-TN
    • Node: ESXi-COMP-1
    • TZ: TZ-Overlay
  3. Select the Host Switches tab.
  4. Configure a Host Switch. For example:
    • Host Switch Name: hostswitch1
    • Uplink Profile: nsx-default-uplink-hostswitch-profile
    • IP Assignment: Use IP Pool
    • IP POOL: TEP-ESXi-POOL
    • Physical NICs: vmnic1

Verify ESXi Host Preparation for PKS Compute Cluster

  1. Verify that you see the ESXi Compute Transport Node:
  2. Verify the status is Up.

    Note: If you are using NSX-T 2.3, the status should be Up. If you are using NSX-T 2.2, the status may incorrectly show as Down (because the Tunnel Status is Down). Either way, verify TEP communications as described in the next step.

  3. Make sure the NSX TEP vmkernel interface (vmk) is created on the ESXi host and that TEP-to-TEP communication (with an Edge Transport Node, for instance) works.

    [root@ESXi-1:~] esxcfg-vmknic -l
    
    [root@ESXi-1:~] vmkping ++netstack=vxlan <IP of the vmk10 interface> -d -s 1500
    

Next Step

After you complete this procedure, follow the instructions in Creating the PKS Management Plane.

