Provisioning a Load Balancer for the NSX-T v2.4 Management Cluster

This topic describes how to deploy a load balancer for the NSX-T Management Cluster for Enterprise PKS.

Note: The instructions provided in this topic are for NSX-T v2.4.

About the NSX-T Management Cluster

This section describes the NSX-T Management Cluster and the external load balancer for use with Enterprise PKS.

Overview

NSX-T v2.4 introduces a converged management and control plane that is referred to as the NSX-T Management Cluster. The new deployment model delivers high availability of the NSX-T Manager, reduces the likelihood of operational failures of NSX-T, and provides API and UI clients with multiple endpoints or a single VIP for high availability.

While using a VIP to access the NSX-T Management layer provides high availability, it does not balance the workload. To avoid overloading a single NSX-T Manager, as may be the case when HA VIP addressing is used, an NSX-T load balancer can be provisioned to allow NCP and other components orchestrated by Enterprise PKS to distribute load efficiently among NSX Manager nodes.

The diagram below shows an external load balancer fronting the NSX Manager nodes. The load balancer is deployed within the NSX-T environment and intercepts requests to the NSX-T Management Cluster. The load balancer selects one of the NSX-T Manager nodes to handle the request and rewrites the destination IP address to reflect the selection.

NSX Management Cluster with Load Balancer

Note: The load balancer VIP load balances traffic to all NSX-T Manager instances in round robin fashion. A Cluster HA VIP, on the other hand, only sends traffic to the one NSX-T Manager instance that is mapped to the Cluster HA VIP; the other NSX-T Manager instances do not receive any traffic.

Note: If you are using VMware Identity Manager (VIDM) to authenticate with the NSX Management environment, you need two separate load balancer VIPs: one for VIDM and one for PKS. Refer to [Configure VMware Identity Manager Integration](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/administration/GUID-EAAD1FBE-F750-4A5A-A3BF-92B1E7D016FE.html) in the VMware NSX-T Data Center documentation.

Component Interaction with the NSX-T Management Cluster

Various components in an Enterprise PKS deployment interact with the NSX Management Cluster.

PKS Control Plane components:

  • Ops Manager
  • BOSH CPI (Cloud Provider Interface)
  • NSX-T OSB Proxy

Kubernetes Cluster components:

  • BOSH jobs running on Kubernetes master nodes to prepare and update Kubernetes clusters
  • NSX-T Container Plugin (NCP)

The interaction of the PKS Control Plane components and the BOSH jobs with the NSX-T Management Cluster is sporadic. However, the NCP component may demand a high level of scalability for the NSX-T API processing capability of the NSX Management Cluster, and NCP is vital to the networking needs of each Kubernetes cluster. When a high number of Kubernetes clusters are subjected to concurrent activities, such as Kubernetes Pod and Service lifecycle operations, multiple NCP instances may tax the system and push NSX-T API processing to its limits.

Load Balancer Provisioning for the NSX-T Management Cluster

For scalability, consider deploying a load balancer in front of the NSX-T Manager nodes. As a general rule of thumb, if you are using Enterprise PKS with NSX-T to deploy more than 25 Kubernetes clusters, you should use a load balancer in front of the NSX-T Management Cluster.

Note: If you do not require scalability, you can configure a Cluster VIP to achieve HA for the NSX-T Management Cluster. See HA VIP addressing.

For general purposes, a small NSX-T load balancer is sufficient. However, refer to Scaling Load Balancer Resources to ensure that the load balancer you choose is sufficient to meet your needs.

When provisioning the load balancer, you configure a virtual server on the load balancer and associate a virtual IP address with it. This load balancer VIP can be used as the entry point for PKS- and NCP-related API requests to the NSX-T Control Plane. The virtual server includes a member pool to which all NSX-T Management Cluster nodes belong. Additionally, health monitoring is enabled for the member pool to quickly and efficiently address potential node failures detected among the NSX-T Management Cluster nodes.
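For example, once the load balancer is provisioned, API clients can target the VIP instead of an individual NSX-T Manager node. The following is an illustrative sketch only; it assumes the example VIP (10.40.14.250) and administrator credentials used elsewhere in this topic.

# Any NSX-T API request sent to the load balancer VIP is distributed across the
# NSX-T Manager nodes in the member pool. Example: read the cluster status.
curl -k -u admin:'VMware1!' -X GET "https://10.40.14.250/api/v1/cluster/status"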

Prerequisites for Provisioning a Load Balancer for the NSX-T Management Cluster

Before you provision a load balancer for the NSX-T Management Cluster, ensure that your environment is configured as follows:

  • NSX-T 2.4 is installed and configured for servicing Enterprise PKS. See Installing PKS v1.4 with NSX-T v2.4.
  • Transport zone, transport node, Edge Cluster, Edge Node connectivity, and Tier-0 Router are deployed and operational with proper static routes or BGP. See Installing PKS v1.4 with NSX-T v2.4.
  • A NSX-T Management Cluster with 3 NSX Manager nodes is provisioned. See Installing PKS v1.4 with NSX-T v2.4.
  • Your NSX-T environment has enough Edge Cluster resources to deploy a new small-size load balancer VM.
  • A dedicated IP address is available to be used as the VIP and SNAT IP address for the new load balancer. The load balancer VIP address must be globally routable from networks external to NSX-T. This IP address can be carved out from the standard IP Pool required by Enterprise PKS.

Provision the NSX-T Load Balancer for the Management Cluster

To provision the load balancer for the NSX-T Management Cluster, complete the following steps.

Step 1: Log in to NSX-T Manager.

  1. Log in to NSX-T Manager.

    Note: You can connect to any NSX-T Manager node in the management cluster to provision the load balancer.

  2. Select the Advanced Networking & Security tab.

    Note: You must use the Advanced Networking and Security tab in NSX-T Manager to create, read, update, and delete all NSX-T networking objects used for Enterprise PKS.

Step 2: Add and configure a logical switch.

Add and configure a new logical switch for the load balancer.

  • Select Switching.
  • Click Add.
  • Configure the logical switch:
    • Name: Enter a name for the logical switch, such as LS-NSX-T-EXTERNAL-LB.
    • Transport Zone: Select the overlay transport zone, such as TZ-Overlay.
  • Click Add.

Add New Logical Switch

Step 3: Add and configure a Tier-1 Router.

Configure a new Tier-1 Router in Active/Standby mode. Create the Tier-1 Router on the same Edge Cluster where the Tier-0 Router that provides external connectivity to vCenter and NSX Manager is located.

  • Select Routers.
  • Click Add > Tier-1 Router.
  • Configure the new Tier-1 Router and click Add.
    • Name: T1-NSX-T-EXTERNAL-LB, for example.
    • Tier-0 Router: Connect the Tier-1 Router to the Tier-0 Router, for example Shared-T0.
    • Edge Cluster: Select the same Edge Cluster where the Tier-0 Router is located, such as edgecluster1.
    • Edge Cluster members: edge-TN1 and edge-TN2, for example.

Configure Tier-1 Router

Step 4: Configure route advertisement.

Configure Route Advertisement for the Tier-1 Router.

  • Select the Tier-1 Router.
  • Select the Routing tab.
  • Select Route Advertisement > Edit.
  • Enable Route Advertisement for all load balancer VIP routes for the Tier-1 Router:
    • Status: enabled
    • Advertise all LB VIP routes: yes
    • Advertise all LB SNAT IP routes: yes
    • Click Save

Advertise Routes
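Optionally, you can read the advertisement configuration back through the NSX-T Manager API to confirm the settings. The sketch below is a hedged example: the manager IP address and credentials are the example values used in this topic, and the Tier-1 Router UUID is a placeholder that you must look up.

export NSX_MANAGER_IP_ADDRESS=10.40.206.2
export T1_ROUTER_ID="<uuid-of-T1-NSX-T-EXTERNAL-LB>"
# Read the route advertisement configuration for the Tier-1 Router.
curl -k -u admin:'VMware1!' -X GET "https://$NSX_MANAGER_IP_ADDRESS/api/v1/logical-routers/$T1_ROUTER_ID/routing/advertisement"
# The response should show "enabled": true, "advertise_lb_vip": true, and
# "advertise_lb_snat_ip": true.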

Step 5: Verify logical switch and router configuration.

Verify successful creation and configuration of the logical switch and router.

  • Select the Tier-1 Router.
  • Select the Configuration tab.
  • At this point, the Tier-1 Router should have a single linked port connecting it to the Tier-0 Router.

Verify Logical Switch and Router

Step 6: Add and configure a load balancer.

Create a new small-size load balancer and attach it to the Tier-1 Router previously created.

Note: The small-size VM is suitable for the NSX Management Cluster load balancer. Make sure you have enough Edge Cluster resources to provision the load balancer.

  • Select Load Balancers.
  • Click Add.
  • Select the Small load balancer and name it.
  • Click OK.

Add Load Balancer Configure Load Balancer Confirm Load Balancer

Step 7: Attach the load balancer to the router.

Attach the load balancer to the Tier-1 Router previously created.

  • Select the load balancer you just provisioned.
  • Select the Overview tab.
  • Select Attachment > Edit.
  • Tier-1 Logical Router: Enter the name of the Tier-1 router you configured, for example T1-NSX-T-EXTERNAL-LB.
  • Click OK.

Attach Load Balancer 1 Attach Load Balancer 2

Step 8: Add and configure a virtual server for the load balancer.

  • Select Load Balancers > Virtual Servers.
  • Click Add.

Add Virtual Server

Configure General Properties for the Virtual Server.

  • Name: VS-NSX-T-EXTERNAL-LB
  • Application Type: Layer 4 TCP
  • Application Profile: default-tcp-lb-app-profile
  • Access Log: Disabled
  • Click Next

Virtual Server General Properties

Configure Virtual Server Identifiers.

  • IP Address: Enter an IP address from the floating pool, such as 10.40.14.250.
  • Port: 443
  • Click Next.

Virtual Server Identifiers

Configure Virtual Server Pool.

  • Click Create a New Server Pool.

Add Server Pool

Configure General Properties for the Server Pool:

  • Name the server pool, for example NSX-T-MGRS-SRV-POOL
  • Load Balancing Algorithm: ROUND_ROBIN
  • Click Next

Server Pool General Properties

Configure SNAT Translation for the Server Pool:

  • Translation Mode: IP List
  • IP address: Enter the NSX-T Virtual Server IP (VIP) address here, for example 10.40.14.250.
  • Click Next.

Server Pool SNAT Translation

Configure Pool Members for the Server Pool:

  • Membership Type: Static.
  • Add all 3 NSX Managers as members, each with port 443.
  • Click Next.

Server Pool Members

Configure Health Monitors:

  • The Health Monitors will be created separately in Step 11.
  • Click Finish.

Health Monitors

Back at the Server Pool screen, click Next.

Server Pool

Configure Load Balancing Profiles:

  • Persistence Profile: Source IP: default-source-ip-lb-persistence-profile
  • Click Finish.

Note: If a proxy is used between the NSX Management Cluster and the PKS Control Plane, do not configure a persistence profile.

Load Balancing Persistence Profile

Step 9: Attach the virtual server to the load balancer.

Attach the Virtual Server to the NSX-T load balancer.

  • Click the Virtual Server and then select the Load Balancers tab.
  • Click Attach.
  • Load Balancer: Specify the load balancer to attach, such as NSX-T-EXTERNAL-LB.
  • Click OK.

Attach Load Balancer 1
Attach Load Balancer 2

Step 10: Verify the load balancer.

Once the load balancer is configured, you should be able to do the following:

  • Ping the NSX-T load balancer VIP address from your local machine.
  • Access the NSX-T load balancer VIP address, for example https://10.40.14.250.

Note: Because you selected the `default-source-ip-lb-persistence-profile`, the URL redirects to the same NSX-T Manager. Persistence is done on the source IP.
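For example, from a machine on a network external to NSX-T, you can run checks such as the following (10.40.14.250 is the example VIP used in this topic):

# Confirm that the VIP is reachable.
ping -c 3 10.40.14.250
# Confirm that HTTPS requests to the VIP are answered by an NSX-T Manager node.
curl -k -I "https://10.40.14.250"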

Step 11: Create a new Active Health Monitor (HM).

Create a new Active Health Monitor (HM) for NSX Management Cluster members. Configure the new Active Health Monitor with the Health Check Protocol LbHttpsMonitor. To do this:

  • Select Load Balancers > Server Pools.
  • Select the server pool previously created (for example, NSX-T-MGRS-SRV-POOL).
  • Select the Overview tab.
  • Click Health Monitor > Edit.
  • Click Create a new active monitor.

Configure Monitor Properties:

  • Name: NSX-T-Mgr-Health-Monitor
  • Health Check Protocol: LbHttpsMonitor
  • Monitoring Port: 443

Monitor Properties

Configure Health Check Parameters.

Configure the new Active HM with specific HTTP request fields as follows:

  • SSL Protocols: Select the supported TLS protocols, such as TLS_V1_2.
  • SSL Ciphers: Select Balanced (recommended)

SSL Protocols

HTTP Request Configuration:

  • HTTP Method: GET
  • HTTP Request URL: /api/v1/reverse-proxy/node/health
  • HTTP Request Version: HTTP_VERSION_1_1

HTTP Request Configuration

HTTP Request Headers:

  • Authorization: Basic YWRtaW46Vk13YXJlMSE=, which is base64 encoded.
  • Content-Type: application/json
  • Accept: application/json

HTTP Request Headers

Note: In the example, "YWRtaW46Vk13YXJlMSE=" is the base64-encoded value of the NSX-T administrator credentials, expressed in the form admin-user:password (admin:VMware1! in this example). You can use a free online service or a local command-line tool to base64-encode your credentials, as shown below.
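The sketch below encodes the example credentials from this topic with a local tool and then manually calls the health endpoint against one of the example NSX-T Manager IP addresses (10.40.206.2); replace both values with your own.

# Base64-encode the administrator credentials (admin:VMware1! in this example).
printf 'admin:VMware1!' | base64
# Output: YWRtaW46Vk13YXJlMSE=

# Manually call the health endpoint that the Active Health Monitor will use.
curl -k -H "Authorization: Basic YWRtaW46Vk13YXJlMSE=" -H "Accept: application/json" "https://10.40.206.2/api/v1/reverse-proxy/node/health"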

HTTP Response Configuration:

  • HTTP Response Code: 200.
  • Click Finish.

Health Check Parameters

Lastly, back at the Health Monitors screen, specify the Active Health Monitor you just created:

  • Active Health Monitor: NSX-T-Mgr-Health-Monitor.
  • Click Finish.

Health Check Parameters Health Check Parameters

Step 12: Create SNAT Rule.

If your Enterprise PKS deployment uses NAT mode, make sure Health Monitoring traffic is correctly SNAT-translated when it leaves the NSX-T topology. Add a specific SNAT rule that intercepts health monitor (HM) traffic generated by the load balancer and translates it to a globally routable IP address, allocated using the same principle as the load balancer VIP. The following screenshot shows an example of a SNAT rule added to the Tier-0 Router to enable HM SNAT translation. In the example, 100.64.128.0/31 is the subnet for the load balancer Tier-1 uplink interface.

To do this, retrieve the IP address of the Tier-1 uplink (the Tier-1 Router to which the NSX-T load balancer instance is attached). In the example below, the Tier-1 uplink IP is 100.64.112.37/31.

Create the following SNAT rule on the Tier-0 Router:

  • Priority: 2000
  • Action: SNAT
  • Source IP: 100.64.112.36/31
  • Destination IP: 10.40.206.0/25
  • Translated IP: 10.40.14.251
  • Click Save

SNAT rule for Health Monitor HTTP traffic, added to Tier0 router
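If you prefer to create the rule through the API rather than the UI, the following is a hedged sketch based on the NSX-T 2.4 Manager NAT API and the example values above; the Tier-0 Router UUID is a placeholder, and the field names should be verified against your NSX-T version.

export NSX_MANAGER_IP_ADDRESS=10.40.206.2
export T0_ROUTER_ID="<uuid-of-Shared-T0>"
# Create the SNAT rule for load balancer health monitor traffic on the Tier-0 Router.
curl -k -u admin:'VMware1!' -X POST -H "Content-Type: application/json" "https://$NSX_MANAGER_IP_ADDRESS/api/v1/logical-routers/$T0_ROUTER_ID/nat/rules" -d '{
  "action": "SNAT",
  "match_source_network": "100.64.112.36/31",
  "match_destination_network": "10.40.206.0/25",
  "translated_network": "10.40.14.251",
  "rule_priority": 2000,
  "enabled": true
}'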

Verify the configuration of the SNAT rule and the health of the server pool:

Verify SNAT

Step 13: Verify that traffic is load balanced.

Verify the load balancer and that traffic is load balanced.

  • Confirm that the status of the Logical Switch for the load balancer is Up.
  • Confirm that the status of the Virtual Server for the load balancer is Up.
  • Confirm that the status of the Server Pool is Up.
  • Open HTTPS sessions from multiple browser clients and confirm that traffic is load balanced across different NSX-T Managers.
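You can also check the operational status through the API. The following is a hedged sketch that assumes the NSX-T 2.4 Manager load balancer API; the service UUID is a placeholder.

export NSX_MANAGER_IP_ADDRESS=10.40.206.2
# List load balancer services and note the ID of the load balancer created in this topic.
curl -k -u admin:'VMware1!' -X GET "https://$NSX_MANAGER_IP_ADDRESS/api/v1/loadbalancer/services"
# Read the runtime status of the service, its virtual server, and the server pool.
export LB_SERVICE_ID="<uuid-of-NSX-T-EXTERNAL-LB>"
curl -k -u admin:'VMware1!' -X GET "https://$NSX_MANAGER_IP_ADDRESS/api/v1/loadbalancer/services/$LB_SERVICE_ID/status"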

You can use the NSX API to validate that secure HTTP requests against the new VIP address are associated with the load balancer's Virtual Server. Relying on the SuperUser Principal Identity created as part of the PKS provisioning steps, you can use curl to query the NSX Management Cluster using either the standard HA VIP address or the newly provisioned Virtual Server VIP. For example:

Before load balancer provisioning is completed:

curl -k -X GET "https://192.168.6.210/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key

After load balancer provisioning is completed:

curl -k -X GET "https://91.0.0.1/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key

The key behavioral difference between the two API calls is that the call to the Virtual Server VIP load balances requests among the NSX-T Server Pool members, while the call to the HA VIP address always selects the same member (the active member) of the NSX Management Cluster.

The remaining configuration step is to update the PKS tile so that the NSX Manager IP address points to the newly provisioned virtual IP address. This enables the components internal to PKS (NCP, NSX-T OSB Proxy, BOSH CPI, and so on) to use the new load balancer. The sections below describe how to update the BOSH and PKS tiles.

Generate a NSX-T Manager CA Certificate

Generate a new NSX-T Manager CA certificate using the external NSX-T load balancer Virtual Server IP address.

Step 1: Create the Certificate Signing Request (CSR) file named nsx-cert.cnf.

There are various ways to configure the CSR. Examples of each are listed below.

Using a fully-qualified domain name (FQDN)

Using a fully-qualified domain name (FQDN), the commonName is a wildcard FQDN (*.pks.vmware.local, for example) and the subjectAltName (SAN) includes the same wildcard FQDN (*.pks.vmware.local, for example) and the load balancer VIP (192.168.160.100, for example).

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = *.pks.vmware.local
[ v3_req ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.pks.vmware.local
DNS.2  = 192.168.160.100

Using the Cluster HA VIP as the commonName

If you have previously configured the Cluster HA VIP, an alternative approach is to use the Cluster HA VIP as the commonName (10.196.188.27, for example), and the subjectAltName (SAN) includes the load balancer VIP (192.168.160.100, for example) and all 3 of the NSX Manager IP addresses.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = 10.196.188.27
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = 192.168.160.100
DNS.2 = 10.196.188.21
DNS.3 = 10.196.188.22
DNS.4 = 10.196.188.23

Step 2: Generate the certificate and private key.

Define environment variables for the NSX_MANAGER_IP_ADDRESS and the NSX_MANAGER_COMMONNAME.

For example:

export NSX_MANAGER_IP_ADDRESS=*.pks.vmware.local
export NSX_MANAGER_COMMONNAME=*.pks.vmware.local

Where:
  • NSX_MANAGER_IP_ADDRESS is a wildcard FQDN (*.pks.vmware.local, for example) or all three of the NSX-T Manager IP addresses.
  • NSX_MANAGER_COMMONNAME is a wildcard FQDN or the Cluster VIP address.

Run the following command to generate the certificate and private key:

openssl req -newkey rsa:2048 -x509 -nodes -keyout nsx.key -new -out nsx.crt -subj /CN=$NSX_MANAGER_COMMONNAME -reqexts SAN -extensions SAN -config <(cat ./nsx-cert.cnf <(printf "[SAN]\nsubjectAltName=DNS:$NSX_MANAGER_COMMONNAME,IP:$NSX_MANAGER_IP_ADDRESS")) -sha256 -days 365
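After the command completes, you can confirm that the subject and SAN entries of the generated certificate came out as intended:

# Inspect the subject and Subject Alternative Name entries of the new certificate.
openssl x509 -in nsx.crt -noout -subject
openssl x509 -in nsx.crt -noout -text | grep -A 1 "Subject Alternative Name"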

Step 3: Add the certificate to each of the NSX-T Manager nodes.

Import the certificate on one of the 3 NSX-T Managers. Once this is done, the certificate is replicated to the other NSX-T Manager instances.

  • Go to one of the NSX Managers in the NSX-T Management Cluster.
  • Select System > Certificates.
  • Select Import > Import Certificate.
  • Name: Enter a name for the certificate.
  • Certificate Contents: Copy and paste the certificate contents into this field.
  • Private Key: Copy the private key into this field.
  • Click Save.

Import Certificate

Step 4: Register the certificate with NSX-T Manager appliances.

Run the following set of commands for each NSX-T Manager instance. The CERTIFICATE_ID should be the same for all 3 NSX-T Manager instances. For example:

export NSX_MANAGER_IP_ADDRESS=10.40.206.2
export CERTIFICATE_ID="ea65ee14-d7d3-49c3-b656-ee0864282654"
curl --insecure -u admin:'VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/node/services/http?action=apply_certificate&certificate_id=$CERTIFICATE_ID"
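If you do not know the certificate ID, you can list the imported certificates and locate the one you imported in Step 3 by its display name. This uses the same example credentials and manager address as above:

# List imported certificates to find the ID of the certificate from Step 3.
curl --insecure -u admin:'VMware1!' -X GET "https://$NSX_MANAGER_IP_ADDRESS/api/v1/trust-management/certificates"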

Step 5: Verify that the certificate is being used.

  • Proceed to the site and enter the username and password.
  • Double-check that the new certificate is used by the site.
  • Check using SSH on the NSX-T Manager node: nsx-manager-1> get certificate api.
  • Access NSX-T Manager using the NSX-T load balancer VIP (for example, https://10.40.14.250).
  • Verify that this node is using the correct certificate.
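You can also inspect the certificate presented at the load balancer VIP from the command line; the example below uses the example VIP from this topic.

# Print the subject and validity dates of the certificate served at the VIP.
echo | openssl s_client -connect 10.40.14.250:443 2>/dev/null | openssl x509 -noout -subject -dates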

Update the NSX-T Manager IP and Certificate for BOSH

  1. Log in to Ops Manager.
  2. Edit the BOSH Tile.
  3. Select the vCenter Config tab.
  4. Update the following fields:
    • NSX Address: Update with the load balancer VIP.
    • NSX CA Cert: Update with the newly generated certificate.
  5. Click Save.

Update BOSH Save BOSH Edits

Update the NSX-T Manager IP and Certificate for PKS

  1. Log in to Ops Manager.
  2. Edit the PKS tile.
  3. Select the Networking tab.
  4. Update the following fields:
    • NSX Manager hostname: Update with the load balancer VIP.
    • NSX Manager CA cert: Update with the newly generated certificate.
  5. Click Save.

Update PKS Tile Save PKS Tile edits

Deploy Enterprise PKS

  1. At the Ops Manager Installation Dashboard, click Review Pending Changes.
  2. Verify that the Update all clusters errand is enabled for Enterprise PKS.
  3. Click Apply Changes.

Please send any feedback you have to pks-feedback@pivotal.io.