Provisioning a Load Balancer for the NSX-T Management Cluster


This topic describes how to deploy a load balancer for the NSX-T Management Cluster for Enterprise PKS.

Note: The instructions provided in this topic are for NSX-T v2.4.

About the NSX-T Management Cluster

This section describes the NSX-T Management Cluster and the external load balancer for use with Enterprise PKS.

Overview

NSX-T v2.4 introduces a converged management and control plane that is referred to as the NSX-T Management Cluster. This deployment model delivers high availability of the NSX-T Manager nodes, reduces the likelihood of NSX-T operational failures, and provides API and UI clients with multiple endpoints or a single VIP for high availability.

While using a VIP to access the NSX-T Management layer provides high availability, it does not balance the workload. To avoid overloading a single NSX-T Manager, as can happen when HA VIP addressing is used, you can provision an NSX-T load balancer so that NCP and other components orchestrated by Enterprise PKS distribute load efficiently among the NSX-T Manager nodes.

The diagram below shows an external load balancer fronting the NSX Manager nodes. The load balancer is deployed within the NSX-T environment and intercepts requests to the NSX-T Management Cluster. The load balancer selects one of the NSX-T Manager nodes to handle the request and rewrites the destination IP address to reflect the selection.

NSX Management Cluster with Load Balancer

Note: The load balancer VIP load balances traffic to all NSX-T Manager instances in round-robin fashion. A Cluster HA VIP, on the other hand, sends traffic only to the single NSX-T Manager instance that is mapped to the Cluster HA VIP; the other NSX-T Manager instances do not receive any traffic.

Component Interaction with the NSX-T Management Cluster

Various components in an Enterprise PKS deployment interact with the NSX Management Cluster.

PKS Control Plane components:

  • Ops Manager
  • BOSH CPI (Cloud Provider Interface)
  • NSX-T OSB Proxy

Kubernetes Cluster components:

  • BOSH jobs running on Kubernetes master nodes to prepare and update Kubernetes clusters
  • NSX-T Container Plugin (NCP)

The PKS Control Plane components and the BOSH jobs interact with the NSX-T Management Cluster only sporadically. NCP, however, is vital to the networking of each Kubernetes cluster and can place heavy demands on the NSX-T API processing capacity of the NSX Management Cluster. When a large number of Kubernetes clusters undergo concurrent activity, such as Kubernetes Pod and Service lifecycle operations, multiple NCP instances can push NSX-T API processing to its limits.

Load Balancer Provisioning for the NSX-T Management Cluster

For scalability, consider deploying a load balancer in front of the NSX-T Manager nodes. As a general rule of thumb, if you are using Enterprise PKS with NSX-T to deploy more than 25 Kubernetes clusters, you should use a load balancer in front of the NSX-T Management Cluster.

Note: If you do not require scalability, you can configure a Cluster VIP to achieve HA for the NSX-T Management Cluster. See HA VIP addressing.

For general purposes, a small NSX-T load balancer is sufficient. However, refer to Scaling Load Balancer Resources to ensure that the load balancer you choose meets your needs.

When provisioning the load balancer, you configure a virtual server on the load balancer and associate a virtual IP address (VIP) with it. This load balancer VIP can be used as the entry point for PKS- and NCP-related API requests to the NSX-T Control Plane. The virtual server is backed by a server pool that contains all NSX-T Management Cluster nodes as members, and health monitoring is enabled on the pool so that node failures in the NSX-T Management Cluster are detected and handled quickly.

Prerequisites for Provisioning a Load Balancer for the NSX-T Management Cluster

Before you provision a load balancer for the NSX-T Management Cluster, ensure that your environment is configured as follows:

  • NSX-T is installed and configured for servicing Enterprise PKS. See Installing and Configuring NSX-T for Enterprise PKS.
  • Transport zone, transport node, Edge Cluster, Edge Node connectivity, and Tier-0 Router are deployed and operational with proper static routes or BGP. See Installing and Configuring NSX-T for Enterprise PKS.
  • An NSX-T Management Cluster with 3 NSX Manager nodes is provisioned. See Installing and Configuring NSX-T for Enterprise PKS.
  • Your NSX-T environment has enough Edge Cluster resources to deploy a new small-size load balancer VM.
  • A dedicated IP address is available to be used as the VIP and SNAT IP address for the new load balancer. The load balancer VIP address must be globally routable from networks external to NSX-T. This IP address can be carved out from the standard IP pool required by Enterprise PKS.

Provision the NSX-T Load Balancer for the Management Cluster

To provision the load balancer for the NSX-T Management Cluster, complete the following steps.

Step 1: Log in to an NSX-T Manager Node

  1. Log in to an NSX-T Manager Node.

    Note: You can connect to any NSX-T Manager Node in the management cluster to provision the load balancer.

  2. Select the Advanced Networking & Security tab.

    Note: You must use the Advanced Networking & Security tab in NSX-T Manager to create, read, update, and delete all NSX-T networking objects used for Enterprise PKS.

Step 2: Configure a Logical Switch

Add and configure a new logical switch for the load balancer.

  • Select Switching.
  • Click Add.
  • Configure the logical switch:
    • Name: Enter a name for the logical switch, such as LS-NSX-T-EXTERNAL-LB.
    • Transport Zone: Select the overlay transport zone, such as TZ-Overlay.
  • Click Add.

Add New Logical Switch
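
If you prefer to script this step, the same logical switch can also be created with the NSX-T Management API. The following curl sketch is an assumption-based example: NSX_MGR, NSX_ADMIN_PASSWORD, and the transport zone UUID are placeholders you must supply, and the request body should be verified against the NSX-T 2.4 API reference.

    # Placeholder values; look up the overlay transport zone UUID with
    # GET /api/v1/transport-zones.
    NSX_MGR="nsx-mgr-1.example.com"
    TZ_OVERLAY_ID="<TZ-Overlay-UUID>"

    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/logical-switches" \
      -d '{
        "display_name": "LS-NSX-T-EXTERNAL-LB",
        "transport_zone_id": "'"${TZ_OVERLAY_ID}"'",
        "admin_state": "UP",
        "replication_mode": "MTEP"
      }'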

Step 3: Configure a Tier-1 Logical Router

Configure a new Tier-1 Router in Active/Standby mode. Create the Tier-1 Router on the same Edge Cluster where the Tier-0 Router that provides external connectivity to vCenter and NSX Manager is located.

  • Select Routers.
  • Click Add > Tier-1 Router.
  • Configure the new Tier-1 Router and click Add.
    • Name: T1-NSX-T-EXTERNAL-LB, for example.
    • Tier-0 Router: Connect the Tier-1 Router to the Tier-0 Router, for example Shared-T0.
    • Edge Cluster: Select the same Edge Cluster where the Tier-0 Router is located, such as edgecluster1.
    • Edge Cluster Members: Select nsx-edge-1-tn and nsx-edge-2-tn, for example.

Configure Tier-1 Router
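
As with the logical switch, the Tier-1 Router can also be created through the Management API. This sketch assumes placeholder UUIDs (look up the Edge Cluster UUID with GET /api/v1/edge-clusters). Note that the UI connects the new Tier-1 Router to the Tier-0 Router for you, whereas the API requires creating a pair of linked router ports as a separate step.

    EDGE_CLUSTER_ID="<edgecluster1-UUID>"

    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/logical-routers" \
      -d '{
        "display_name": "T1-NSX-T-EXTERNAL-LB",
        "router_type": "TIER1",
        "high_availability_mode": "ACTIVE_STANDBY",
        "edge_cluster_id": "'"${EDGE_CLUSTER_ID}"'"
      }'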

Step 4: Advertise the Routes

Configure Route Advertisement for the Tier-1 Router.

  • Select the Tier-1 Router.
  • Select the Routing tab.
  • Select Route Advertisement > Edit.
  • Enable Route Advertisement for all load balancer VIP routes for the Tier-1 Router:
    • Status: enabled
    • Advertise all LB VIP routes: yes
    • Advertise all LB SNAT IP routes: yes
    • Click Save

Advertise Routes
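
The same route advertisement settings can be applied with the Management API. The sketch below assumes a placeholder Tier-1 Router UUID; the advertisement resource is updated with a read-modify-write, so the _revision value returned by the GET must be echoed back in the PUT.

    T1_ROUTER_ID="<T1-NSX-T-EXTERNAL-LB-UUID>"

    # Read the current advertisement config and note its _revision value.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      "https://${NSX_MGR}/api/v1/logical-routers/${T1_ROUTER_ID}/routing/advertisement"

    # Enable advertisement of LB VIP and LB SNAT IP routes.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X PUT "https://${NSX_MGR}/api/v1/logical-routers/${T1_ROUTER_ID}/routing/advertisement" \
      -d '{
        "resource_type": "AdvertisementConfig",
        "enabled": true,
        "advertise_lb_vip": true,
        "advertise_lb_snat_ip": true,
        "_revision": 0
      }'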

Step 5: Verify Router and Switch Configuration

Verify successful creation and configuration of the logical switch and router.

  • Select the Tier-1 Router.
  • Select the Configuration tab and the Ports option.
  • Verify that the router has a single linked port connecting the Tier-1 Router to the Tier-0 Router.

Verify Logical Switch and Router
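
You can also verify the linked port from the command line. This hedged example lists the ports on the Tier-1 Router; the response should include a single linked port that connects to the Tier-0 Router.

    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      "https://${NSX_MGR}/api/v1/logical-router-ports?logical_router_id=${T1_ROUTER_ID}"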

Step 6: Configure a Small Load Balancer

Create a new small-size load balancer and attach it to the Tier-1 Router created previously.

Note: The small-size VM is suitable for the NSX Management Cluster load balancer. Make sure you have enough Edge Cluster resources to provision a small load balancer.

  • Select Load Balancers.
  • Click Add.
  • Enter a Name for the load balancer.
  • Select the SMALL size load balancer.
  • Click OK.

Add Load Balancer
Configure Load Balancer
Confirm Load Balancer

Step 7: Attach the Load Balancer to the Router

Attach the load balancer to the Tier-1 Router previously created.

  • Select the load balancer you just provisioned.
  • Select the Overview tab.
  • Select Attachment > Edit.
  • Tier-1 Logical Router: Enter the name of the Tier-1 Router you configured, for example T1-NSX-T-EXTERNAL-LB.
  • Click OK.

Attach Load Balancer 1
Attach Load Balancer 2
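
Steps 6 and 7 can also be combined into a single Management API call, because the load balancer service object carries its Tier-1 Router attachment. The following sketch uses the placeholder variables defined above and should be checked against the NSX-T 2.4 API reference.

    # Create a SMALL load balancer service attached to the Tier-1 Router.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/loadbalancer/services" \
      -d '{
        "display_name": "NSX-T-EXTERNAL-LB",
        "size": "SMALL",
        "enabled": true,
        "attachment": {
          "target_type": "LogicalRouter",
          "target_id": "'"${T1_ROUTER_ID}"'"
        }
      }'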

Step 8: Configure a Virtual Server

Add and configure a virtual server for the load balancer.

  • Select Load Balancers > Virtual Servers.
  • Click Add.

Add Virtual Server

Configure General Properties for the virtual server:

  • Name: VS-NSX-T-EXTERNAL-LB
  • Application Types: Layer 4 TCP
  • Application Profile: default-tcp-lb-app-profile
  • Access Log: Disabled
  • Click Next

Virtual Server General Properties

Configure Virtual Server Identifiers for the virtual server:

  • IP Address: Enter an IP address from the floating pool, such as 10.40.14.250.
  • Port: 443
  • Click Next.

Virtual Server Identifiers

Configure Virtual Server Pool for the virtual server:

  • Click Create a New Server Pool.

Add Server Pool

Configure General Properties for the server pool:

  • Name: For example NSX-T-MGRS-SRV-POOL
  • Load Balancing Algorithm: ROUND_ROBIN
  • Click Next

Server Pool General Properties

Configure SNAT Translation for the server pool:

  • Translation Mode: IP List
  • IP address: Enter the virtual server IP (VIP) address here, for example 10.40.14.250.
  • Click Next.

Server Pool SNAT Translation

Configure Pool Members for the server pool:

  • Membership Type: Static.
  • Static Membership: Add all 3 NSX Managers as members by entering the node name, IP address, and port (443) for each node.
  • Click Next.

Server Pool Members

Configure Health Monitors:

  • Skip Health Monitors for now; you create the Active Health Monitor separately in Step 11.
  • Click Finish.

Health Monitors

Back at the Server Pool screen, click Next.

Server Pool

Configure Load Balancing Profiles for the load balancer:

  • Persistence Profile > Source IP: Select default-source-ip-lb-persistence-profile
  • Click Finish.

Load Balancing Persistence Profile
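
The server pool and virtual server configured in this step correspond to pool and virtual server objects in the load balancer API. The sketch below uses the example names and IP addresses from this topic; the member IP addresses, profile UUIDs, and pool UUID are placeholders, and field names (for example, the string-typed port fields) should be confirmed against the NSX-T 2.4 API reference.

    # Server pool: ROUND_ROBIN, SNAT to the VIP, and the three NSX-T Manager
    # nodes as static members.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/loadbalancer/pools" \
      -d '{
        "display_name": "NSX-T-MGRS-SRV-POOL",
        "algorithm": "ROUND_ROBIN",
        "snat_translation": {
          "type": "LbSnatIpPool",
          "ip_addresses": [ { "ip_address": "10.40.14.250" } ]
        },
        "members": [
          { "display_name": "nsx-mgr-1", "ip_address": "<nsx-mgr-1-IP>", "port": "443" },
          { "display_name": "nsx-mgr-2", "ip_address": "<nsx-mgr-2-IP>", "port": "443" },
          { "display_name": "nsx-mgr-3", "ip_address": "<nsx-mgr-3-IP>", "port": "443" }
        ]
      }'

    # Layer 4 TCP virtual server on the VIP with source-IP persistence.
    # Look up profile UUIDs with GET /api/v1/loadbalancer/application-profiles
    # and GET /api/v1/loadbalancer/persistence-profiles.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/loadbalancer/virtual-servers" \
      -d '{
        "display_name": "VS-NSX-T-EXTERNAL-LB",
        "ip_address": "10.40.14.250",
        "port": "443",
        "application_profile_id": "<default-tcp-lb-app-profile-UUID>",
        "persistence_profile_id": "<default-source-ip-lb-persistence-profile-UUID>",
        "pool_id": "<NSX-T-MGRS-SRV-POOL-UUID>",
        "enabled": true
      }'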

Step 9: Attach the Virtual Server to the Load Balancer

Attach the virtual server to the NSX-T load balancer.

  • In the Load Balancing panel, select the Virtual Server you created.
  • Select the Load Balancers tab.
  • Click Attach.
  • Load Balancer: Select the load balancer to attach, such as NSX-T-EXTERNAL-LB.
  • Click OK.

Attach Load Balancer 1
Attach Load Balancer 2
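
In the API, this attachment corresponds to adding the virtual server's UUID to the load balancer service's virtual_server_ids list. The sketch below assumes placeholder UUIDs; because the service is updated with a PUT, the body must echo the _revision (and existing settings) returned by a prior GET.

    LB_SERVICE_ID="<NSX-T-EXTERNAL-LB-UUID>"
    VS_ID="<VS-NSX-T-EXTERNAL-LB-UUID>"

    # Read the current service definition and note its _revision.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      "https://${NSX_MGR}/api/v1/loadbalancer/services/${LB_SERVICE_ID}"

    # Send it back with the virtual server added.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X PUT "https://${NSX_MGR}/api/v1/loadbalancer/services/${LB_SERVICE_ID}" \
      -d '{
        "display_name": "NSX-T-EXTERNAL-LB",
        "size": "SMALL",
        "enabled": true,
        "attachment": { "target_type": "LogicalRouter", "target_id": "'"${T1_ROUTER_ID}"'" },
        "virtual_server_ids": [ "'"${VS_ID}"'" ],
        "_revision": 1
      }'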

Step 10: Verify the Load Balancer

Once the load balancer is configured, verify it by doing the following:

  • Ping the NSX-T load balancer VIP address from your local machine.
  • Access the NSX-T Manager interface using the load balancer VIP address, for example https://10.40.14.250.

Note: Requests from a given client are always directed to the same NSX-T Manager because persistence is based on the source IP, per the persistence profile you selected.
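
A quick way to run both checks from your local machine (using the VIP value from the example above):

    LB_VIP="10.40.14.250"

    # Basic reachability check.
    ping -c 3 "${LB_VIP}"

    # Confirm that the NSX-T Manager UI/API answers on the VIP
    # (an HTTP status code such as 200 or a redirect is expected).
    curl -k -s -o /dev/null -w '%{http_code}\n' "https://${LB_VIP}/"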

Step 11: Create an Active Health Monitor (HM)

Create a new Active Health Monitor (HM) for NSX Management Cluster members using the NSX-T Health Check protocol.

  • Select Load Balancers > Server Pools.
  • Select the server pool you created, for example NSX-T-MGRS-SRV-POOL.
  • Select the Overview tab.
  • Click Health Monitor > Edit.
  • Click Create a new active monitor.

Configure Monitor Properties:

  • Name: NSX-T-Mgr-Health-Monitor
  • Health Check Protocol: LbHttpsMonitor
  • Monitoring Port: 443

Monitor Properties

Configure Health Check Parameters.

Configure the new Active HM with specific HTTP request fields as follows:

  • SSL Protocols: Select the TLS_V1_1 and TLS_V1_2 protocols.
  • SSL Ciphers: Select Balanced (recommended)

SSL Protocols

Configure the HTTP Request Configuration settings for the health monitor:

  • HTTP Method: GET
  • HTTP Request URL: /api/v1/reverse-proxy/node/health
  • HTTP Request Version: HTTP_VERSION_1_1

HTTP Request Configuration

Configure the HTTP Request Headers for the health monitor:

  • Authorization: Basic YWRtaW46Vk13YXJlMSE=, which is the base64-encoded value of the NSX-T administrator credentials
  • Content-Type: application/json
  • Accept: application/json

HTTP Request Headers

Note: In the example, YWRtaW46Vk13YXJlMSE= is the base64-encoded value of the NSX-T administrator credentials, expressed in the form admin-user:password. You can use the free online service www.base64encode.org to base64-encode your NSX-T administrator credentials, or encode them locally as shown below.
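
For example, you can generate the header value locally with a command like the following (replace the example credentials with your own):

    # Base64-encode "admin-user:password" for the Authorization header.
    printf '%s' 'admin:VMware1!' | base64
    # Output: YWRtaW46Vk13YXJlMSE=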

Configure the HTTP Response Configuration for the health monitor:

  • HTTP Response Code: 200
  • Click Finish.

Health Check Parameters

At the Health Monitors screen, specify the Active Health Monitor you just created:

  • Active Health Monitor: Select the health monitor you just created, for example NSX-T-Mgr-Health-Monitor.
  • Click Finish.

Health Check Parameters
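
For reference, a hedged sketch of the same health monitor created through the Management API is shown below. Field names follow the LbHttpsMonitor schema as documented in the NSX-T 2.4 API reference and should be verified there before use; the monitor is then associated with the server pool by adding its UUID to the pool's active_monitor_ids.

    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/loadbalancer/monitors" \
      -d '{
        "resource_type": "LbHttpsMonitor",
        "display_name": "NSX-T-Mgr-Health-Monitor",
        "monitor_port": 443,
        "request_method": "GET",
        "request_url": "/api/v1/reverse-proxy/node/health",
        "request_version": "HTTP_VERSION_1_1",
        "request_headers": [
          { "header_name": "Authorization", "header_value": "Basic YWRtaW46Vk13YXJlMSE=" },
          { "header_name": "Content-Type",  "header_value": "application/json" },
          { "header_name": "Accept",        "header_value": "application/json" }
        ],
        "response_status_codes": [ 200 ]
      }'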

Step 12: Create SNAT Rule

If your Enterprise PKS deployment uses NAT mode, make sure health monitoring traffic is correctly SNAT-translated when it leaves the NSX-T topology. Add a SNAT rule that intercepts health monitor traffic generated by the load balancer and translates it to a globally routable IP address, allocated using the same principle as the load balancer VIP. The following screenshots illustrate an example SNAT rule added to the Tier-0 Router to enable this translation. In the example, 100.64.112.36/31 is the subnet of the load balancer's Tier-1 uplink interface.

To create the SNAT rule, first retrieve the IP address of the Tier-1 uplink (the uplink of the Tier-1 Router to which the NSX-T load balancer is attached). In the example below, the Tier-1 uplink IP address is 100.64.112.37/31.

Retrieve IP of the T1 Uplink

Create the following SNAT rule on the Tier-0 Router:

  • Priority: 2000
  • Action: SNAT
  • Source IP: 100.64.112.36/31, for example
  • Destination IP: 10.40.206.0/25, for example
  • Translated IP: 10.40.14.251, for example
  • Click Save

SNAT rule for Health Monitor HTTP traffic, added to the Tier-0 Router
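
The equivalent NAT rule can also be created on the Tier-0 Router through the Management API. This sketch reuses the example networks and translated IP from this topic; the Tier-0 Router UUID is a placeholder, and the body should be verified against the NatRule schema in the NSX-T 2.4 API reference.

    T0_ROUTER_ID="<Shared-T0-UUID>"

    # SNAT rule: health-monitor traffic sourced from the load balancer's
    # Tier-1 uplink subnet, destined to the NSX-T Manager network, is
    # translated to a globally routable address.
    curl -k -u "admin:${NSX_ADMIN_PASSWORD}" \
      -H "Content-Type: application/json" \
      -X POST "https://${NSX_MGR}/api/v1/logical-routers/${T0_ROUTER_ID}/nat/rules" \
      -d '{
        "action": "SNAT",
        "match_source_network": "100.64.112.36/31",
        "match_destination_network": "10.40.206.0/25",
        "translated_network": "10.40.14.251",
        "rule_priority": 2000,
        "enabled": true
      }'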

  • Verify configuration of the SNAT rule and server pool health:

Verify SNAT

Step 13: Verify that NSX Manager Traffic Is Load Balanced

Verify that the load balancer is working and that traffic is load balanced across the NSX-T Manager nodes.

  • Confirm that the status of the Logical Switch for the load balancer is Up.
  • Confirm that the status of the Virtual Server for the load balancer is Up.
  • Confirm that the status of the Server Pool is Up.
  • Open an HTTPS session using multiple browser clients and confirm that traffic is load balanced across the different NSX-T Managers.

You can use the NSX-T API to validate that secure HTTP requests against the new VIP address are handled by the load balancer's virtual server. Using the SuperUser Principal Identity created as part of the Enterprise PKS provisioning steps, you can cURL the NSX-T Management Cluster using either the standard HA VIP address or the newly provisioned virtual server VIP. For example:

Before load balancer provisioning is completed:

curl -k -X GET "https://192.168.6.210/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key

After load balancer provisioning is completed:

curl -k -X GET "https://91.0.0.1/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key

The key behavioral difference between the two API calls is that the call to the virtual server VIP load balances requests among the NSX-T server pool members, whereas the call to the HA VIP address always selects the same member (the active member) of the NSX Management Cluster.

The remaining configuration step is to update the PKS tile so that the NSX Manager IP address setting uses the newly provisioned virtual IP address. This enables the components internal to PKS (NCP, NSX-T OSB Proxy, BOSH CPI, and so on) to use the new load balancer.



Please send any feedback you have to pks-feedback@pivotal.io.