Configure VIP Address and Certificate for the NSX Management Cluster

This topic explains how to configure a virtual IP address (VIP) and generate and register a certificate using the VIP for the NSX Management Cluster in Enterprise Pivotal Container Service (Enterprise PKS) environments on vSphere with NSX-T.

Prerequisites

This procedure assumes you have installed 3 NSX-T v2.4 Managers in your environment as described in Installing PKS v1.4 with NSX-T v2.4.

VIP vs. Load Balancer

With NSX-T v2.4, the NSX Controller component is part of the NSX Manager appliance. Previously the NSX Manager was a singleton, and high availability (HA) was achieved by running multiple NSX Controllers. Because NSX-T v2.4 no longer uses a standalone NSX Controller component, you achieve HA by deploying multiple (three) NSX Managers.

Since you have deployed two additional NSX Managers (for a total of three), to provide a single access point for the NSX Manager interface you need to do one of the following:

  - Configure a cluster VIP for the NSX-T Management Cluster.
  - Provision a load balancer for the NSX-T Management Cluster.

Which approach to choose depends on your requirements. If you need to scale, provision a load balancer for the NSX-T Management Cluster. If you do not require scalability, configure a Cluster VIP to achieve HA for the NSX-T Management Cluster.

Cluster Certificate

Both the BOSH Director tile and the PKS tile expect the NSX Manager CA certificate.

If you are installing NSX-T v2.4, you will need to generate and register the certificate using the VIP address.

If you are upgrading to NSX-T v2.4, since the current NSX Manager CA certificate is associated with the original NSX Manager IP address, you need to generate a new NSX Manager CA cert using the VIP address, then register this certificate with NSX-T using the appropriate API.

If you assign a VIP address to the NSX-T Management Cluster, complete the following instructions to generate, import, and register a new NSX Manager CA certificate with the Cluster Certificate API.

If you have deployed a load balancer, see Provision a Load Balancer for the NSX-T v2.4 Management Cluster for instructions to generate, import, and register a New NSX Manager CA Cert for the load balancer.

Instructions for Assigning a VIP and Registering the Cluster Certificate

Complete the following instructions to create a VIP for the NSX Management Cluster and to generate and register a certificate for the NSX Management Cluster.

Step 1: Configure a VIP Address for the NSX Management Cluster

  1. From a browser, log in with admin privileges to an NSX Manager at https://NSX-MANAGER-IP-ADDRESS, where NSX-MANAGER-IP-ADDRESS is the IP address of one of the NSX Manager nodes.
  2. Go to System > Overview.
  3. Click Edit next to the Virtual IP field.
  4. Enter a VIP for the cluster, such as 172.16.11.81. Ensure that the VIP is part of the same subnet as the other NSX Management nodes.
  5. Click Save.
  6. When prompted, click Refresh.

Any API request to NSX-T is directed to the virtual IP address of the cluster, which is owned by the leader node. The leader node then forwards the request to the other components of the appliance.

To verify the VIP and troubleshoot issues:

  1. To check the API leader of the HTTPS group, enter the following command in the NSX Manager CLI: get cluster status verbose.
  2. To troubleshoot VIP issues, verify Reverse Proxy logs at /var/log/proxy/reverse-proxy.log and cluster manager logs at /var/log/cbm/cbm.log.

Step 2: Generate a New NSX Manager CA Certificate and Private Key

To generate a new NSX Manager CA certificate and private key using the VIP address, follow the instructions below. Make sure you use the VIP address, such as 10.40.206.5.

Below is an example Certificate Signing Request (CSR) configuration file named nsx-cert.cnf. In this example, the IP address 10.40.206.5 is the VIP address. Substitute this IP address with the VIP you configured.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = 10.40.206.5
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = 10.40.206.5

To generate the certificate and private key using the above CSR configuration file, run the following commands:

export NSX_MANAGER_IP_ADDRESS=10.40.206.5
export NSX_MANAGER_COMMONNAME=10.40.206.5
openssl req -newkey rsa:2048 -x509 -nodes \
-keyout nsx.key -new -out nsx.crt -subj /CN=$NSX_MANAGER_COMMONNAME \
-reqexts SAN -extensions SAN -config <(cat ./nsx-cert.cnf \
 <(printf "[SAN]\nsubjectAltName=DNS:$NSX_MANAGER_COMMONNAME,IP:$NSX_MANAGER_IP_ADDRESS")) -sha256 -days 365

The result is nsx.crt and nsx.key. You can verify the certificate using the command openssl x509 -in nsx.crt -text -noout.
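To sanity-check the generation approach without a live environment, the following sketch creates a throwaway certificate the same way and confirms the SAN carries the address as both a DNS name and an IP address. The address 192.0.2.10 (a documentation-range address) and the /tmp file names are stand-ins, not values from your environment; the SAN section is appended to the config file rather than passed via process substitution, for portability.

```shell
# Stand-in VIP (documentation range) - substitute your real VIP in practice.
VIP=192.0.2.10

# Minimal CSR config file, mirroring the structure of nsx-cert.cnf.
cat > /tmp/nsx-cert-test.cnf <<EOF
[ req ]
default_bits = 2048
distinguished_name = dn
prompt = no
[ dn ]
commonName = $VIP
EOF

# Append the SAN section directly to the file (the documented command
# achieves the same effect with process substitution).
printf '[ SAN ]\nsubjectAltName = DNS:%s,IP:%s\n' "$VIP" "$VIP" >> /tmp/nsx-cert-test.cnf

# Generate a self-signed cert and key using the SAN section.
openssl req -newkey rsa:2048 -x509 -nodes \
  -keyout /tmp/nsx-test.key -out /tmp/nsx-test.crt -subj "/CN=$VIP" \
  -extensions SAN -config /tmp/nsx-cert-test.cnf -sha256 -days 365 2>/dev/null

# The SAN should list the VIP as both a DNS entry and an IP entry.
openssl x509 -in /tmp/nsx-test.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the grep output does not show both DNS and IP Address entries for your VIP, the config file or the -extensions flag was not picked up.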

Step 3: Import the New Certificate to NSX Manager

Complete the following steps to import the certificate to the NSX Manager:

  1. Log in to the NSX Manager UI.

  2. Navigate to System > Trust > Certificates.

  3. Click Import > Import Certificate.

    Note: Make sure you select Import Certificate and not Import CA Certificate.

  4. Give the certificate a unique name, such as NSX-API-CERT-NEW.

    Note: Use a unique name for the new certificate you import. The default NSX Manager CA certificate is typically named NSX-API-CERT.

  5. Browse to and select the CA certificate and private key you generated in the previous step.

  6. Click Save.


Step 4: Register the New Certificate with NSX Manager

Once you have imported the NSX Manager certificate, register this certificate with the NSX Management cluster using a cURL command against the Cluster Certificate API.

First, create environment variables for the VIP address and the certificate ID. In this example, 10.40.206.5 is the VIP address. The certificate ID is obtained from the NSX Manager UI where you imported the certificate.

export NSX_MANAGER_IP_ADDRESS=10.40.206.5
export CERTIFICATE_ID="63bb6646-052c-49df-b603-64d7e5bdb5bf"
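If you prefer to obtain the certificate ID from the API rather than the UI, NSX-T lists imported certificates at /api/v1/trust-management/certificates; each entry carries an id and a display_name. The sketch below shows the live request in a comment (credentials and VIP are placeholders) and demonstrates the parsing against a canned sample response with hypothetical values, so it runs without a live manager.

```shell
# Live request (placeholder credentials and VIP):
#   curl --insecure -u admin:'PASSWORD' \
#     "https://$NSX_MANAGER_IP_ADDRESS/api/v1/trust-management/certificates"
#
# Parsing demonstrated on a canned sample response (hypothetical values):
RESPONSE='{"results":[{"id":"63bb6646-052c-49df-b603-64d7e5bdb5bf","display_name":"NSX-API-CERT-NEW"}]}'

# Extract the first "id" value from the JSON.
CERTIFICATE_ID=$(printf '%s' "$RESPONSE" | grep -o '"id":"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$CERTIFICATE_ID"
```

In a real response, match on the display_name you chose (such as NSX-API-CERT-NEW) before extracting the id.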

Next, register the new NSX-T Manager CA cert using a cURL request to the Cluster Certificate API. Substitute PASSWORD with the password for NSX Manager.

curl --insecure -u admin:'PASSWORD' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=$CERTIFICATE_ID"

The certificate is registered with the NSX Manager that the VIP address is associated with.

To verify, use a browser to go to the VIP address of the NSX Manager. Log in and confirm that the site (accessed using the VIP address) serves the new certificate.

To further verify, SSH to each NSX Manager host and run the following two commands. All certificates returned should be the same.

get certificate api
get certificate cluster
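As an alternative to SSHing to each node, you could compare the SHA-256 fingerprint of the certificate each manager serves from your workstation. In the sketch below, the node IPs in the comment are hypothetical, and the fingerprint step is demonstrated on a locally generated stand-in certificate so it runs anywhere.

```shell
# Live check (hypothetical node IPs - substitute your managers and the VIP):
#   for HOST in 172.16.11.57 172.16.11.58 172.16.11.59 172.16.11.81; do
#     echo | openssl s_client -connect $HOST:443 2>/dev/null | \
#       openssl x509 -noout -fingerprint -sha256
#   done
# All fingerprints returned should be identical.

# Fingerprint step demonstrated on a locally generated stand-in cert:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -subj "/CN=stand-in" -days 1 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -fingerprint -sha256
```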

Step 5: Update the PKS and BOSH Tiles with the NSX Manager Cluster Certificate

The last procedure in the upgrade process is to update the BOSH Tile and the PKS Tile with the new VIP address for the NSX Manager and the new NSX-T Manager CA cert (using VIP info).

To update the BOSH tile:

  1. Log into Ops Manager.
  2. In the BOSH Director tile, select the vCenter Configuration tab.
  3. In the NSX Address field, enter the VIP address for the NSX Management Cluster.
  4. In the NSX CA Cert field, enter the new CA certificate for the NSX Management Cluster that uses the VIP address.
  5. Save the BOSH tile changes.

To update the PKS tile:

  1. Log into Ops Manager.
  2. In the PKS tile, select the Networking tab.
  3. In the NSX Manager hostname field, enter the VIP address for the NSX Management Cluster.
  4. In the NSX Manager CA Cert field, enter the new CA certificate for the NSX Management Cluster (that uses the VIP address).
  5. Save the PKS tile changes.

Step 6: Upgrade Kubernetes Clusters

Once you have updated the PKS and BOSH tiles, apply the changes. Be sure to run the Upgrade all clusters errand. Running this errand updates the NCP configuration on all Kubernetes clusters with the new NSX-T Management Cluster VIP and CA certificate.

To update Kubernetes clusters:

  1. Go to the Installation Dashboard in Ops Manager.
  2. Click Review Pending Changes.
  3. Expand the Errands list for PKS.
  4. Ensure that the Upgrade all clusters errand is selected.
  5. Click Apply Changes.

Please send any feedback you have to pks-feedback@pivotal.io.