Upgrading Enterprise PKS with NSX-T

This topic explains how to upgrade Enterprise Pivotal Container Service (Enterprise PKS) for environments using vSphere with NSX-T.

Before You Upgrade

This section describes the activities you must perform before upgrading Enterprise PKS.

Consult Compatibility Charts

For information about PKS with NSX-T and Ops Manager compatibility, see the Enterprise PKS Release Notes.

Determine Your Upgrade Path

For information about the supported upgrade path, see the Upgrade Path section of the Enterprise PKS Release Notes.

Prepare to Upgrade

If you have not already, complete all of the steps in the Upgrade Preparation Checklist for PKS v1.4.

Upgrade Enterprise PKS with NSX-T

This section describes the steps required to upgrade to Enterprise PKS v1.4 and NSX-T v2.4.

Step 1: Upgrade to a Supported Version of Ops Manager

Before you upgrade to PKS v1.4.x, you must upgrade to a supported version of Ops Manager. For more information, see the Enterprise PKS Release Notes.

To upgrade Ops Manager, follow the upgrade checklist and procedures for the Ops Manager version you are targeting:

Ops Manager v2.4.x

Ops Manager v2.5.x

Ops Manager v2.6.x

Step 2: Upgrade to PKS v1.4.x

Upgrade the PKS tile from a supported version to PKS v1.4.x. When you upgrade the PKS tile, the target version of NCP is installed (v2.4.0 in this case). This must be done before you upgrade to NSX-T v2.4.x.

To upgrade the PKS tile to v1.4.x, complete the following steps. (An equivalent om CLI flow is sketched after the list.)

  1. Download the PKS v1.4.x product file from Pivotal Network.

  2. Navigate to the Ops Manager Installation Dashboard and click Import a Product.

  3. Browse to the Enterprise PKS product file and select it. Uploading the file takes several minutes.

  4. Under the Import a Product button, click + next to Enterprise PKS. This adds the tile to your staging area.

  5. Import the required Xenial stemcell for PKS v1.4.0 (250.25 or later).

    • On the Enterprise PKS tile, click on the Missing stemcell link.
    • In the Stemcell Library, locate Enterprise PKS and note the required stemcell version.
    • Visit the Stemcells for PCF (Ubuntu Xenial) page on Pivotal Network, and download the required stemcell version for vSphere.
    • Return to the Installation Dashboard in Ops Manager, and click on Stemcell Library.
    • On the Stemcell Library page, click Import Stemcell and select the stemcell file you downloaded from Pivotal Network.
    • Select the Enterprise PKS product and click Apply Stemcell to Products.
    • Verify that Ops Manager successfully applied the stemcell.
  6. Return to the Installation Dashboard in Ops Manager.

  7. Click Review Pending Changes. For more information about this Ops Manager page, see Reviewing Pending Product Changes.

  8. In the Enterprise PKS tile, click Errands.

  9. Under Post-Deploy Errands, verify that the listed errands are configured as follows:

    • NSX-T Validation errand: Set to On
    • Upgrade all clusters errand: Set to Default (On)
    • Create pre-defined Wavefront alerts errand: Set to Default (Off)
    • Run smoke tests: Set to On. The errand uses the PKS Command Line Interface (PKS CLI) to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the Enterprise PKS tile is aborted.

    WARNING: Because of the upgrade requirements for PKS v1.4 with NSX-T v2.4, you deploy the PKS tile twice. You can enable the Upgrade all clusters errand now or when you perform the second deployment of PKS. If you are performing the upgrade during a maintenance window, it is not necessary to upgrade the Kubernetes clusters at this time, so you can deselect the Upgrade all clusters errand. However, if you want your Kubernetes clusters to be upgraded immediately, ensure that the Upgrade all clusters errand is enabled.

  10. Click Apply Changes to deploy the PKS 1.4.x tile.
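
If you prefer to drive Ops Manager from the command line, the same upload-and-stage flow can be scripted with the om CLI. This is a minimal sketch: the Ops Manager address, credentials, and file names are placeholders, and the exact product and stemcell file names depend on what you downloaded from Pivotal Network. Errand configuration and Apply Changes are still performed in the Ops Manager UI as described above.

  # Sketch: upload and stage the PKS v1.4.x tile and its stemcell with the om CLI (placeholder values).
  om --target https://OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
    upload-product --product ./pivotal-container-service-1.4.x-build.NN.pivotal

  om --target https://OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
    upload-stemcell --stemcell ./bosh-stemcell-250.NN-vsphere-esxi-ubuntu-xenial-go_agent.tgz

  om --target https://OPS-MANAGER-FQDN --username admin --password 'PASSWORD' --skip-ssl-validation \
    stage-product --product-name pivotal-container-service --product-version 1.4.x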

Step 3: Upgrade from NSX-T v2.3.1 to NSX-T v2.4.x

This section describes how to upgrade to NSX-T v2.4.x. It includes requirements and recommendations for before, during, and after the upgrade.

Before the Upgrade

Before you upgrade to NSX-T v2.4.x, review the following requirements and recommendations:

  • Supported vSphere and patch version: Before upgrading NSX-T, make sure you are on the supported vSphere and patch version. Refer to the VMware Product Interoperability Matrices for NSX-T v2.4 and vSphere v6.5 and v6.7.

  • Review upgrade path: See the Upgrade Path section of the Release Notes for more information.

Upgrade to NSX-T 2.4.x

To upgrade to NSX-T 2.4.x, see Upgrading NSX-T Data Center in the VMware documentation.

Note: When you upgrade NSX-T, at the stage where the ESXi Transport Nodes are upgraded ("Hosts"), you may want to create a separate host group for each ESXi host so that only hosts in maintenance mode are upgraded. In vCenter, put each ESXi Transport Node (TN) host into maintenance mode one at a time. Create a host group for that ESXi host and upgrade only that group, then take the host out of maintenance mode. Repeat this process for all ESXi TN hosts.
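
If you script that maintenance-mode step, the govc CLI can toggle maintenance mode on each host. This is a sketch with a placeholder host name, assuming govc is already configured to point at your vCenter; command forms may vary slightly between govc versions.

  # Put one ESXi Transport Node host into maintenance mode before upgrading its host group.
  govc host.maintenance.enter esxi-01.example.com

  # Upgrade the host group containing only this host in the NSX-T upgrade coordinator, then exit maintenance mode.
  govc host.maintenance.exit esxi-01.example.com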

After the Upgrade

After you upgrade to NSX-T 2.4.x, review the following requirements and recommendations:

  • T0 routers in Advanced Networking Configuration tab: After you upgrade to NSX-T 2.4, the T0 router(s) and all other management plane objects can be seen only from the Advanced Networking Configuration tab. They will not be migrated to the new Policy UI.

  • NSX-T v2.4 architectural changes: There are architectural changes in NSX-T v2.4: the NSX Controller is now a component of the NSX Manager. After the NSX-T upgrade is complete, you will have a single NSX-T Manager node. Power off the NSX Controllers; at the end of the upgrade, you can delete the NSX Controller VMs. For more information, see Delete NSX Controllers in the NSX-T documentation.

  • Verify functionality of PKS environment after upgrade: After the upgrade to NSX-T v2.4 is complete, you may want to verify that your PKS environment is functioning properly by logging in to PKS and creating a small test cluster. If you cannot do this, troubleshoot the upgrade before proceeding. For more information, see Troubleshooting Upgrade Failures in the NSX-T documentation.

Step 4: Deploy Two Additional NSX Managers

With NSX-T v2.4, the NSX Controller component is now part of the NSX Manager. Previously the NSX Manager was a singleton, and HA was achieved using multiple NSX Controllers. Because the standalone NSX Controller component is no longer used, to achieve HA you need to deploy three NSX Managers. Refer to the corresponding step in the Installing NSX-T v2.4 Data Center topic.

Note: When you add additional NSX Managers, the system prompts you to enter a Compute Manager, which is a vCenter Server. Refer to the corresponding step in the topic Installing NSX-T v2.4 Data Center.

Step 5: Configure the NSX Manager VIP for the NSX Management Cluster or Deploy a Load Balancer

Because you have deployed two additional NSX Managers (for a total of three), you need a single access point for the NSX Manager interface. Do one of the following:

  • If you need to scale, provision a load balancer for the NSX-T Management Cluster.
  • If you do not require scalability, configure a cluster VIP to achieve HA for the NSX-T Management Cluster.

A sketch of setting the cluster VIP through the NSX-T API follows this list.
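
If you choose the cluster VIP option, you can assign the VIP through the NSX-T Manager API. The following is a minimal sketch, assuming admin credentials and an NSX Manager reachable at nsx-mgr-1.example.com; the VIP address 10.0.0.50 is a placeholder, and you should confirm the API path against the NSX-T v2.4 API documentation.

  # Sketch: assign a virtual IP to the NSX-T Management Cluster (placeholder address and credentials).
  curl -k -u 'admin:PASSWORD' -X POST \
    "https://nsx-mgr-1.example.com/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=10.0.0.50"

  # Confirm that the VIP was applied.
  curl -k -u 'admin:PASSWORD' \
    "https://nsx-mgr-1.example.com/api/v1/cluster/api-virtual-ip"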

Step 6: Generate, Import, and Register a New NSX Manager CA Cert

Both the BOSH Director tile and the PKS tile expect the NSX Manager CA certificate. However, the current NSX Manager CA certificate is associated with the original NSX Manager IP address. You need to generate a new NSX Manager CA cert using the load balancer or VIP address, then register this certificate with NSX-T using the appropriate API.

If you have assigned a Virtual IP Address and Certificate to the NSX-T Management Cluster, see Configure a VIP and Cluster Certificate for the NSX-T v2.4 Management Cluster for instructions to generate, import, and register a New NSX Manager CA Cert for the VIP.

If you have deployed a load balancer, see Provision a Load Balancer for the NSX-T v2.4 Management Cluster for instructions to generate, import, and register a New NSX Manager CA Cert for the load balancer.
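
As an illustration, the following sketch generates a certificate whose subject and subject alternative name are the cluster VIP, imports it into NSX-T, and registers it as the cluster certificate. The VIP address 10.0.0.50, file names, and credentials are placeholders, and the API paths are taken from the NSX-T v2.4 API; follow the topics referenced above for the exact procedure for your environment.

  # Sketch: generate a certificate for the cluster VIP (placeholder address; -addext requires OpenSSL 1.1.1 or later).
  openssl req -newkey rsa:2048 -nodes -x509 -days 365 \
    -keyout nsx-vip.key -out nsx-vip.crt \
    -subj "/CN=10.0.0.50/O=PKS" \
    -addext "subjectAltName=IP:10.0.0.50"

  # Convert the PEM files to single-line strings (literal \n separators) for the JSON request body.
  CERT_PEM=$(awk 'BEGIN{ORS="\\n"} 1' nsx-vip.crt)
  KEY_PEM=$(awk 'BEGIN{ORS="\\n"} 1' nsx-vip.key)

  # Import the certificate and key into NSX-T; the response includes the certificate ID.
  curl -k -u 'admin:PASSWORD' -X POST \
    -H 'Content-Type: application/json' \
    -d "{\"pem_encoded\": \"${CERT_PEM}\", \"private_key\": \"${KEY_PEM}\"}" \
    "https://10.0.0.50/api/v1/trust-management/certificates?action=import"

  # Register the imported certificate (by the ID returned above) as the cluster certificate.
  curl -k -u 'admin:PASSWORD' -X POST \
    "https://10.0.0.50/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=CERTIFICATE_ID"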

Step 7: Update the PKS and BOSH Tiles with the New NSX Manager Cluster Certificate and Load Balancer or VIP Address

The last procedure in the upgrade process is to update the BOSH Director tile and the PKS tile with the new VIP address (or load balancer address) for the NSX Management Cluster and the new NSX Manager CA certificate that uses that address. Apply the changes and ensure that the Upgrade all clusters errand is selected, then deploy PKS.

To update the BOSH tile:

  1. Log into Ops Manager.
  2. In the BOSH Director tile, select the vCenter Configuration tab.
  3. In the NSX Address field, enter the VIP address for the NSX Management Cluster.
  4. In the NSX CA Cert field, enter the new CA certificate for the NSX Management Cluster that uses the VIP address.
  5. Save the BOSH tile changes.

To update the PKS tile:

  1. Log into Ops Manager.
  2. In the PKS tile, select the Networking tab.
  3. In the NSX Manager hostname field, enter the VIP address for the NSX Management Cluster.
  4. In the NSX Manager CA Cert field, enter the new CA certificate for the NSX Management Cluster (that uses the VIP address).
  5. Save the PKS tile changes.

Step 8: Upgrade all Kubernetes Clusters

Once you have updated the PKS and BOSH tiles, apply the changes and make sure the Upgrade all clusters errand runs. This updates the NCP configuration on all Kubernetes clusters with the new NSX-T Management Cluster VIP and CA certificate.

To complete the upgrade:

  1. Go to the Installation Dashboard in Ops Manager.
  2. Click Review Pending Changes.
  3. Expand the Errands list for PKS.
  4. Ensure that the Upgrade all clusters errand is selected.
  5. Click Apply Changes.

Step 9: Verify the Upgrade

Once the upgrade is complete, verify that the NCP configuration on each cluster has been updated automatically with the new VIP rather than an individual NSX-T Manager node IP address.

To do this, run a command similar to the following for each Kubernetes cluster (service-instance_UUID):

bosh ssh master/0 -d service-instance_d9b662d0-23e1-4239-b641-ed20ee62e692

Note the nsx_api_managers address in the NCP configuration. It should be the VIP.
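
A minimal sketch of that check follows. It assumes the NCP configuration lives at /var/vcap/jobs/ncp/config/ncp.ini on the master node, which is typical for PKS-provisioned clusters but worth confirming in your environment; the deployment name is a placeholder.

  # List the PKS cluster deployments (service-instance_UUID) known to BOSH.
  bosh deployments --column=name | grep service-instance_

  # Inspect the NCP configuration on the master node of one cluster (placeholder deployment name).
  bosh -d service-instance_UUID ssh master/0 -c \
    "sudo grep nsx_api_managers /var/vcap/jobs/ncp/config/ncp.ini"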

After the Upgrade

After you complete the upgrade to Enterprise PKS v1.4.x and NSX-T v2.4.x, complete the following verifications and upgrades.

Update PKS and Kubernetes CLIs

Update the PKS and Kubernetes CLIs on any local machine where you run commands that interact with your upgraded version of Enterprise PKS.

To update your CLIs, download and re-install the PKS and Kubernetes CLI distributions that are provided with Enterprise PKS on Pivotal Network.

For more information about installing the CLIs, see the following topics:

  • Installing the PKS CLI
  • Installing the Kubernetes CLI
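
After reinstalling, you can confirm the new CLI versions from the command line:

  # Confirm the reinstalled CLI versions.
  pks --version
  kubectl version --client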

Verify Deployment Health

After you apply changes to the Enterprise PKS tile and the upgrade is complete, verify that your Kubernetes environment is healthy and confirm that NCP is running on the master node VM.

To verify the health of your Kubernetes environment and NCP, see Verifying Deployment Health.
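
As a quick spot check alongside that topic, you can list the processes on each cluster VM and confirm that the ncp process is running on the master node. This is a sketch with a placeholder deployment name; it assumes NCP runs as a BOSH-managed job on the master, which is the usual PKS arrangement.

  # Show per-VM process health for one cluster, including the ncp process on master/0 (placeholder deployment name).
  bosh -d service-instance_UUID instances --ps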


Please send any feedback you have to pks-feedback@pivotal.io.