Upgrade Preparation Checklist for Tanzu Kubernetes Grid Integrated Edition v1.9
This topic serves as a checklist for preparing to upgrade VMware Tanzu Kubernetes Grid Integrated Edition from v1.8 to v1.9.
This topic lists steps that you must follow before beginning your upgrade. Failure to follow these instructions may jeopardize your existing deployment data and cause the upgrade to fail.
After completing the steps in this topic, continue to Upgrading Tanzu Kubernetes Grid Integrated Edition (Flannel Networking) or Upgrading Tanzu Kubernetes Grid Integrated Edition (NSX-T Networking).
VMware recommends backing up your Tanzu Kubernetes Grid Integrated Edition deployment and workloads before upgrading. To back up Tanzu Kubernetes Grid Integrated Edition, see Backing Up and Restoring Tanzu Kubernetes Grid Integrated Edition.
If you have not already done so, review About Tanzu Kubernetes Grid Integrated Edition Upgrades.
Plan your upgrade based on your workload capacity and uptime requirements.
Review the Release Notes for Tanzu Kubernetes Grid Integrated Edition v1.9.
To determine the upgrade order for your Tanzu Kubernetes Grid Integrated Edition environment, review Upgrade Order for Tanzu Kubernetes Grid Integrated Edition Environments on vSphere.
Coordinate the Tanzu Kubernetes Grid Integrated Edition upgrade with cluster admins and users. During the upgrade:
- Their workloads will remain active and accessible.
- They will be unable to perform cluster management functions, including creating, resizing, updating, and deleting clusters.
- They will be unable to log in to TKGI or use the TKGI CLI and other TKGI control plane services.
Note: Cluster admins should not start any cluster management tasks right before an upgrade. Wait for cluster operations to complete before upgrading.
Tanzu Kubernetes Grid Integrated Edition v1.9 does not support clusters running versions of TKGI earlier than v1.8.
Before you upgrade from Tanzu Kubernetes Grid Integrated Edition v1.8 to v1.9, you must upgrade all of your TKGI-provisioned clusters to v1.8.
To upgrade TKGI-provisioned clusters:
Check the version of your clusters by running the following command:
tkgi clusters
If one or more of your clusters are running a version of TKGI earlier than v1.8, upgrade the clusters. For instructions, see Upgrading Clusters.
It is critical that you confirm that a cluster’s resource usage is within the recommended maximum limits before upgrading the cluster.
VMware Tanzu Kubernetes Grid Integrated Edition upgrades a cluster by upgrading its master and worker nodes individually. To upgrade a master node, the process redistributes the node's workload, stops the node, upgrades it, and then restores its workload. This redistribution of workloads increases resource usage on the remaining nodes during the upgrade process.
If a Kubernetes cluster master VM is operating too close to capacity, the upgrade can fail.
Warning: Downtime is required to repair a cluster failure resulting from upgrading an overloaded Kubernetes cluster master VM.
To prevent workload downtime during a cluster upgrade, complete the following before upgrading a cluster:
Ensure none of the master VMs being upgraded will become overloaded during the cluster upgrade. See Master Node VM Size for more information.
Review the cluster’s workload resource usage in Dashboard. For more information, see Accessing Dashboard.
Scale up the cluster if it is near capacity on its existing infrastructure. Scale up your cluster by running tkgi resize or create a cluster using a larger plan. For more information, see Changing Cluster Configurations.
Run the cluster’s workloads on at least three worker VMs using multiple replicas of your workloads spread across those VMs. For more information, see Maintaining Workload Uptime.
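As an illustration of spreading replicas across worker nodes, the following Deployment manifest is a minimal sketch; the app name, labels, and image are hypothetical placeholders, not values from this documentation:

```yaml
# Hypothetical Deployment: three replicas spread across worker nodes,
# so draining any single node during the upgrade leaves replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Prefer placing each replica on a different worker node.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: nginx:1.21       # placeholder image
```

With three replicas spread this way, cordoning and draining one worker node at a time during the upgrade keeps at least two replicas serving traffic.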
Verify that your Kubernetes environment is healthy. To verify the health of your Kubernetes environment, see Verifying Deployment Health.
If you are upgrading Tanzu Kubernetes Grid Integrated Edition for environments using vSphere with NSX-T, perform the following steps:
- Verify that the vSphere datastores have enough space.
- Verify that the vSphere hosts have enough memory.
- Verify that there are no alarms in vSphere.
- Verify that the vSphere hosts are in a good state.
- Verify that NSX Edge is configured for high availability using Active-Standby mode.
Note: Workloads in your Kubernetes cluster are unavailable while the NSX Edge nodes run the upgrade unless you configure NSX Edge for high availability. For more information, see the Configure NSX Edge for High Availability (HA) section of Preparing NSX-T Before Deploying Tanzu Kubernetes Grid Integrated Edition.
Clean up or fix any previous failed attempts to create TKGI clusters with the TKGI Command Line Interface (TKGI CLI) by performing the following steps:
View your deployed clusters by running the following command:
tkgi clusters
If the Status of any cluster displays as FAILED, continue to the next step. If no cluster displays as FAILED, no action is required. Continue to the next section.
To troubleshoot and fix failed clusters, perform the procedure in Cluster Creation Fails.
To clean up failed BOSH deployments related to failed clusters, perform the procedure in Cannot Re-Create a Cluster that Failed to Deploy.
After fixing and cleaning up any failed clusters, view your deployed clusters again by running tkgi clusters.
For more information about troubleshooting and fixing failed clusters, see the Knowledge Base.
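The FAILED check above can be sketched as a small shell filter. The sample output in the heredoc is hypothetical (cluster names, UUIDs, and column layout are illustrative assumptions); in a real environment, pipe the output of tkgi clusters instead:

```shell
# Hypothetical sample of `tkgi clusters` output; replace the heredoc
# with real command output in your environment.
sample="$(cat <<'EOF'
Name       Plan Name  UUID                                  Status     Action
cluster-a  small      11111111-1111-1111-1111-111111111111  succeeded  CREATE
cluster-b  small      22222222-2222-2222-2222-222222222222  FAILED     CREATE
EOF
)"

# Print the name of every cluster whose Status column reads FAILED.
# Fields: 1=Name 2=Plan 3=UUID 4=Status 5=Action (header row skipped).
failed_clusters="$(printf '%s\n' "$sample" | awk 'NR > 1 && $4 == "FAILED" { print $1 }')"
echo "$failed_clusters"
```

Any cluster name printed needs the troubleshooting and cleanup procedures above before you upgrade.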
Verify that existing Kubernetes clusters have unique external hostnames by checking for multiple Kubernetes clusters with the same external hostname. Perform the following steps:
Log in to the TKGI CLI. For more information, see Logging in to Tanzu Kubernetes Grid Integrated Edition. You must log in with an account that has the UAA scope of
pks.clusters.admin. For more information about UAA scopes, see Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA.
View your deployed TKGI clusters by running the following command:
tkgi clusters
For each deployed cluster, run tkgi cluster CLUSTER-NAME to view the details of the cluster. For example:
$ tkgi cluster my-cluster
Examine the output to verify that the Kubernetes Master Host is unique for each cluster.
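The uniqueness check can be sketched as follows; the hostname list below is a hypothetical example of the Kubernetes Master Host values you would collect from each cluster's tkgi cluster output:

```shell
# Hypothetical list of Kubernetes Master Host values, one per cluster,
# collected by running `tkgi cluster CLUSTER-NAME` for each cluster.
hosts="$(cat <<'EOF'
cluster-a.tkgi.local
cluster-b.tkgi.local
cluster-a.tkgi.local
EOF
)"

# Any hostname printed here is shared by more than one cluster and
# must be made unique before upgrading.
duplicates="$(printf '%s\n' "$hosts" | sort | uniq -d)"
echo "$duplicates"
```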
Verify your current TKGI proxy configuration by performing the following steps:
Check whether an existing proxy is enabled:
- Log in to Ops Manager.
- Click the VMware Tanzu Kubernetes Grid Integrated Edition tile.
- Click Networking.
- If HTTP/HTTPS Proxy is Disabled, no action is required. Continue to the next section. If HTTP/HTTPS Proxy is Enabled, continue to the next step.
If the existing No Proxy field contains any of the following values, or you plan to add any of the following values, contact Support:
- Hostnames containing dashes
Tanzu Kubernetes Grid Integrated Edition upgrades can run without ever completing if any Kubernetes app has a PodDisruptionBudget with maxUnavailable set to 0.
To ensure that no apps have maxUnavailable set to 0:
Run the following kubectl command as the cluster administrator to verify the PodDisruptionBudget configuration:
kubectl get poddisruptionbudgets --all-namespaces
Examine the output to verify that no app displays 0 in the MAX UNAVAILABLE column.
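The PodDisruptionBudget check can be automated with a short shell filter. The sample below is a hypothetical rendering of kubectl get poddisruptionbudgets table output (namespace and PDB names are made up); pipe the real command output instead:

```shell
# Hypothetical sample of `kubectl get poddisruptionbudgets --all-namespaces`
# output; replace the heredoc with real command output in your environment.
sample="$(cat <<'EOF'
NAMESPACE   NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
default     app-a-pdb   N/A             0                 0                     10d
default     app-b-pdb   N/A             1                 1                     10d
EOF
)"

# Print the name of every PodDisruptionBudget whose MAX UNAVAILABLE is 0.
# Data-row fields: 1=namespace 2=name 3=min-available 4=max-unavailable.
blocked_pdbs="$(printf '%s\n' "$sample" | awk 'NR > 1 && $4 == "0" { print $2 }')"
echo "$blocked_pdbs"
```

A PodDisruptionBudget with maxUnavailable set to 0 prevents the drain of any node hosting that app's pods, which is what causes the upgrade to hang.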
During the Tanzu Kubernetes Grid Integrated Edition upgrade process, worker nodes are cordoned and drained. Workloads can prevent worker nodes from draining and cause the upgrade to fail or hang.
To prevent hanging cluster upgrades, you can configure default node drain behavior in the Tanzu Kubernetes Grid Integrated Edition tile or with the TKGI CLI.
The new default behavior takes effect during the next upgrade, not immediately after configuring the behavior.
To configure node drain behavior in the Tanzu Kubernetes Grid Integrated Edition tile, see Worker Node Hangs Indefinitely in Troubleshooting.
To configure default node drain behavior with the TKGI CLI:
View the current node drain behavior by running the following command:
tkgi cluster CLUSTER-NAME --details
Where CLUSTER-NAME is the name of your cluster. For example:
$ tkgi cluster my-cluster --details
Name: my-cluster
Plan Name: small
UUID: f55ed6c4-c0a7-451d-b735-56c89fdb2ad7
Last Action: CREATE
Last Action State: succeeded
Last Action Description: Instance provisioning completed
Kubernetes Master Host: my-cluster.tkgi.local
Kubernetes Master Port: 8443
Worker Nodes: 3
Kubernetes Master IP(s): 10.196.219.88
Network Profile Name:
Kubernetes Settings Details:
Set by Cluster:
Kubelet Node Drain timeout (mins) (kubelet-drain-timeout): 10
Kubelet Node Drain grace-period (mins) (kubelet-drain-grace-period): 10
Kubelet Node Drain force (kubelet-drain-force): true
Set by Plan:
Kubelet Node Drain force-node (kubelet-drain-force-node): true
Kubelet Node Drain ignore-daemonsets (kubelet-drain-ignore-daemonsets): true
Kubelet Node Drain delete-local-data (kubelet-drain-delete-local-data): true
Configure the default node drain behavior by running the following command:
tkgi update-cluster CLUSTER-NAME FLAG
Where:
- CLUSTER-NAME is the name of your cluster.
- FLAG is an action flag for updating the node drain behavior.
For example:
$ tkgi update-cluster my-cluster --kubelet-drain-timeout 1 --kubelet-drain-grace-period 5
Update summary for cluster my-cluster:
Kubelet Drain Timeout: 1
Kubelet Drain Grace Period: 5
Are you sure you want to continue? (y/n): y
Use 'tkgi cluster my-cluster' to monitor the state of your cluster
For a list of the available action flags for setting node drain behavior, see tkgi update-cluster in TKGI CLI.
Warning: In TKGI v1.9.4 and earlier, do not use tkgi update-cluster on clusters configured with a network profile CNI configuration. For more information, see The TKGI CLI Resize and Update Cluster Commands Remove the Network Profile CNI Configuration from a Cluster in the Release Notes.