Upgrade Preparation Checklist for Enterprise PKS v1.6
This topic serves as a checklist for preparing to upgrade VMware Enterprise PKS v1.5 to VMware Enterprise PKS v1.6.
This topic contains important preparation steps that you must follow before beginning your upgrade. Failure to follow these instructions may jeopardize your existing deployment data and cause the upgrade to fail.
After completing the steps in this topic, you can continue to Upgrading Enterprise PKS. If you are upgrading Enterprise PKS for environments using vSphere with NSX-T, continue to Upgrading Enterprise PKS with NSX-T.
We recommend backing up your Enterprise PKS deployment before upgrading.
If you are upgrading Enterprise PKS for environments using vSphere with NSX-T, back up your environment using the procedures in the following topics:
- Back up Enterprise PKS
- Back up NSX-T
- Back up vCenter
Note: If you choose not to back up Enterprise PKS, NSX-T, or vCenter, we recommend backing up the NSX-T and NSX-T Container Plugin (NCP) logs.
If you are upgrading Enterprise PKS for any other IaaS, back up the existing Enterprise PKS control plane. For more information, see Backing Up and Restoring Enterprise PKS.
If you have not already done so, review What Happens During Enterprise PKS Upgrades.
Plan your upgrade based on your workload capacity and uptime requirements.
Review the Release Notes for Enterprise PKS v1.6.
Before you upgrade to Enterprise PKS v1.6, you must upgrade all clusters to the same patch version of Enterprise PKS v1.5.x.
Enterprise PKS supports running clusters on the current version (N) and the previous version (N - 1). For example, you can upgrade to Enterprise PKS v1.6.0 and run Enterprise PKS v1.5.x clusters, but you cannot upgrade to Enterprise PKS v1.6.0 while running an Enterprise PKS v1.5.1 cluster alongside a v1.5.0 or v1.4.3 cluster. After the upgrade, clusters on older versions can no longer be managed in Enterprise PKS v1.6.0.
To check the version of existing clusters and whether an upgrade is available, run pks clusters.
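As a rough sketch of that check, you could scan the cluster listing for mixed v1.5.x patch versions before upgrading. The column layout below is hypothetical, not captured from a live environment; the real `pks clusters` output may differ by CLI version:

```shell
# Sketch: detect mixed PKS versions across clusters before upgrading.
# The sample listing below is hypothetical; on a live environment,
# capture the real listing with:  pks clusters
sample_output='Name       Plan Name  UUID                                  Status     Action  Version
cluster-a  small      aaaa1111-0000-0000-0000-000000000000  succeeded  CREATE  1.5.1
cluster-b  small      bbbb2222-0000-0000-0000-000000000000  succeeded  CREATE  1.5.0'

# Extract the version column (last field) and count distinct values.
distinct_versions=$(printf '%s\n' "$sample_output" | tail -n +2 | awk '{print $NF}' | sort -u | wc -l)

if [ "$distinct_versions" -gt 1 ]; then
  echo "Clusters run mixed PKS versions; upgrade all to the same v1.5.x patch first."
else
  echo "All clusters run the same PKS version."
fi
```

The same field extraction works on any fixed-column listing where the version is the last column.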
To upgrade one or more clusters, see Upgrading Clusters.
View your workload resource usage in Dashboard. For more information, see Accessing Dashboard.
If workers are operating too close to their capacity, the upgrade can fail.
To prevent workload downtime during a cluster upgrade, we recommend running your workload on at least three worker VMs, using multiple replicas of your workloads spread across those VMs. For more information, see Maintaining Workload Uptime.
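One way to spot workers operating too close to capacity is to scan node metrics for high CPU usage. The listing below is a hypothetical sample, not live output; on a real cluster you would capture it with `kubectl top nodes` (which requires metrics to be available):

```shell
# Sketch: flag worker nodes whose CPU usage exceeds 80%.
# The sample below is hypothetical; on a live cluster, capture it with:
#   kubectl top nodes
sample='NAME      CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
worker1   950m        95%   3000Mi         75%
worker2   400m        40%   1500Mi         37%'

# Strip the % sign from the CPU% column and print nodes above the threshold.
hot_nodes=$(printf '%s\n' "$sample" | tail -n +2 | awk '{gsub(/%/,"",$3); if ($3+0 > 80) print $1}')
echo "Workers above 80% CPU: ${hot_nodes:-none}"
```

Nodes flagged this way are candidates for scaling up before the upgrade, since cordoning and draining shifts their load onto the remaining workers.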
If your clusters are near capacity for your existing infrastructure, we recommend scaling up your clusters before you upgrade. Scale up your cluster by running pks resize, or create a cluster using a larger plan. For more information, see Scaling Existing Clusters.
Verify that your Kubernetes environment is healthy. To verify the health of your Kubernetes environment, see Verifying Deployment Health.
If you are upgrading Enterprise PKS for environments using vSphere with NSX-T, perform the following steps:
- Verify that the vSphere datastores have enough space.
- Verify that the vSphere hosts have enough memory.
- Verify that there are no alarms in vSphere.
- Verify that the vSphere hosts are in a good state.
- Verify that NSX Edge is configured for high availability using Active/Standby mode.
Note: Workloads in your Kubernetes cluster are unavailable while the NSX Edge nodes run the upgrade unless you configure NSX Edge for high availability. For more information, see the Configure NSX Edge for High Availability (HA) section of Preparing NSX-T Before Deploying Enterprise PKS.
Clean up or fix any previous failed attempts to create PKS clusters with the PKS Command Line Interface (PKS CLI) by performing the following steps:
View your deployed clusters by running the following command:
pks clusters
If the Status of any cluster displays as FAILED, continue to the next step. If no cluster displays as FAILED, no action is required. Continue to the next section.
To troubleshoot and fix failed clusters, perform the procedure in Cluster Creation Fails.
To clean up failed BOSH deployments related to failed clusters, perform the procedure in Cannot Re-Create a Cluster that Failed to Deploy.
After fixing and cleaning up any failed clusters, view your deployed clusters again by running pks clusters.
For more information about troubleshooting and fixing failed clusters, see the Pivotal Support Knowledge Base.
Verify that existing Kubernetes clusters have unique external hostnames by checking for multiple Kubernetes clusters with the same external hostname. Perform the following steps:
Log in to the PKS CLI. For more information, see Logging in to Enterprise PKS. You must log in with an account that has the UAA scope pks.clusters.admin. For more information about UAA scopes, see Managing Enterprise PKS Users with UAA.
View your deployed PKS clusters by running the following command:
pks clusters
For each deployed cluster, run pks cluster CLUSTER-NAME to view the details of the cluster. For example:
$ pks cluster my-cluster
Examine the output to verify that the Kubernetes Master Host is unique for each cluster.
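As a sketch of that uniqueness check, assuming you have collected the Kubernetes Master Host value from each cluster's `pks cluster` output, duplicates can be found with a sort. The hostnames below are hypothetical:

```shell
# Sketch: find external hostnames shared by more than one cluster.
# Collect one "Kubernetes Master Host" value per cluster, e.g. by running
#   pks cluster CLUSTER-NAME
# for each cluster; the values below are hypothetical.
hosts='cluster-a.pks.local
cluster-b.pks.local
cluster-a.pks.local'

# `uniq -d` prints only lines that appear more than once in sorted input.
duplicates=$(printf '%s\n' "$hosts" | sort | uniq -d)
if [ -n "$duplicates" ]; then
  echo "Duplicate external hostnames found: $duplicates"
else
  echo "All external hostnames are unique."
fi
```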
Verify your current PKS proxy configuration by performing the following steps:
Check whether an existing proxy is enabled:
- Log in to Ops Manager.
- Click the Pivotal Container Service tile.
- Click Networking.
- If HTTP/HTTPS Proxy is Disabled, no action is required. Continue to the next section. If HTTP/HTTPS Proxy is Enabled, continue to the next step.
If the existing No Proxy field contains any of the following values, or you plan to add any of the following values, contact PKS Support:
- Hostnames containing dashes, such as
Enterprise PKS upgrades can run without ever completing if any Kubernetes app has maxUnavailable set to 0. To ensure that no apps have maxUnavailable set to 0, perform the following steps:
Use the Kubernetes CLI, kubectl, to verify the PodDisruptionBudget as the cluster administrator. Run the following command:
kubectl get poddisruptionbudgets --all-namespaces
Examine the output. Verify that no app has maxUnavailable set to 0.
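That scan can be sketched over sample output from the kubectl command above. The namespaces and budget names below are hypothetical, and the column layout may vary by Kubernetes version:

```shell
# Sketch: flag PodDisruptionBudgets whose MAX UNAVAILABLE is 0, since
# they prevent node drain during the upgrade. On a live cluster, capture
# the listing with:
#   kubectl get poddisruptionbudgets --all-namespaces
sample='NAMESPACE  NAME       MIN-AVAILABLE  MAX-UNAVAILABLE  ALLOWED-DISRUPTIONS  AGE
default    app-a-pdb  N/A            0                0                    10d
default    app-b-pdb  N/A            1                1                    10d'

# Column 4 is MAX-UNAVAILABLE; print the budget name when it is 0.
blocking=$(printf '%s\n' "$sample" | tail -n +2 | awk '$4 == "0" {print $2}')
echo "PDBs that block node drain: ${blocking:-none}"
```

Any budget flagged this way should be updated (for example, to a nonzero maxUnavailable or a suitable minAvailable) before starting the upgrade.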
During the Enterprise PKS tile upgrade process, worker nodes are cordoned and drained. Workloads can prevent worker nodes from draining and cause the upgrade to fail or hang.
To prevent hanging cluster upgrades, you can use the PKS CLI to configure the default node drain behavior. The new default behavior takes effect during the next upgrade, not immediately after configuring the behavior.
Note: You can also configure node drain behavior in the Enterprise PKS tile. For information about configuring default node drain behavior in the Enterprise PKS tile, see Worker Node Hangs Indefinitely in Troubleshooting.
To configure default node drain behavior, do the following:
View the current node drain behavior by running the following command:
pks cluster CLUSTER-NAME --details
Where CLUSTER-NAME is the name of your cluster. For example:
$ pks cluster my-cluster --details
Name:                     my-cluster
Plan Name:                small
UUID:                     f55ed6c4-c0a7-451d-b735-56c89fdb2ad7
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   my-cluster.pks.local
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.196.219.88
Network Profile Name:
Kubernetes Settings Details:
  Set by Cluster:
    Kubelet Node Drain timeout (mins) (kubelet-drain-timeout): 10
    Kubelet Node Drain grace-period (mins) (kubelet-drain-grace-period): 10
    Kubelet Node Drain force (kubelet-drain-force): true
  Set by Plan:
    Kubelet Node Drain force-node (kubelet-drain-force-node): true
    Kubelet Node Drain ignore-daemonsets (kubelet-drain-ignore-daemonsets): true
    Kubelet Node Drain delete-local-data (kubelet-drain-delete-local-data): true
Configure the default node drain behavior by running the following command:
pks update-cluster CLUSTER-NAME FLAG
Where:
- CLUSTER-NAME is the name of your cluster.
- FLAG is an action flag for updating the node drain behavior.
For example:
$ pks update-cluster my-cluster --kubelet-drain-timeout 1 --kubelet-drain-grace-period 5
Update summary for cluster my-cluster:
  Kubelet Drain Timeout: 1
  Kubelet Drain Grace Period: 5
Are you sure you want to continue? (y/n): y
Use 'pks cluster my-cluster' to monitor the state of your cluster
For a list of the available action flags for setting node drain behavior, see pks update-cluster in PKS CLI.
If you are running Enterprise PKS on Azure, you must add the "Microsoft.Compute/virtualMachines/read" action to the worker node managed identity.
Note: You do not need to modify the worker node managed identity role if you are running Enterprise PKS on AWS, GCP, vSphere, or vSphere with NSX-T. Modifying the role for Azure is a requirement as of Kubernetes v1.14.5.
To add the "Microsoft.Compute/virtualMachines/read" action, do the following:
List your roles using the Azure CLI. For example:
$ az role definition list --custom-role-only true -o json
Retrieve the definition of the "PKS worker" role using the roleName key. For example:
$ az role definition list --custom-role-only true -o json | jq -r '.[] | select(.roleName=="PKS worker")'
Copy the JSON output to a file and add the "Microsoft.Compute/virtualMachines/read" action to the role's list of actions.
Save your template as pks_worker_role.json.
Run the following command to update the role:
az role definition update --role-definition pks_worker_role.json
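The edit in the steps above can be sketched with jq, which the listing step already uses. The role definition below is a trimmed, hypothetical example; in practice you would start from the JSON retrieved with the az commands above:

```shell
# Sketch: add the virtualMachines/read action to a role definition file.
# The role JSON below is a trimmed, hypothetical example; in practice,
# start from the output of:
#   az role definition list --custom-role-only true -o json \
#     | jq -r '.[] | select(.roleName=="PKS worker")'
cat > pks_worker_role.json <<'EOF'
{
  "roleName": "PKS worker",
  "permissions": [
    { "actions": ["Microsoft.Network/*/read"] }
  ]
}
EOF

# Append the action to the first permissions block's actions array.
jq '.permissions[0].actions += ["Microsoft.Compute/virtualMachines/read"]' \
  pks_worker_role.json > updated.json && mv updated.json pks_worker_role.json

# Then apply the updated template:
#   az role definition update --role-definition pks_worker_role.json
```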
For more information about creating managed identities for Enterprise PKS, see Creating Managed Identities in Azure for Enterprise PKS.