Upgrade Preparation Checklist for Enterprise PKS v1.4
This topic serves as a checklist for preparing to upgrade Pivotal Container Service (PKS) v1.3 to Enterprise Pivotal Container Service (Enterprise PKS) v1.4.
This topic contains important preparation steps that you must follow before beginning your upgrade. Failure to follow these instructions may jeopardize your existing deployment data and cause the upgrade to fail.
After completing the steps in this topic, you can continue to Upgrading Enterprise PKS. If you are upgrading PKS for environments using vSphere with NSX-T, continue to Upgrading Enterprise PKS with NSX-T.
We recommend backing up your PKS deployment before upgrading so that you can restore it in case of failure.
If you are upgrading PKS for environments using vSphere with NSX-T, back up your environment using the procedures in the following topics:
- Backup PKS
- Backup NSX-T
- Backup vCenter
Note: If you choose not to back up PKS, NSX-T, or vCenter, we recommend backing up the NSX-T and NSX-T Container Plugin (NCP) logs.
If you are upgrading PKS for any other IaaS, back up the PKS v1.3 control plane. For more information, see Backing Up and Restoring PKS.
Review the Release Notes for Enterprise PKS v1.4.
Pod Security Policies (PSPs) are a Kubernetes security feature available in PKS starting with v1.4.0. For more information about PSPs in Enterprise PKS, see the Pod Security Policy documentation.
When you install or upgrade to Enterprise PKS v1.4.0, PSPs are not enabled by default. If you want to enable PSPs for a new or existing cluster, you must define the necessary RBAC objects (role and binding) and PSP before you deploy the cluster; otherwise, users will not be able to access the Kubernetes cluster after deployment. For more information, see Enabling PSPs.
Enterprise PKS provides a PSP named `pks-restricted` that you can leverage, or you can define your own. You can also leverage one of the cluster roles already in the system. At a minimum, you must define the proper cluster role binding so that users can access a cluster with PSPs enabled. For more information, see Configuring PSP for Developers to Use.
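As a sketch of the RBAC objects described above, the following writes a `ClusterRole` that allows the `use` verb on the built-in `pks-restricted` PSP, plus a `ClusterRoleBinding` granting it to all authenticated users. The role and binding names, the output filename, and the choice of `system:authenticated` as the subject are hypothetical examples, not values from this documentation:

```shell
# Write a ClusterRole allowing "use" of the pks-restricted PSP, plus a
# ClusterRoleBinding granting it to all authenticated users.
# The role name, binding name, and subject group are hypothetical examples.
cat > psp-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted-role        # hypothetical name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["pks-restricted"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted-binding     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
EOF

# Apply before deploying the cluster, for example:
# kubectl apply -f psp-rbac.yaml
```

Remember to apply these objects before cluster deployment, as described above.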
Review What Happens During PKS Upgrades, and evaluate your workload capacity and uptime requirements.
View your workload resource usage in Dashboard. For more information, see Accessing Dashboard.
If workers are operating too close to their capacity, the upgrade can fail. To prevent workload downtime during a cluster upgrade, we recommend running your workload on at least three worker VMs, using multiple replicas of your workloads spread across those VMs. For more information, see Maintaining Workload Uptime.
If your clusters are near capacity for your existing infrastructure, we recommend scaling up your clusters before you upgrade. Scale up your cluster by running `pks resize`, or create a cluster using a larger plan. For more information, see Scaling Existing Clusters.
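For example, a resize of a hypothetical cluster named `my-cluster` to five worker nodes might look like the following. The snippet only builds and prints the command so it can be reviewed before being run against a live PKS environment:

```shell
# Hypothetical cluster name and target worker count.
CLUSTER_NAME="my-cluster"
TARGET_WORKERS=5

# Build the resize command; print it rather than executing it,
# so it can be reviewed before being run for real.
RESIZE_CMD="pks resize ${CLUSTER_NAME} --num-nodes ${TARGET_WORKERS}"
echo "${RESIZE_CMD}"
```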
Verify that your Kubernetes environment is healthy. To verify the health of your Kubernetes environment, see Verifying Deployment Health.
If you are upgrading PKS for environments using vSphere with NSX-T, perform the following steps:
- Verify that the vSphere datastores have enough space.
- Verify that the vSphere hosts have enough memory.
- Verify that there are no alarms in vSphere.
- Verify that the vSphere hosts are in a good state.
- Verify that NSX Edge is configured for high availability using Active/Standby mode.
Note: Workloads in your Kubernetes cluster are unavailable while the NSX Edge nodes run the upgrade unless you configure NSX Edge for high availability. For more information, see the Configure NSX Edge for High Availability (HA) section of Preparing NSX-T Before Deploying PKS.
Clean up previous failed attempts to delete PKS clusters with the PKS Command Line Interface (PKS CLI) by performing the following steps:
View your deployed clusters by running the following command:

$ pks clusters

If the `Status` of any cluster displays as `FAILED`, continue to the next step. If no cluster displays as `FAILED`, no action is required. Continue to the next section.
Perform the procedures in Cannot Re-Create a Cluster that Failed to Deploy to clean up the failed BOSH deployment.
View your deployed clusters again by running `pks clusters`. If any clusters remain in a `FAILED` state, contact PKS Support.
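The check above can be scripted. As a sketch, assuming `pks clusters` prints one row per cluster with the status in the second whitespace-separated column, the following filters for `FAILED` entries. It is run here against hypothetical captured sample output rather than a live PKS API, so the filter itself can be demonstrated:

```shell
# Print the name (first column) of any row whose status column is FAILED.
# In practice the input would come from `pks clusters`; here we use
# hypothetical sample output.
failed_clusters() {
  awk '$2 == "FAILED" { print $1 }'
}

# Hypothetical sample of `pks clusters` output (Name, Status columns).
SAMPLE_OUTPUT='cluster-a  succeeded
cluster-b  FAILED
cluster-c  succeeded'

echo "$SAMPLE_OUTPUT" | failed_clusters
```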
Verify that existing Kubernetes clusters have unique external hostnames by checking for multiple Kubernetes clusters with the same external hostname. Perform the following steps:
Log in to the PKS CLI. For more information, see Logging in to Enterprise PKS. You must log in with an account that has the UAA scope `pks.clusters.admin`. For more information about UAA scopes, see Managing Users in Enterprise PKS with UAA.
View your deployed PKS clusters by running the following command:

$ pks clusters
For each deployed cluster, run `pks cluster CLUSTER-NAME` to view the details of the cluster. For example:
$ pks cluster my-cluster
Examine the output to verify that the `Kubernetes Master Host` is unique for each cluster.
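A simple way to spot duplicate external hostnames is to collect the master host of each cluster and look for repeats. As a sketch, assuming you have gathered `cluster-name master-host` pairs from the `pks cluster` output for each cluster, the following prints any host that appears more than once; the sample pairs below are hypothetical:

```shell
# Given "cluster-name master-host" pairs on stdin, print any master host
# that appears more than once. In practice the pairs would be collected by
# running `pks cluster CLUSTER-NAME` for each cluster; the sample below
# is hypothetical.
duplicate_hosts() {
  awk '{ print $2 }' | sort | uniq -d
}

SAMPLE_PAIRS='cluster-a api.cluster-a.example.com
cluster-b api.shared.example.com
cluster-c api.shared.example.com'

echo "$SAMPLE_PAIRS" | duplicate_hosts
```

An empty result means every cluster's external hostname is unique.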
Verify your current PKS proxy configuration by performing the following steps:
Check whether an existing proxy is enabled:
- Log in to Ops Manager.
- Click the Pivotal Container Service tile.
- Click Networking.
- If HTTP/HTTPS Proxy is Disabled, no action is required. Continue to the next section. If HTTP/HTTPS Proxy is Enabled, continue to the next step.
If the existing No Proxy field contains any of the following values, or you plan to add any of the following values, contact PKS Support:
- Hostnames containing dashes
Upgrading to Enterprise PKS v1.4.x includes a MySQL migration from MariaDB to Percona, which requires allocating additional disk space to the PKS VM.
Before you upgrade to PKS 1.4.x, you must increase the size of persistent disk for the PKS VM. For more information, see Upgrade Fails Due to Insufficient Disk Space in the Known Issues.
A PKS upgrade will run without ever completing if any Kubernetes app has `maxUnavailable` set to `0`. To ensure that no apps have `maxUnavailable` set to `0`, perform the following steps:
Use the Kubernetes CLI, `kubectl`, to verify the `PodDisruptionBudget` as the cluster administrator. Run the following command:

$ kubectl get poddisruptionbudgets --all-namespaces
Examine the output. Verify that no app displays `0` in the `MAX UNAVAILABLE` column.
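This check can also be scripted. As a sketch, assuming the fourth whitespace-separated field of each data row is the `MAX UNAVAILABLE` value, as in typical `kubectl get poddisruptionbudgets --all-namespaces` output, the following flags any PodDisruptionBudget with that value set to `0`. It is run here against hypothetical sample output rather than a live cluster:

```shell
# Flag PodDisruptionBudgets whose MAX UNAVAILABLE value is 0.
# Assumes the fourth whitespace-separated field of each data row is the
# MAX UNAVAILABLE value; the sample output below is hypothetical.
zero_max_unavailable() {
  awk 'NR > 1 && $4 == "0" { print $1 "/" $2 }'
}

SAMPLE='NAMESPACE   NAME        MIN-AVAILABLE   MAX-UNAVAILABLE   ALLOWED-DISRUPTIONS   AGE
default     app-a-pdb   N/A             0                 0                     5d
default     app-b-pdb   2               N/A               1                     5d'

echo "$SAMPLE" | zero_max_unavailable
```

Any namespace/name pair the filter prints should have its PodDisruptionBudget adjusted before you start the upgrade.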
Please send any feedback you have to email@example.com.