This topic contains release notes for VMware Enterprise PKS v1.7.
Warning: Before installing or upgrading to Enterprise PKS v1.7, review the Breaking Changes below.
Release Date: April 2, 2020
| Element | Details |
|---------|---------|
| Release date | April 2, 2020 |
| Percona XtraDB Cluster (PXC) | v0.22.0 |
| Ops Manager | See VMware Tanzu Network |
| vSphere | See VMware Product Interoperability Matrices |
| NSX-T | v2.5.1, v2.5.0, v2.4.3 |
| Xenial stemcells | See VMware Tanzu Network |
| Windows stemcells | v2019.15 and later |
| Backup and Restore SDK | v1.17.0 |
The supported upgrade paths to Enterprise PKS v1.7.0 are from Enterprise PKS v1.6.0 and later.
This section describes new features and changes in VMware Enterprise PKS v1.7.0.
- Introduces Kubernetes profiles that customize Kubernetes component settings for PKS-provisioned clusters. Kubernetes profiles can include validated configurations supported by the PKS team and unvalidated configurations for evaluation only. For more information, see Using Kubernetes Profiles.
- Supports VMware Tanzu Service Mesh by VMware NSX (Beta).
- Uses TLS v1.2+ with strong ciphers for all internal components except for the `metrics-server` component (see Known Issues).
- Supports resizing and updating Kubernetes clusters that have not been upgraded from the previously installed Enterprise PKS version.
- Splits the PKS control plane into separate PKS API and PKS Database VMs.
- Increases to 65,535 the maximum allowed number of open files for the Docker process on worker nodes.
- Removes the Server SSL Cert AltName field from under the Enterprise PKS tile > UAA > LDAP Server. Enterprise PKS no longer uses this field.
For security, PKS v1.7 does not include the Kubernetes Dashboard Web UI because it uses weak TLS ciphers. Clusters created with PKS v1.7 do not have Dashboard installed on creation.
- You can install Dashboard on new clusters by following the Deploying the Dashboard UI instructions in the Kubernetes documentation.
- Clusters upgraded from PKS v1.6 to v1.7 have leftover, cached dashboard objects.
- The Dashboard service provided by these cached objects may degrade over time, as resize or upgrade operations rebuild node VMs.
- To remove the leftover Dashboard objects from a cluster, run:

```sh
kubectl delete service,deployment,configmap,rolebinding,role,serviceaccount -l k8s-app=kubernetes-dashboard -n kube-system
```
The bump to Kubernetes v1.16 removes some API version definitions in favor of newer, more stable definitions. See Deprecated APIs Removed In 1.16: Here’s What You Need To Know in the Kubernetes Blog. Kubernetes recommends that you:
- Update the following to reference the newer API definitions (see the example manifest after this list):
- Custom integrations and controllers
- Third-party tools such as ingress controllers and continuous delivery systems
- VMware recommends that customers selectively upgrade Kubernetes 1.15.x clusters before upgrading a fleet of clusters. See Upgrade a Single Cluster for how to upgrade clusters selectively.
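As an illustration of the kind of change involved, the sketch below migrates a Deployment from the removed `extensions/v1beta1` API to `apps/v1`, which also makes `spec.selector` mandatory. The manifest, names, and image are hypothetical.

```sh
# Hypothetical manifest update: move the Deployment to apps/v1 and add the
# now-required spec.selector so Kubernetes v1.16 accepts it.
cat > my-app-deployment.yaml <<'EOF'
apiVersion: apps/v1        # was: extensions/v1beta1 (no longer served in v1.16)
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:                # required under apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
EOF
kubectl apply -f my-app-deployment.yaml
```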
Adds new metrics that Enterprise PKS can send to your monitoring service:
- Node Exporter metrics from the PKS API VM and worker nodes in Prometheus format
- Metrics from the Kubernetes controller manager and Kubernetes API server on master nodes
For configuration instructions, see Telegraf in the Installing topic for your IaaS.
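Prometheus-format metrics are plain text, so they are easy to inspect by hand. The sketch below assumes Node Exporter is reachable on its conventional port, 9100; the node address is a placeholder.

```sh
# Fetch the first Prometheus-format samples from a worker node's Node Exporter
# endpoint; 192.0.2.10 is a placeholder address.
curl -s http://192.0.2.10:9100/metrics | head
# Samples look like:
#   node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67
```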
Improves the in-cluster Node Exporter component, which is enabled in the Enterprise PKS tile > In-Cluster Monitoring.
Deprecates the billing database. The billing database is scheduled to be removed in Enterprise PKS v1.9. You can use Telemetry instead of the billing database. For more information about the billing database and Telemetry, see Viewing Usage Data and Telemetry. If you are impacted by this deprecation, please reach out to firstname.lastname@example.org.
- Makes HTTP and HTTPS proxy available on Windows nodes, to support downloading Windows Docker images through a proxy.
  - You must configure a global proxy in the Enterprise PKS tile > Networking pane before you create any Windows workers that use the proxy.
- When upgrading Windows clusters, Enterprise PKS upgrades Windows workers with the latest Windows stemcell.
- Enables Windows pods to access cluster DNS services.
- Windows pods can now consume the same pod CIDR as Linux pods.
- Windows pods can now egress to applications or workloads outside of the pod’s IP subnet.
- Supports highly resilient workloads using stretched clusters. See Solution Guide for Enabling Highly Resilient Kubernetes Workloads Using vSAN Stretched Clusters.
Changes the default topology for newly created clusters from Dedicated Tier-1 Router topology to Shared Tier-1 Router topology.
- This change does not affect the topology of previously-existing clusters, which continue to run during and after upgrade to v1.7.
- You cannot change the topology of an existing cluster from Dedicated Tier-1 to Shared Tier-1.
- See Network Profile for Dedicated Tier-1 Topology for how to use a network profile to override the default and create new Dedicated Tier-1 Router clusters; a sketch follows below.
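The sketch below shows how such an override might look. It assumes the `single_tier_topology` parameter described in that topic; the profile name is hypothetical, and you should verify the parameter name and format against Network Profile for Dedicated Tier-1 Topology before use.

```sh
# Hypothetical network profile that opts new clusters back into the
# Dedicated Tier-1 Router topology; verify the parameter against the docs.
cat > dedicated-t1.json <<'EOF'
{
  "name": "dedicated-t1",
  "description": "Dedicated Tier-1 Router topology",
  "parameters": {
    "single_tier_topology": false
  }
}
EOF
pks create-network-profile dedicated-t1.json
```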
Supports backup and restore of the PKS control plane, Kubernetes clusters, and stateless workloads on vSphere with NSX-T.
- Adds proxy support for Telemetry. You can configure your Enterprise PKS proxy settings in the Enterprise PKS tile > Networking.
- Bumps Kubernetes to v1.16.7.
- Bumps NCP to v2.5.1.
- Bumps UAA to v74.5.10.
Enterprise PKS v1.7.0 includes the following bug fixes:
Improves the behavior of the `pks get-credentials` and `pks get-kubeconfig` commands during cluster updates and upgrades. You can now run the `pks get-kubeconfig` command during single- and multi-master cluster updates. Additionally, you can run the `pks get-credentials` command during multi-master cluster upgrades.
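For example, with a hypothetical cluster name, API hostname, and user:

```sh
# Fetch a kubeconfig while a single- or multi-master cluster update is running;
# all names and credentials below are placeholders.
pks get-kubeconfig my-cluster -a api.pks.example.com -u alice -p 'secret' -k

# Refresh cluster credentials during a multi-master cluster upgrade.
pks get-credentials my-cluster
```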
When upgrading Windows clusters, Enterprise PKS upgrades the Windows stemcell and Kubernetes version on Windows worker nodes. See Windows on PKS for more information.
Enterprise PKS v1.7.0 has the following breaking changes:
Prior versions of Enterprise PKS did not require the hostname in the PKS API certificate to match the PKS API hostname. In PKS v1.7, the PKS API certificate hostname must contain a valid hostname for the PKS API. This breaks the existing PKS API certificate if it has an invalid hostname.
To fix this issue, create a new PKS API certificate with the hostname that you entered in the Enterprise PKS tile > PKS API pane > API Hostname (FQDN) field.
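One way to verify the certificate that the PKS API serves is with `openssl`; the hostname and port 9021 (the default PKS API port) are assumptions for this sketch.

```sh
# Inspect the Subject Alternative Names presented by the PKS API certificate;
# replace api.pks.example.com with your API Hostname (FQDN).
openssl s_client -connect api.pks.example.com:9021 \
  -servername api.pks.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```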
Clusters created with PKS v1.7 do not have Dashboard installed on creation. For more information, see the Kubernetes Control Plane section.
The bump to Kubernetes v1.16 removes some API version definitions in favor of newer, more stable definitions. This change may break some integrations, controllers, and pipelines. For more information, see the Kubernetes Control Plane section.
This release migrates the Enterprise PKS control plane database from the PKS API VM to a new database VM, PKS Database.
Enterprise PKS performs the database migration as part of your Enterprise PKS upgrade to v1.7. During the upgrade, the Enterprise PKS control plane and the PKS CLI will be unavailable. Your Kubernetes workloads remain accessible throughout the upgrade.
The upgrade preparation topics contain important preparation and configuration steps that you must follow before and during your upgrade to v1.7.
Warning: If your Enterprise PKS environment was originally created using Enterprise PKS v1.2 or earlier, see Leftover Tables From PKS v1.2 Prevent Database Migration in the Known Issues below.
Enterprise PKS v1.7.0 has the following known issues:
PKS v1.2 created `pks` watermark tables in the Telemetry database on the `pivotal-container-service` VM. These leftover tables interfere with database migration.
If your PKS installation was originally created using PKS v1.2 or earlier, then before upgrading to Enterprise PKS v1.7 you must complete the procedures in the Knowledge Base article For environments created before VMware Enterprise PKS 1.3, upgrading to PKS 1.7 fails during data migration when running clone-db errand.
After you create a PKS cluster with a network profile that specifies a value for `ingress_ip`, clusters created subsequently, either with or without a network profile, incorrectly retain this same ingress IP value.
Because this address is already allocated in the floating IP (FIP) pool, creating the new clusters either fails to create HTTP and HTTPS servers for the clusters or creates them with conflicting addresses.
Creating a cluster with a network profile that defines `ingress_ip` also sets the value of `http_and_https_ingress_ip` in the NSX-T Container Plugin (NCP) configuration.
When Enterprise PKS creates new clusters, it does not overwrite this value in the NCP configuration.
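For reference, here is a minimal sketch of the kind of network profile that triggers this issue; the profile name, IP address, plan, and hostname below are hypothetical.

```sh
# Hypothetical network profile that pins the HTTP/HTTPS ingress VIP. Creating a
# cluster with it causes later clusters to reuse the same address.
cat > ingress-profile.json <<'EOF'
{
  "name": "ingress-profile",
  "description": "Pins the HTTP/HTTPS ingress VIP",
  "parameters": {
    "ingress_ip": "10.100.0.5"
  }
}
EOF
pks create-network-profile ingress-profile.json
pks create-cluster my-cluster --external-hostname my-cluster.example.com \
  --plan small --network-profile ingress-profile
```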
See the Resolution section in the Knowledge Base article Same ingress IP used across multiple clusters in PKS 1.7.
`pks upgrade-cluster` fails with an error that stopping the `dockerd` process timed out after 60 seconds.
A change in how Docker shuts down containers affects the interaction between stopping the kubelet and stopping Docker: graceful shutdown must now begin with the kubelet.
For each worker node in the cluster, run `monit unmonitor all` and `monit stop all`.
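One way to run these commands across all of a cluster's workers is with the BOSH CLI; the deployment name below is a placeholder (find yours with `bosh deployments`).

```sh
# Unmonitor and stop all jobs on every worker node of the cluster.
bosh -d service-instance_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee ssh worker \
  -c 'sudo /var/vcap/bosh/bin/monit unmonitor all && sudo /var/vcap/bosh/bin/monit stop all'
```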
Enterprise PKS v1.7 installations with Windows worker-based Kubernetes clusters on vSphere (Flannel) are not compatible with Ops Manager v2.9. If you do not intend to deploy and run Windows worker-based Kubernetes clusters, you can use Ops Manager v2.9 with Enterprise PKS v1.7.
For Ops Manager compatibility information, see VMware Tanzu Network.
Enterprise PKS inherits a Kubernetes limitation that prevents ICMP packets from returning to their source.
As a result, the
ping command might not work for Windows workers in PKS clusters.
For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.
Enterprise PKS on Google Cloud Platform (GCP) does not support Tanzu Mission Control integration, which is configured in the Enterprise PKS tile > the Tanzu Mission Control (Experimental) pane.
If you intend to run Enterprise PKS v1.7 on GCP, skip this pane when configuring the Enterprise PKS tile.
You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.
A large response header has exceeded your NSX-T load balancer's maximum response header size. The default maximum is 10,240 characters, which is too small for some OIDC responses.
If you experience this issue, manually reconfigure your NSX-T `response_header_size` to 50,000 characters.
For information about configuring NSX-T default header sizes, see OIDC Response Header Overflow in the Knowledge Base.
One of your plan IDs is one character longer than your other plan IDs.
In Enterprise PKS, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.
You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.
If you require all plan IDs to have identical length, do not activate or use Plan 4.
You have configured your NSX-T Edge Node VM as `medium` size, and the NSX-T Pre-Check Errand fails with the following error: `ERROR: NSX-T Precheck failed due to Edge Node … no of cpu cores is less than 8`.
The NSX-T Pre-Check Errand erroneously returns the “cpu cores is less than 8” error.
You can safely configure your NSX-T Edge Node VMs as `medium` size and ignore the error.
You must configure a global proxy in the Enterprise PKS tile > Networking pane before you create any Windows workers that use the proxy.
You cannot change the proxy configuration for Windows workers in an existing cluster.
The `metrics-server` component communicates over TLS v1.2 with weak ciphers.
All other Enterprise PKS components use TLS v1.2 with strong ciphers.
Release Date: April 16, 2020
Enterprise PKS Management Console v1.7.0 updates include:
- Role-based access control (RBAC). For information, see Identity Management in the Management Console.
- Quota management. For information, see Assign Resource Quotas to Users.
- Create, reconfigure, and delete clusters. For information, see Create Clusters in the Management Console.
- Network profiles. For information, see Working with Network Profiles.
- Kubernetes profile support. For information, see Create Clusters in the Management Console.
- Ops Manager FQDN support. For information, see Generate Configuration File and Deploy Enterprise PKS.
- Hybrid NAT mode for bring your own topology (BYOT) deployments. For information, see Configure a Bring Your Own Topology Deployment to NSX-T Data Center.
- LDAP validation. For information, see Use an External LDAP Server.
- Set multiple reserved IP ranges. For information, see Generate Configuration File and Deploy Enterprise PKS.
- Set the DRS VM-host affinity rule to `SHOULD` if you select host groups for availability zones. For information, see Configure Availability Zones.
- Enable HA on the Linux worker nodes that provide services to Windows clusters. For information, see Configure Plans.
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
| Element | Details |
|---------|---------|
| Release date | April 16, 2020 |
| Installed Enterprise PKS version | v1.7.0 |
| Installed Ops Manager version | v2.8.5 |
| Installed Kubernetes version | v1.16.7 |
| Compatible NSX-T versions | v2.5.1, v2.5.0, v2.4.3 |
| Installed Harbor Registry version | v1.10.1 |
The supported upgrade path to Enterprise PKS Management Console v1.7.0 is from Enterprise PKS v1.6.0 and later.
The Enterprise PKS Management Console v1.7.0 appliance and user interface have the following known issues.
Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the Kubernetes profile, it does not decode these file arguments. To read an encoded file argument, decode it manually, for example:

```sh
echo "$content" | base64 --decode
```
This release allows you to specify an FQDN for the Ops Manager VM during deployment of Enterprise PKS. This only applies to new deployments. You cannot specify an FQDN during upgrade.
If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.
Log out of the management console and log back in again.
In the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.
Please send any feedback you have to email@example.com.