Release Notes
Warning: VMware Enterprise PKS v1.6 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic contains release notes for VMware Enterprise PKS v1.6.
Warning: Before installing or upgrading to Enterprise PKS v1.6, review the Breaking Changes below.
v1.6.3
Release Date: July 28, 2020
Features
New features and changes in this release:
- Bumps Kubernetes to v1.15.12.
- Bumps UAA to v73.4.21.
- Bumps the Windows stemcell to v2019.15.
Product Snapshot
Release | Details |
---|---|
Version | v1.6.3 |
Release date | July 28, 2020 |

Component | Version |
---|---|
Kubernetes | v1.15.12 |
CoreDNS | v1.3.1 |
Docker | v18.09.9 |
etcd | v3.3.12 |
Metrics Server | v0.3.3 |
NCP | v2.5.1 |
On-Demand Broker | v0.38.0 |
Percona XtraDB Cluster (PXC) | v0.22.0 |
UAA | v73.4.21 |

Compatibilities | Versions |
---|---|
Ops Manager | See Pivotal Network |
Xenial stemcells | See Pivotal Network |
Windows stemcells | v2019.15 |
Backup and Restore SDK | v1.17.0 |
vSphere | See VMware Product Interoperability Matrices |
NSX-T | v2.5.2, v2.5.1, v2.5.0*, v2.4.3 |
* VMware recommends NSX-T v2.5.1 or later for NSX-T v2.5 integration.
Upgrade Path
The supported upgrade paths to Enterprise PKS v1.6.3 are from Enterprise PKS v1.5.0 and later.
Breaking Changes
All breaking changes in Enterprise PKS v1.6.3 are also in Enterprise PKS v1.6.0. See Breaking Changes in Enterprise PKS v1.6.0.
Known Issues
All known issues in Enterprise PKS v1.6.3 are also in Enterprise PKS v1.6.0. See Known Issues in Enterprise PKS v1.6.0.
v1.6.2
Release Date: April 29, 2020
Features
New features and changes in this release:
- Bumps Kubernetes to v1.15.10.
- Bumps UAA to v73.4.20.
- Bumps Percona XtraDB Cluster (PXC) to v0.22.
- Bumps Windows Stemcell to v2019.15.
- Bumps ODB to v0.38.0.
- Bumps Apache Tomcat (in PKS API) to v9.0.31.
- [Security Fix] UAA bump fixes blind SCIM injection vulnerability, CVE-2019-11282.
- [Security Fix] UAA bump fixes CSRF attack vulnerability.
- [Security Fix] PXC bump fixes cURL/libcURL buffer overflow vulnerability, CVE-2019-3822.
- [Bug Fix] Improves the behavior of the `pks get-kubeconfig` and `pks get-credentials` commands during cluster updates and upgrades. You can now run the `pks get-kubeconfig` command during single- and multi-master cluster updates. Additionally, you can run the `pks get-credentials` command during multi-master cluster upgrades.
- [Bug Fix] New UAA version includes Apache Tomcat bump that fixes SAML login issues.
Product Snapshot
Release | Details |
---|---|
Version | v1.6.2 |
Release date | April 29, 2020 |

Component | Version |
---|---|
Kubernetes | v1.15.10 |
CoreDNS | v1.3.1 |
Docker | v18.09.9 |
etcd | v3.3.12 |
Metrics Server | v0.3.3 |
NCP | v2.5.1 |
On-Demand Broker | v0.38.0 |
UAA | v73.4.20 |

Compatibilities | Versions |
---|---|
Ops Manager | See Pivotal Network |
Xenial stemcells | See Pivotal Network |
Windows stemcells | v2019.15 |
Backup and Restore SDK | v1.17.0 |
vSphere | See VMware Product Interoperability Matrices |
NSX-T | v2.5.2, v2.5.1, v2.5.0, v2.4.3 |
Upgrade Path
The supported upgrade paths to Enterprise PKS v1.6.2 are from Enterprise PKS v1.5.0 and later.
Breaking Changes
All breaking changes in Enterprise PKS v1.6.2 are also in Enterprise PKS v1.6.0. See Breaking Changes in Enterprise PKS v1.6.0.
Known Issues
All known issues in Enterprise PKS v1.6.2 are also in Enterprise PKS v1.6.0. See Known Issues in Enterprise PKS v1.6.0.
v1.6.1
Release Date: January 13, 2020
Features
New features and changes in this release:
- [Security Fix] Secures traffic into Kubernetes clusters with up-to-date TLS (v1.2+) and approved cipher suites.
- [Security Fix] Bumps UAA to v73.4.16. This update prevents logging of secure information and enables the PKS UAA to start with the `env.no_proxy` property set.
- [Bug Fix] Resolves an issue where, if you are using Ops Manager v2.7 and PKS v1.6 as a fresh install, enabling Plans 11, 12, or 13 does not enable Windows worker-based clusters. It creates Linux-based clusters only. For more information, see Enterprise PKS Creates a Linux Cluster When You Expect a Windows Cluster.
- [Bug Fix] Resolves an issue where applying changes to Enterprise PKS fails if Plan 8 is enabled in the Enterprise PKS tile. For more information, see Applying Changes Fails If Plan 8 Is Enabled.
- [Bug Fix] Resolves an issue where the `pks update-cluster --network-profile` command sets `subnet_prefix` to 0 in the ncp.ini file if the network profile does not have `pod_subnet_prefix`. For more information, see Network Profile for “pks update-cluster” Does Not Use the Defaults from the Original Cluster Manifest.
- [Bug Fix] Resolves an issue where trying to create a cluster with a long network profile causes an error `Data too long for column 'nsxt_network_profile'`.
- Updates the supported NCP version to NCP v2.5.1. Refer to the NCP Release Notes for more information.
- Support for NSX-T v2.5.1.
Product Snapshot
Release | Details |
---|---|
Version | v1.6.1 |
Release date | January 13, 2020 |

Component | Version |
---|---|
Kubernetes | v1.15.5 |
CoreDNS | v1.3.1 |
Docker | v18.09.9 |
etcd | v3.3.12 |
Metrics Server | v0.3.3 |
NCP | v2.5.1 |
On-Demand Broker | v0.29.0 |
UAA | v73.4.16 |

Compatibilities | Versions |
---|---|
Ops Manager | See Pivotal Network |
Xenial stemcells | See Pivotal Network |
Windows stemcells | v2019.7 |
Backup and Restore SDK | v1.17.0 |
vSphere | See VMware Product Interoperability Matrices |
NSX-T | v2.5.1, v2.5.0, v2.4.3 |
Upgrade Path
The supported upgrade paths to Enterprise PKS v1.6.1 are from Enterprise PKS v1.5.0 and later.
Breaking Changes
All breaking changes in Enterprise PKS v1.6.1 are also in Enterprise PKS v1.6.0. See Breaking Changes in Enterprise PKS v1.6.0.
Known Issues
All known issues in Enterprise PKS v1.6.1 are also in Enterprise PKS v1.6.0. See Known Issues in Enterprise PKS v1.6.0.
v1.6.0
Release Date: November 14, 2019
Features
This section describes new features and changes in this release.
PKS Control Plane and API
Enterprise PKS v1.6.0 updates include:
- Enables operators to upgrade multiple Kubernetes clusters simultaneously and to designate specific upgrade clusters as canary clusters. For more information about multiple cluster upgrades, see Upgrade Clusters in Upgrading Clusters.
- Adds a new UAA scope, `pks.clusters.admin.read`, for Enterprise PKS users. For information about UAA scopes, see UAA Scopes for Enterprise PKS Users and Managing Enterprise PKS Users with UAA.
- Provides experimental integration with Tanzu Mission Control. For more information, see Tanzu Mission Control Integration.
- Enables operators to limit the total number of clusters a user can provision in Enterprise PKS. For more information about quotas, see Managing Resource Usage with Quotas and Viewing Usage Quotas.
- Enables operators to configure a single Kubernetes cluster with a specific Docker Registry CA certificate. For more information about configuring a cluster with a Docker Registry CA certificate, see Configuring Enterprise PKS Clusters with Private Docker Registry CA Certificates (Beta).
- Updates the `pks delete-cluster` PKS CLI command so that all cluster objects, including NSX-T networking objects, are deleted without the need to use the `bosh delete deployment` command to remove failed cluster deletions.
Kubernetes Control Plane
Enterprise PKS v1.6.0 updates include:
- Increases the Worker VM Max in Flight default value from `1` to `4` in the PKS API configuration pane, which accelerates cluster creation by allowing up to four new nodes to be provisioned simultaneously. The updated default value is only applied during new Enterprise PKS installation and is not applied during an Enterprise PKS upgrade. If you are upgrading Enterprise PKS from a previous version and want to accelerate multi-cluster provisioning, you can increase the value of Worker VM Max in Flight manually.
PKS Monitoring and Logging
Enterprise PKS v1.6.0 updates include:
- Redesigns the Logging and Monitoring panes of the Enterprise PKS tile and renames them to Host Monitoring and In-Cluster Monitoring. For information about configuring these panes, see the Installing Enterprise PKS topic for your IaaS.
- Adds the Max Message Size field in the Host Monitoring pane.
This allows you to configure the maximum number of characters of a log message that is forwarded to a syslog endpoint.
This feature helps ensure that log messages are not truncated at the syslog endpoint.
By default, the Max Message Size field is 10,000 characters.
For more information, see Host Monitoring in the Installing Enterprise PKS topic for your IaaS.
- Adds the Include kubelet metrics setting. This enables operators to collect workload metrics across all Kubernetes clusters. For more information, see Host Monitoring in the Installing Enterprise PKS topic for your IaaS.
- Adds support for Fluent Bit output plugins to log sinks. For information about configuring Fluent Bit output plugins, see Create a ClusterLogSink or LogSink Resource with a Fluent Bit Output Plugin in Creating and Managing Sink Resources.
- Adds support for filtering logs and events from a `ClusterLogSink` or `LogSink` resource. For more information, see Filter Sinks in Creating and Managing Sink Resources.
Windows on PKS
Enterprise PKS v1.6.0 updates include:
- Adds support for floating Windows stemcells on vSphere. For information about Kubernetes clusters with Windows workers in Enterprise PKS, see Configuring Windows Worker-Based Kubernetes Clusters (Beta).
- Enables operators to configure the location of the Windows pause image. For information about configuring Kubelet customization - Windows pause image location, see Plans in Configuring Windows Worker-Based Kubernetes Clusters (Beta).
PKS with NSX-T Networking
Enterprise PKS v1.6.0 updates include:
- NSX Error CRD lets cluster managers and users view NSX errors in Kubernetes resource annotations, and use the command `kubectl get nsxerror` to view the health status of NSX-T cluster networking objects (NCP v2.5.0+). For more information, see Viewing the Health Status of Cluster Networking Objects (NSX-T only); see also the example commands after this list.
- DFW log control for dropped traffic lets cluster administrators define a network profile to turn on logging and log any packet dropped or rejected by NSX-T distributed firewall rules (NCP v2.5.0+). For more information, see Defining Network Profiles for NCP Logging.
- Load balancer and ingress resource capacity observability using the NSXLoadBalancerMonitor CRD lets cluster managers and users use the command `kubectl get nsxLoadBalancerMonitors` to view a health score that reflects the current performance of the NSX-T load balancer service, including usage, traffic, and current status (NCP v2.5.1+). For more information, see Ingress Scaling (NSX-T only).
- Ingress scale out using the LoadBalancer CRD lets cluster managers scale out the NSX-T load balancer for ingress routing (NCP v2.5.1+). For more information, see Ingress Scaling (NSX-T only).
- Support for Ingress URL Rewrite. For more information, see Using Ingress URL Rewrite.
- Support for Active–Active Tier-0 router configuration when using a Shared-Tier-1 topology.
- Ability to place the load balancer and Tier-1 Active/Standby routers on different failure domains. See Multisite Deployment of NSX-T Data Center for more information.
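The two CRD-backed commands called out in the list above can be run with kubectl against a cluster once the corresponding NCP version is in place. A minimal usage sketch:

```bash
# Requires kubectl access to a PKS-provisioned cluster running NCP v2.5.0+
# (nsxerror) or NCP v2.5.1+ (nsxLoadBalancerMonitors).

# View the health status of NSX-T cluster networking objects.
kubectl get nsxerror

# View health scores for the NSX-T load balancer service.
kubectl get nsxLoadBalancerMonitors
```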
PKS on AWS Networking
Enterprise PKS v1.6.0 updates include:
- Support for HTTP/HTTPS Proxy on AWS. For more information see, Using Proxies with Enterprise PKS on AWS.
Customer Experience Improvement Program
Enterprise PKS v1.6.0 updates include:
- Administrators can name Enterprise PKS installations so they are more easily recognizable in reports. For more information, see Sample Reports.
Component Updates
Enterprise PKS v1.6.0 updates include:
- Bumps Kubernetes to v1.15.5.
- Bumps UAA to v73.4.8.
- Bumps Jackson dependencies in the PKS API.
Bug Fixes
Enterprise PKS v1.6.0 includes the following bug fixes:
- Fixes an issue where enabling the Availability Sets mode at the BOSH Director > Azure Config resulted in the kubelet failing to start on provisioning of a Kubernetes cluster.
- Fixes an issue where persistent volume attachment failed on vSphere in a scenario where an AZ defined in Ops Manager does not contain a resource pool.
- Increases `network_profile` column size.
- Fixes a Telemetry event generation issue where the `upgrade_cluster_end` event is not sent for completed cluster upgrades.
- Fixes an issue where networking changes did not propagate when upgrading from Enterprise PKS v1.5 or later.
- Fixes an issue where the Ingress IP address was excluded from the Enterprise PKS floating IP pool.
- Fixes an issue where the PKS OSB Proxy start was delayed by scanning all NSX-T firewall rules.
- Fixes an issue with the PKS clusters upgrade errand not pushing the latest NSX-T certificate to Kubernetes Master nodes.
- Fixes an issue with the PKS OSB Proxy taking a long time to start due to scanning all NSX-T firewall rules.
- Fixes an issue where PKS released floating IP addresses incompletely while deleting clusters in active/active mode.
- Fixes an issue with the DNS lookup feature where the Ingress IP address was not kept out of the PKS floating IP pool.
- Fixes an issue where the command `pks cluster details` does not display the NS Group ID of master VMs.
- Checks the high availability mode of the Tier-0 router before creating a PKS cluster.
Product Snapshot
Release | Details |
---|---|
Version | v1.6.0 |
Release date | November 14, 2019 |

Component | Version |
---|---|
Kubernetes | v1.15.5 |
CoreDNS | v1.3.1 |
Docker | v18.09.9 |
etcd | v3.3.12 |
Metrics Server | v0.3.3 |
NCP | v2.5.1 |
On-Demand Broker | v0.29.0 |
UAA | v73.4.8 |

Compatibilities | Versions |
---|---|
Ops Manager | See Pivotal Network |
Xenial stemcells | See Pivotal Network |
Windows stemcells | v2019.7 |
Backup and Restore SDK | v1.17.0 |
vSphere | See VMware Product Interoperability Matrices |
NSX-T | v2.5.0, v2.4.3 |
Upgrade Path
The supported upgrade paths to Enterprise PKS v1.6.0 are from Enterprise PKS v1.5.0 and later.
Breaking Changes
Enterprise PKS v1.6.0 has the following breaking changes:
Persistent Volume Data Loss with Worker Reboot
With old versions of Ops Manager, PKS worker nodes with persistent disk volumes may get stuck in a startup state and lose data when they are rebooted manually from the dashboard or automatically by vSphere HA.
This issue is fixed in the following Ops Manager versions:
- v2.8.0+
- v2.7.6+
- v2.6.16+
For all PKS installations that host workers using persistent volumes, Pivotal recommends upgrading to one of the Ops Manager versions above.
Enterprise PKS Removes Sink Commands in the PKS CLI
Enterprise PKS removes the following Enterprise PKS Command Line Interface (PKS CLI) commands:
- `pks create-sink`
- `pks sinks`
- `pks delete-sink`
You can use the following Kubernetes CLI commands instead:
- `kubectl apply -f YOUR-SINK.yml`
- `kubectl get clusterlogsinks`
- `kubectl delete clusterlogsink YOUR-SINK`
For more information about defining and managing sink resources, see Creating and Managing Sink Resources.
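As a hedged illustration of the replacement workflow, the sketch below creates a syslog ClusterLogSink from a manifest and then lists and deletes it with the kubectl commands above. The apiVersion, spec fields, and endpoint values shown are assumptions for illustration only; confirm the exact schema in Creating and Managing Sink Resources.

```bash
# Hypothetical sink manifest; field names and apiVersion may differ by PKS version.
cat > YOUR-SINK.yml <<'EOF'
apiVersion: pksapi.io/v1beta1
kind: ClusterLogSink
metadata:
  name: my-syslog-sink
spec:
  type: syslog
  host: logs.example.com   # placeholder syslog endpoint
  port: 514
  enable_tls: true
EOF

kubectl apply -f YOUR-SINK.yml                 # create the sink
kubectl get clusterlogsinks                    # list sinks in the cluster
kubectl delete clusterlogsink my-syslog-sink   # remove the sink
```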
Changes to PKS API Endpoints
This release moves the `clusters`, `compute-profiles`, `quotas`, and `usages` PKS API endpoints from `v1beta1` to `v1`.
`v1beta1` is no longer supported for these endpoints. You must use `v1`.
For example, instead of `https://YOUR-PKS-API-FQDN:9021/v1beta1/quotas`, use `https://YOUR-PKS-API-FQDN:9021/v1/quotas`.
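A request against the renamed endpoint might look like the following sketch. It assumes you have already obtained a UAA access token for the PKS API and exported it as TOKEN; the FQDN placeholder and port follow the example above.

```bash
# Call the v1 quotas endpoint (v1beta1 is no longer accepted).
# TOKEN is assumed to hold a valid UAA access token for the PKS API.
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://YOUR-PKS-API-FQDN:9021/v1/quotas"
```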
Known Issues
Enterprise PKS v1.6.0 has the following known issues.
Your Kubernetes API Server CA Certificate Expires Unless You Regenerate It
Symptom
Your Kubernetes API server’s `tls-kubernetes-2018` certificate is a one-year certificate instead of a four-year certificate.
Explanation
When you upgraded from PKS v1.2.7 to PKS v1.3.1, the upgrade process extended the lifespan of all PKS CA certificates to four years, except for the Kubernetes API server’s `tls-kubernetes-2018` certificate. The `tls-kubernetes-2018` certificate remained a one-year certificate.
Unless you regenerate the `tls-kubernetes-2018` certificate, it retains its one-year lifespan, even through subsequent Enterprise PKS upgrades.
Workaround
If you have not already done so, you should replace the Kubernetes API server’s one-year `tls-kubernetes-2018` certificate before it expires.
For information about generating and applying a new four-year `tls-kubernetes-2018` certificate, see How to regenerate tls-kubernetes-2018 certificate when it is not regenerated in the upgrade to PKS v1.3.x in the Pivotal Knowledge Base.
Cluster Upgrade Does Not Upgrade Kubernetes Version on Windows Workers
When PKS clusters are upgraded, Windows worker nodes in the cluster do not upgrade their Kubernetes version. The master and Linux worker nodes in the cluster do upgrade their Kubernetes version as expected.
When the Kubernetes version of a Windows worker does not exactly match the version of the master node, the cluster still functions. `kube-apiserver` has no restriction on lagging patch bumps.
PKS clusters upgrade manually with the `pks upgrade-cluster` command, or automatically with PKS upgrades when the Upgrade all clusters errand is set to Default (On) in the PKS tile Errands pane.
Network Profile for “pks update-cluster” Does Not Use the Defaults from the Original Cluster Manifest
Note: This issue is resolved in Enterprise PKS v1.6.1.
Symptom
The network profile for `pks update-cluster` uses the contents that are being updated and does not use the defaults from the original cluster manifest.
Explanation
The `pks update-cluster` operation sets the `subnet_prefix` to 0 in the ncp.ini file when the network profile has `pod_ip_block_ids` set but does not have `pod_subnet_prefix`.
Workaround
When creating the network profile to be used for the update, include all of the fields below. Running `pks update-cluster` with that network profile then works as expected:
{
"name": "np",
"parameters": {
"t0_router_id": "c501f114-870b-4eda-99ac-966adf464452",
"fip_pool_ids": ["b7acbda8-46de-4195-add2-5fb11ca46cbf"],
"pod_ip_block_ids": ["b03bff60-854b-4ccb-9b2b-016867b319c9","234c3652-69e7-4365-9627-8e0d8d4a6b86"],
"pod_subnet_prefix": 24,
"single_tier_topology": false
}
}
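As an example, assuming the JSON above is saved as np.json, the following sketch creates the profile and applies it during an update; the cluster name is a placeholder.

```bash
# Create the network profile from the JSON shown above.
pks create-network-profile np.json

# Update an existing cluster with the profile ("np" is the name set in the JSON).
pks update-cluster YOUR-CLUSTER-NAME --network-profile np
```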
Azure Default Security Group Is Not Automatically Assigned to Cluster VMs
Symptom
You experience issues when configuring a load balancer for a multi-master Kubernetes cluster or creating a service of type `LoadBalancer`.
Additionally, in the Azure portal, the VM > Networking page does not display any inbound and outbound traffic rules for your cluster VMs.
Explanation
As part of configuring the Enterprise PKS tile for Azure, you enter Default Security Group in the Kubernetes Cloud Provider pane. When you create a Kubernetes cluster, Enterprise PKS automatically assigns this security group to each VM in the cluster. However, on Azure the automatic assignment may not occur.
As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.
Workaround
If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
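One way to do this is with the Azure CLI, as in the hedged sketch below; the resource group, NIC, and security group names are placeholders, and you can equally make the assignment in the Azure portal under VM > Networking.

```bash
# Attach the default security group to one cluster VM NIC.
# Repeat for each NIC in the cluster; all names below are placeholders.
az network nic update \
  --resource-group YOUR-RESOURCE-GROUP \
  --name YOUR-CLUSTER-VM-NIC \
  --network-security-group YOUR-DEFAULT-SECURITY-GROUP
```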
Cluster Creation Fails When First AZ Runs Out of Resources
Symptom
If the first availability zone (AZ) used by a plan with multiple AZs runs out of resources, cluster creation fails with an error like the following:
Error: CPI error 'Bosh::Clouds::CloudError' with message 'No valid placement found for requested memory: 4096
Explanation
BOSH creates VMs for your Enterprise PKS deployment using a round-robin algorithm, creating the first VM in the first AZ that your plan uses. If the AZ runs out of resources, cluster creation fails because BOSH cannot create the cluster VM.
For example, if you have three AZs and you create two clusters with four worker VMs each, BOSH deploys VMs in the following AZs:
Cluster | AZ1 | AZ2 | AZ3 |
---|---|---|---|
Cluster 1 | Worker VM 1, Worker VM 4 | Worker VM 2 | Worker VM 3 |
Cluster 2 | Worker VM 1, Worker VM 4 | Worker VM 2 | Worker VM 3 |
In this scenario, AZ1 has twice as many VMs as AZ2 or AZ3.
Cluster Creation Fails with Long Network Profile
Note: This issue is resolved in Enterprise PKS v1.6.1.
Creating a cluster with a long network profile, such as with multiple `pod_ip_block_ids` values, causes an error `Data too long for column 'nsxt_network_profile'`.
Azure Worker Node Communication Fails after Upgrade
Symptom
Outbound communication from a worker node VM fails after upgrading Enterprise PKS.
Explanation
Enterprise PKS uses Azure Availability Sets to improve the uptime of workloads and worker nodes in the event of Azure platform failures. Worker node VMs are distributed evenly across Availability Sets.
Azure Standard SKU Load Balancers are recommended for the Kubernetes control plane and Kubernetes ingress and egress. This load balancer type provides an IP address for outbound communication using SNAT.
During an upgrade, when BOSH rebuilds a given worker instance in an Availability Set, Azure can time out while re-attaching the worker node network interface to the back-end pool of the Standard SKU Load Balancer.
For more information, see Outbound connections in Azure in the Azure documentation.
Workaround
You can manually re-attach the worker instance to the back-end pool of the Azure Standard SKU Load Balancer in your Azure console.
Error During Individual Cluster Upgrades
Symptom
While submitting a large number of cluster upgrade requests using the `pks upgrade-cluster` command, some of your Kubernetes clusters are marked as failed.
Explanation
BOSH upgrades Kubernetes clusters in parallel with a limit of up to four concurrent cluster upgrades by default. If you schedule more than four cluster upgrades, Enterprise PKS queues the upgrades and waits for BOSH to finish the last upgrade. When BOSH finishes the last upgrade, it starts working on the next upgrade request.
If you submit too many cluster upgrades to BOSH, an error may occur where some of your clusters are marked as `FAILED` because BOSH can start the upgrade only within the specified timeout.
The timeout is set to 168 hours by default.
However, BOSH does not remove the task from the queue or stop working on the upgrade if it has been picked up.
Solution
If you expect that upgrading all of your Kubernetes clusters takes more than 168 hours, do not use a script that submits upgrade requests for all of your clusters at once. For information about upgrading Kubernetes clusters provisioned by Enterprise PKS, see Upgrading Clusters.
Kubectl CLI Commands Do Not Work after Changing an Existing Plan to a Different AZ
Symptom
After you update the AZ of an existing plan, kubectl CLI commands do not work for your clusters associated with the plan.
Explanation
This issue occurs in IaaS environments that do not support attaching a disk across multiple AZs.
When the plan of an existing cluster changes to a different AZ, BOSH migrates the cluster by creating VMs for the cluster in the new AZ and removing your cluster VMs from the original AZ.
On an IaaS that does not support attaching VM disks across AZs, the disks BOSH attaches to the new VMs do not have the original content.
Workaround
If you cannot run kubectl CLI commands after reconfiguring the AZ of an existing cluster, contact Support for assistance.
Applying Changes Fails If Plan 8 Is Enabled
Note: This issue is resolved in Enterprise PKS v1.6.1.
Symptom
After you click Apply Changes on the Ops Manager Installation Dashboard, the following error occurs: `Cannot generate manifest for product Enterprise PKS`.
Explanation
This error occurs if Plan 8 is enabled in your Enterprise PKS v1.6.0 tile.
Workaround
Disable Plan 8 in the Enterprise PKS tile and move your plan settings to a plan that is available for configuration, for example, Plan 9 or 10.
To disable Plan 8:
- In Plan 8, select Plan > Inactive.
- Click Save.
One Plan ID Longer than Other Plan IDs
Symptom
One of your plan IDs is one character longer than your other plan IDs.
Explanation
In Enterprise PKS, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.
Solution
You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.
If you require all plan IDs to have identical length, do not activate or use Plan 4.
Kubernetes Cluster Name Limitation for Tanzu Mission Control Integration
Tanzu Mission Control integration cannot attach Tanzu Mission Control to Kubernetes clusters that have uppercase letters in their names.
Symptom
Clusters that you create with `pks create-cluster` do not appear in Tanzu Mission Control, even though you configured Tanzu Mission Control integration as described in Integrate Tanzu Mission Control.
Explanation
The regex pattern that parses cluster names in Tanzu Mission Control integration fails with names that contain uppercase letters.
Solution
When running `pks create-cluster` to create clusters that you want to track in Tanzu Mission Control, pass in names that contain only lowercase letters and numbers.
Enterprise PKS Creates a Linux Cluster When You Expect a Windows Cluster
Note: This issue is resolved in Enterprise PKS v1.6.1.
Symptom
When you create an Enterprise PKS cluster using Plan 11, 12, or 13, the cluster is created as a Linux cluster instead of a Windows cluster.
Explanation
When you create an Enterprise PKS cluster using Plan 11, 12, or 13, a Windows cluster should be created. However, if you are using Enterprise PKS v1.6 with Operations Manager v2.7, a Linux cluster is created instead.
Saving UAA Tab Settings Fails With Error: ‘InvalidURIError bad URI’
Symptom
When you save your UAA tab with LDAP Server selected and multiple LDAP servers specified, you receive the error: `URI::InvalidURIError bad URI(is not URI?):LDAP URLs`.
Explanation
When you configure the UAA tab with multiple LDAP servers, your settings fail to validate when using the following Ops Manager releases:
Ops Manager Version | Affected Releases |
---|---|
Ops Manager v2.6 | Ops Manager v2.6.18 and earlier patch releases. |
Ops Manager v2.7 | All patch releases. |
Ops Manager v2.8 | All patch releases. |
Workaround
To resolve this issue see the following:
Ops Manager Version | Workaround |
---|---|
Ops Manager v2.6 | Perform one of the following: |
Ops Manager v2.7 | Complete the procedures in UAA authentication tab in PKS 1.6 fails to save with error “URI::InvalidURIError bad URI(is not URI?):LDAP URLs” (76495) in the Pivotal Support Knowledge Base. |
Ops Manager v2.8 | Complete the procedures in UAA authentication tab in PKS 1.6 fails to save with error “URI::InvalidURIError bad URI(is not URI?):LDAP URLs” (76495) in the Pivotal Support Knowledge Base. |
Windows Worker Clusters Fail to Upgrade to v1.6
Symptoms
During your upgrade from Enterprise PKS v1.5 to Enterprise PKS v1.6, a Windows worker VM fails to upgrade, as evidenced by:
- The command line outputs an error `Failed jobs: docker-windows`.
- The Windows worker VM disappears from the output of `kubectl get nodes`.
- The command line shows the status `failed` and the action `UPGRADE` for the cluster that contains the worker.
- The log shows an entry `\docker\dockerd.exe: Access is denied`.
Explanation
Between PKS v1.5 and v1.6, the name of the Docker service changed from `docker` to `docker-windows`, but your environment continues to use the old Docker service name and paths.
The incompatible service name and pathing causes a Windows worker upgrade to fail.
If your cluster has multiple Windows workers, this issue does not incur downtime. Before BOSH attempts to upgrade a Windows worker, it moves the worker’s apps to other Windows workers in the cluster. When the upgrade fails, BOSH stops the cluster upgrade process and the other Windows workers continue running at the earlier version.
Workaround
After you upgrade to Enterprise PKS v1.6 and your Windows worker clusters fail to upgrade, complete the following steps:
- Upload a vSphere stemcell v2019.8 or later for Windows Server version 2019 to your Enterprise PKS tile.
- To upgrade your Windows worker clusters, perform one of the following:
  - Enable the Upgrade all clusters errand setting and deploy the PKS tile. For more information about configuring the Upgrade all clusters errand and deploying the Enterprise PKS tile, see Modify Errand Configuration in the Enterprise PKS Tile in Upgrading Clusters.
  - Run `pks upgrade-cluster` or `pks upgrade-clusters` on your failed Windows worker cluster(s); see the example commands after this list. For more information about upgrading specific Enterprise PKS clusters, see Upgrade Clusters in Upgrading Clusters.
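As an illustration of the second option, the commands below upgrade a single failed cluster or several clusters at once; the cluster names are placeholders.

```bash
# Upgrade one failed Windows worker cluster.
pks upgrade-cluster YOUR-WINDOWS-CLUSTER

# Or upgrade several clusters in one command.
pks upgrade-clusters --clusters windows-cluster-1,windows-cluster-2
```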
502 Bad Gateway After OIDC Login
Symptom
You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.
Explanation
A large response header has exceeded your NSX-T load balancer maximum response header size. The default maximum response header size is 10,240 characters and should be resized to 50,000.
Workaround
If you experience this issue, manually reconfigure your NSX-T `request_header_size` and `response_header_size` to 50,000 characters.
For information about configuring NSX-T default header sizes, see OIDC Response Header Overflow in the Pivotal Knowledge Base.
NSX-T Pre-Check Errand Fails Due to Edge Node Configuration
Symptom
You have configured your NSX-T Edge Node VM as `medium` size, and the NSX-T Pre-Check Errand fails with the following error:
“ERROR: NSX-T Precheck failed due to Edge Node … no of cpu cores is less than 8”.
Explanation
The NSX-T Pre-Check Errand is erroneously returning the “cpu cores is less than 8” error.
Solution
You can safely configure your NSX-T Edge Node VMs as `medium` size and ignore the error.
Character Limitations in HTTP Proxy Password
For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters: `&` or `;`.
Enterprise PKS Management Console 1.6.3
Release Date: July 28, 2020
Features
Other than support for Enterprise PKS v1.6.3, Enterprise PKS Management Console 1.6.3 has no new features.
Bug Fixes
Enterprise PKS Management Console 1.6.3 includes no bug fixes.
Product Snapshot
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
Element | Details |
---|---|
Version | v1.6.3 |
Release date | July 28, 2020 |
Installed Enterprise PKS version | v1.6.3 |
Installed Ops Manager version | v2.8.10 |
Installed Kubernetes version | v1.15.12 |
Compatible NSX-T versions | v2.5.1, v2.5.0*, v2.4.3 |
Installed Harbor Registry version | v1.9.4 |
* VMware recommends NSX-T v2.5.1 or later for NSX-T v2.5 integration.
Known Issues
The Enterprise PKS Management Console v1.6.3 appliance and user interface have the same known issues as v1.6.2.
Enterprise PKS Management Console 1.6.2
Release Date: April 29, 2020
Features
Other than support for Enterprise PKS v1.6.2, Enterprise PKS Management Console 1.6.2 has no new features.
Bug Fixes
Enterprise PKS Management Console 1.6.2 includes no bug fixes.
Product Snapshot
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
Element | Details |
---|---|
Version | v1.6.2 |
Release date | April 29, 2020 |
Installed Enterprise PKS version | v1.6.2 |
Installed Ops Manager version | v2.8.5 |
Installed Kubernetes version | v1.15.10 |
Compatible NSX-T versions | v2.5.0, v2.4.3 |
Installed Harbor Registry version | v1.9.4 |
Known Issues
The Enterprise PKS Management Console v1.6.2 appliance and user interface have the same known issues as v1.6.1.
Enterprise PKS Management Console 1.6.1
Release Date: January 23, 2020
Features
Other than support for Enterprise PKS v1.6.1, Enterprise PKS Management Console 1.6.1 has no new features.
Bug Fixes
Enterprise PKS Management Console 1.6.1 includes no bug fixes.
Product Snapshot
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
Element | Details |
---|---|
Version | v1.6.1 |
Release date | January 23, 2020 |
Installed Enterprise PKS version | v1.6.1 |
Installed Ops Manager version | v2.8.0 |
Installed Kubernetes version | v1.15.5 |
Compatible NSX-T versions | v2.5.0, v2.4.3 |
Installed Harbor Registry version | v1.9.3 |
Known Issues
The Enterprise PKS Management Console v1.6.1 appliance and user interface have the same known issues as v1.6.0-rev.3 and v1.6.0-rev.2.
Enterprise PKS Management Console 1.6.0-rev.3
Release Date: December 19, 2019
IMPORTANT: The Enterprise PKS Management Console 1.6.0-rev.3 offline patch can only be applied in an air-gapped environment. It can only be applied to 1.6.0-rev.2 and not to any other version. For information about how to apply the patch, see Patch Enterprise PKS Management Console Components.
Features
Enterprise PKS Management Console 1.6.0-rev.3 has no new features.
Bug Fixes
Enterprise PKS Management Console 1.6.0-rev.3 includes the following bug fixes:
- Fixes UI failure caused by multiple datacenters being present in vCenter Server.
- Adds support for both FQDN and IP addresses in LDAP/LDAPS configuration for identity management.
- Fixes UI freezing after entering unconventionally formatted URLs for SAML provider metadata.
- Adds support for UAA role `pks.clusters.admin.read` in Identity Management configuration.
- Adds validation for Harbor FQDN in lowercase.
- Fixes misconfigured Wavefront HTTP Proxy when field is left empty.
Product Snapshot
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
Element | Details |
---|---|
Version | v1.6.0-rev.3 |
Release date | December 19, 2019 |
Installed Enterprise PKS version | v1.6.0 |
Installed Ops Manager version | v2.7.3 |
Installed Kubernetes version | v1.15.5 |
Compatible NSX-T versions | v2.5.0, v2.4.3 |
Installed Harbor Registry version | v1.9.3 |
Known Issues
With the exception of the Bug Fixes listed above, the Enterprise PKS Management Console v1.6.0-rev.3 appliance and user interface have the same known issues as v1.6.0-rev.2.
Enterprise PKS Management Console v1.6.0-rev.2
Release Date: November 26, 2019
Features
Enterprise PKS Management Console v1.6.0-rev.2 updates include:
- Provides experimental integration with VMware Tanzu Mission Control. For more information, see Tanzu Mission Control Integration.
- Provides experimental support for plans that use Windows worker nodes. For information, see Configure Plans.
- Deploys Harbor registry v1.9. For information, see Configure Harbor.
- Adds support for active-active mode on the tier 0 router in automated-NAT deployments and No-NAT configurations in Bring Your Own Topology deployments. For information, see Configure Networking.
- Adds the ability to configure proxies for the integration with Wavefront. For information, see Configure a Connection to Wavefront.
- Adds the ability to configure the size of the PKS API VM. For information, see Configure Resources and Storage.
- Allows you to use the management console to upgrade to v1.6.0-rev.2. For information, see Upgrade Enterprise PKS Management Console.
Product Snapshot
Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.
Element | Details |
---|---|
Version | v1.6.0-rev.2 |
Release date | November 26, 2019 |
Installed Enterprise PKS version | v1.6.0 |
Installed Ops Manager version | v2.7.3 |
Installed Kubernetes version | v1.15.5 |
Compatible NSX-T versions | v2.5.0, v2.4.3 |
Installed Harbor Registry version | v1.9.3 |
Known Issues
The following known issues are specific to the Enterprise PKS Management Console v1.6.0-rev.2 appliance and user interface.
YAML Validation Errors Not Cleared
Symptom
If you attempt to upload a YAML configuration file and the deployment fails because of an invalid manifest, Enterprise PKS Management Console displays an error notification with the validation error. If subsequent attempts also fail because of validation issues, the validation errors are appended to each other.
Explanation
The validation errors are not cleared when you resubmit the YAML configuration file.
Workaround
None
Enterprise PKS Management Console Notifications Persist
Symptom
In the Enterprise PKS view of Enterprise PKS Management Console, error notifications sometimes persist in memory on the Clusters and Nodes pages after you clear those notifications.
Explanation
After you click the X button to clear a notification, it is removed, but when you navigate back to those pages the notification might appear again.
Workaround
Use shift+refresh to reload the page.
Cannot Delete Enterprise PKS Deployment from Management Console
Symptom
In the Enterprise PKS view of Enterprise PKS Management Console, you cannot use the Delete Enterprise PKS Deployment option even after you have removed all clusters.
Explanation
The option to delete the deployment is only activated in the management console a short period after the clusters are deleted.
Workaround
After removing clusters, wait for a few minutes before attempting to use the Delete Enterprise PKS Deployment option again.
Configuring Enterprise PKS Management Console Integration with VMware vRealize Log Insight
Symptom
Enterprise PKS Management Console appliance sends logs to VMware vRealize Log Insight over HTTP, not HTTPS.
Explanation
When you deploy the Enterprise PKS Management Console appliance from the OVA, if you require log forwarding to vRealize Log Insight, you must provide the port on the vRealize Log Insight server on which it listens for HTTP traffic. Do not provide the HTTPS port.
Workaround
Set the vRealize Log Insight port to the HTTP port. This is typically port `9000`.
Deploying Enterprise PKS to an Unprepared NSX-T Data Center Environment Results in Flannel Error
Symptom
When using the management console to deploy Enterprise PKS in NSX-T Data Center (Not prepared for PKS) mode, if an error occurs during the network configuration, the message `Unable to set flannel environment` is displayed in the deployment progress page.
Explanation
The network configuration has failed, but the error message is incorrect.
Workaround
To see the correct reason for the failure, see the server logs. For instructions about how to obtain the server logs, see Troubleshooting Enterprise PKS Management Console.
Using BOSH CLI from Operations Manager VM
Symptom
The BOSH CLI client bash command that you obtain from the Deployment Metadata view does not work when logged in to the Operations Manager VM.
Explanation
The BOSH CLI client bash command from the Deployment Metadata view is intended to be used from within the Enterprise PKS Management Console appliance.
Workaround
To use the BOSH CLI from within the Operations Manager VM, see Connect to Operations Manager.
From the Ops Manager VM, use the BOSH CLI client bash command from the Deployment Metadata page, with the following modifications (see the sketch after this list):
- Remove the clause `BOSH_ALL_PROXY=xxx`.
- Replace the `BOSH_CA_CERT` section with `BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate`.
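A hedged sketch of what the modified command might look like when run from the Ops Manager VM; the client name, client secret, and BOSH Director IP are placeholders that you take from your own Deployment Metadata page.

```bash
# Run a BOSH command from the Ops Manager VM.
# BOSH_ALL_PROXY has been removed and BOSH_CA_CERT points at the Ops Manager root CA.
BOSH_CLIENT=YOUR-BOSH-CLIENT \
BOSH_CLIENT_SECRET=YOUR-CLIENT-SECRET \
BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
BOSH_ENVIRONMENT=YOUR-BOSH-DIRECTOR-IP \
bosh vms
```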
Run `pks` Commands against the PKS API Server
Explanation
The PKS CLI is available in the Enterprise PKS Management Console appliance.
Workaround
To be able to run `pks` commands against the PKS API Server, you must first log in to PKS using the following command syntax: `pks login -a fqdn_of_pks ...`.
To do this, you must ensure either of the following:
- The FQDN configured for the PKS Server is resolvable by the DNS server configured for the Enterprise PKS Management Console appliance, or
- An entry that maps the Floating IP assigned to the PKS Server to the FQDN exists in /etc/hosts on the appliance. For example: `192.168.160.102 api.pks.local`. See the example command after this list.
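For example, after adding the /etc/hosts entry, a login from the appliance might look like the following sketch; the credentials and certificate path are placeholders, and `api.pks.local` matches the /etc/hosts example above.

```bash
# Log in to the PKS API before running other pks commands.
pks login -a api.pks.local -u YOUR-PKS-ADMIN -p 'YOUR-PASSWORD' --ca-cert /path/to/pks-api-ca.pem
```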
Please send any feedback you have to pks-feedback@pivotal.io.