Note: As of v1.8, Enterprise PKS has been renamed to VMware Tanzu Kubernetes Grid Integrated Edition. Some screenshots in this documentation do not yet reflect the change.
This topic contains release notes for Tanzu Kubernetes Grid Integrated Edition (TKGI) v1.8.
Warning: Before installing or upgrading to Tanzu Kubernetes Grid Integrated Edition v1.8, review the Breaking Changes below.
Release Date: June 30, 2020
| Element | Details |
|---------|---------|
| Release date | June 30, 2020 |
| Percona XtraDB Cluster (PXC) | v0.22.0 |
| Ops Manager | AWS, Azure, GCP: See VMware Tanzu Network<br>vSphere v7.0: Ops Manager v2.9.3 - v2.9.5*<br>vSphere v6.7 or v6.5: Ops Manager v2.9.3 - v2.9.5*, v2.8.2+, v2.7.15+ |
| vSphere** | See VMware Product Interoperability Matrices |
| NSX-T | v3.0***, v2.5.1, v2.5.0 |
| Xenial stemcells* | See VMware Tanzu Network<br>vSphere v7.0: v2019.15<br>vSphere v6.7 or v6.5: v2019.15 and later |
| CNS for vSphere | v1.0.2 |
| Backup and Restore SDK | v1.18.0 |
** Excluding VCF 4; see VCF 4 and Converged VDS v7 Not Supported in TKGI v1.8.
*** TKGI supports NSX-T v3.0 as a beta integration. Upgrading NSX-T to v3.0 is not recommended for production or large-scale TKGI environments. For more information about NSX-T v3.0 support, see NSX-T v3.0 Compatibility below.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.8.0 are from Enterprise PKS v1.7.0 and later patches.
This section describes new features and changes in VMware Tanzu Kubernetes Grid Integrated Edition v1.8.0.
Enterprise PKS has been renamed to Tanzu Kubernetes Grid Integrated Edition (TKGI).
What has changed:
- The Tanzu Kubernetes Grid Integrated Edition v1.8 tile uses the new name.
- Tanzu Kubernetes Grid Integrated Edition v1.8 includes two downloads of the CLI, the TKGI CLI and PKS CLI. See PKS CLI Renamed to TKGI CLI below.
What has not changed:
- Internal components continue to use the old name and its alternatives, such as `pivotal-container-service`. This includes, but is not limited to, BOSH names, UAA roles, and text strings containing the product name in TKGI components and TKGI-provisioned clusters.
If you intend to continue using the PKS CLI in TKGI v1.8, no action is required. However, future releases of TKGI will deprecate and remove the PKS CLI.
To support the product name change, Tanzu Kubernetes Grid Integrated Edition v1.8 is distributed with a TKGI CLI in addition to a PKS CLI.
Both CLIs work identically and accept the same commands and arguments.
To run a TKGI CLI command, enter `tkgi` where you previously used `pks`.
For more information, see TKGI CLI.
To download the TKGI CLI or the PKS CLI, see VMware Tanzu Network.
TKGI v1.8 can run on vSphere v7.
On vSphere, TKGI can run with NSX-T v3.0 container networking.
Warning: TKGI supports NSX-T v3.0 as a beta integration. Intermittent upgrade failures and scale problems may occur if you upgrade to NSX-T v3.0. Upgrading your NSX-T environment to v3.0 in a production or large-scale deployment is not recommended until a patch resolving these issues has been released.
The TKGI API VM no longer stores a copy of the control plane database that the v1.7 upgrade migrated to the Database VM. This deletion frees internal memory in the TKGI API VM. As a result, users may notice improved control plane performance.
The PKS 1.7.x Upgrade - MySQL Clone errand has been removed from the TKGI tile Errands pane.
- On Azure, TKGI supports disabling the creation of a default outbound SNAT rule for clusters. See Kubernetes Cloud Provider for how to disable the default SNAT rule.
- All TKGI components use TLS v1.2 with strong ciphers, including the TKGI API on port 443, which now reports only TLS v1.2+ ciphers.
- The legacy Telemetry DB has been removed from the TKGI Database.
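Client code that talks to TKGI components can enforce the same floor on its own side. This is an illustrative sketch using Python's standard `ssl` module, not part of TKGI itself:

```python
import ssl

# Build a client context that, matching TKGI v1.8 components,
# refuses any protocol older than TLS v1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Connections made through this context (for example, by wrapping a
# socket or passing the context to urllib) fail the handshake against
# endpoints that only offer TLS v1.1 or older.
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```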
The following components have been updated:
- Bumps Kubernetes to v1.17.5.
- Bumps NCP to v3.0.1.
- Bumps UAA to v74.5.15.
TKGI v1.8.0 includes the following bug fixes:
- `tkgi tasks` returns valid output for all clusters.
- The `tkgi upgrade-cluster` errand no longer times out when stopping.
- `tkgi get-credentials` works for clusters that have not been upgraded.
- `tkgi update-cluster` retains the `compute_profile` value when changing settings for clusters created with a Compute Profile.
TKGI v1.8.0 has the following known issues:
TKGI with NSX-T v3.0 is compatible with Linux Ubuntu Xenial stemcell v621.75, but not with stemcell versions v621.76 and later.
Because Ops Manager v2.9.6 uses Xenial stemcell 621.76, TKGI v1.8 with NSX-T on vSphere is incompatible with that version. TKGI v1.8 on NSX-T is compatible with Ops Manager v2.9.3 - v2.9.5.
For installations on vSphere v7 with NSX-T v3.0 integration, TKGI v1.8 supports only N-VDS for NSX-T traffic. It does not support:
- Converged Virtual Distributed Switch (C-VDS) v7, which lets you use the same VDS for both vSphere and NSX-T traffic
- VMware Cloud Foundation (VCF) v4.x, which uses only VDS mode with NSX-T v3.0
For more information, see Configure vSphere Networking for ESXi Hosts in Installing and Configuring NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition.
TKGI v1.8 installations with Windows worker-based Kubernetes clusters on vSphere (Flannel) are not compatible with Ops Manager v2.9. If you do not intend to deploy and run Windows worker-based Kubernetes clusters, you can use Ops Manager v2.9 with TKGI v1.8.
For Ops Manager compatibility information, see VMware Tanzu Network.
TKGI-provisioned Windows workers inherit a Kubernetes limitation that prevents outbound ICMP communication from workers. As a result, pinging Windows workers does not work.
For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.
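Because ICMP replies are blocked, liveness checks against Windows workers need a different transport. One common workaround is to probe a known TCP port instead of pinging; the sketch below is illustrative, and the port you probe (for example, the kubelet's default 10250) depends on your deployment:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a worker's kubelet port instead of pinging it.
# "worker-host" is a placeholder for a Windows worker address.
# tcp_reachable("worker-host", 10250)
```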
TKGI on Google Cloud Platform (GCP) does not support Tanzu Mission Control integration, which is configured in the Tanzu Kubernetes Grid Integrated Edition tile > Tanzu Mission Control (Experimental) pane.
If you intend to run TKGI v1.8 on GCP, skip this pane when configuring the Tanzu Kubernetes Grid Integrated Edition tile.
You experience a "502 Bad Gateway" error from the NSX-T load balancer after logging in through OIDC.

This occurs when a large response header exceeds the NSX-T load balancer's maximum response header size, which defaults to 10,240 characters. If you experience this issue, manually reconfigure your NSX-T `response_header_size` to 50,000 characters.
For information about configuring NSX-T default header sizes,
see OIDC Response Header Overflow in the Knowledge Base.
One of your plan IDs is one character longer than your other plan IDs.
In TKGI, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.
You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.
If you require all plan IDs to have identical length, do not activate or use Plan 4.
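For scripts that validate plan IDs, the normal shape can be expressed concretely. This sketch shows the standard 36-character UUID layout that the 37-character Plan 4 ID deviates from:

```python
import uuid

# A normal plan ID is a UUID: 32 alphanumeric (hex) characters
# plus 4 hyphens, for 36 characters in total.
plan_id = str(uuid.uuid4())
print(len(plan_id))                       # 36
print(plan_id.count("-"))                 # 4
print(sum(c.isalnum() for c in plan_id))  # 32

# The Plan 4 ID is one character longer:
# 33 alphanumeric characters + 4 hyphens = 37 characters.
```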
You have configured your NSX-T Edge Node VM as `medium` size, and the NSX-T Pre-Check Errand fails with the following error:
“ERROR: NSX-T Precheck failed due to Edge Node … no of cpu cores is less than 8”.
The NSX-T Pre-Check Errand is erroneously returning the “cpu cores is less than 8” error.
You can safely configure your NSX-T Edge Node VMs as `medium` size and ignore the error.
You must configure a global proxy in the Tanzu Kubernetes Grid Integrated Edition tile > Networking pane before you create any Windows workers that use the proxy.
You cannot change the proxy configuration for Windows workers in an existing cluster.
For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters:
Release Date: June 30, 2020
Tanzu Kubernetes Grid Integrated Edition Management Console v1.8.0 updates include:
- Support for vSphere 7
- Support for NSX-T 3.0
- Rebranding to Tanzu Kubernetes Grid Integrated Edition Management Console
- Ability to specify an FQDN for the Ops Manager VM during upgrade
Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions may differ from or be more limited than what is generally supported by TKGI.
| Element | Details |
|---------|---------|
| Release date | June 30, 2020 |
| Installed Tanzu Kubernetes Grid Integrated Edition version | v1.8.0 |
| Installed Ops Manager version | v2.9.0 |
| Installed Kubernetes version | v1.17.5 |
| Compatible NSX-T versions | v3.0, v2.5.1, v2.5.0 |
| Installed Harbor Registry version | v2.0, v1.10.3 |
| Windows stemcells | v2019.20 and later |
The supported upgrade path to Tanzu Kubernetes Grid Integrated Edition Management Console v1.8.0 is from Tanzu Kubernetes Grid Integrated Edition v1.7.0 and later.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.8.0 has the following known issues:
If you enable vSphere HA on a cluster, the TKGI Management Console appliance VM is running on a host in that cluster, and the host reboots, vSphere HA creates a new TKGI Management Console appliance VM on another host in the cluster. Due to an issue with vSphere HA, the `ovfenv` data for the newly created appliance VM is corrupted, and the new appliance VM does not boot up with the correct network configuration.
- In the vSphere Client, right-click the appliance VM and select Power > Shut Down Guest OS.
- Right-click the appliance again and select Edit Settings.
- Select VM Options and click OK.
- Verify under Recent Tasks that a `Reconfigure virtual machine` task has run on the appliance VM.
- Power on the appliance VM.
Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the Kubernetes profile, some file arguments are not decoded.
To view the decoded values, decode them manually, for example: `echo "$content" | base64 --decode`
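The same decoding can be scripted; a minimal Python equivalent of the shell command above, using a hypothetical encoded value in place of a real profile file argument:

```python
import base64

# Hypothetical base64-encoded file argument from a Kubernetes profile.
content = "SGVsbG8="
print(base64.b64decode(content).decode("utf-8"))  # Hello
```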
If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.
Log out of the management console and log back in again.
In the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.
Please send any feedback you have to email@example.com.