PKS Release Notes
Release Date: July 17, 2018
- Bumps version of UAA for security fix
Release Date: July 16, 2018
The supported upgrade path to PKS v1.1.1 is from PKS v1.0.2 or later.
To upgrade to PKS v1.1.1, follow the procedures in Upgrade PKS.
- UAA and security enhancements
- NSX-T patches
- Telemetry patch
- Kubernetes 1.10.4
Release Date: June 28, 2018
Note: The only supported upgrade path to PKS v1.1.0 is from PKS v1.0.2 or later. Do not upgrade directly to PKS v1.1.0 from v1.0.0. Instead, first upgrade PKS v1.0.0 to v1.0.2, and then upgrade v1.0.2 to v1.1.0. Alternatively, perform a clean install of PKS v1.1.0.
To upgrade to PKS v1.1.0, follow the procedures in Upgrade PKS.
This section describes new features introduced in PKS v1.1.0.
- Adds support for Kubernetes 1.10.3.
- Adds support for backing up and restoring PKS using BOSH Backup and Restore (BBR). For more information, see Backing Up and Restoring PKS.
- Adds support for granting PKS control plane access to clients and external LDAP groups. For more information, see the Grant Cluster Access section of Manage Users in UAA.
- Adds support for allowing workers to be deployed across Availability Zones (AZs).
- Adds support for network automation and node network isolation.
- Adds support for NFS by enabling rpcbind on worker nodes.
- Adds support for kube-controller-manager to issue certificates.
- Adds support for configuring HTTP/HTTPS proxy to be used by the Kubernetes control plane.
- Adds support for configuring the SecurityContextDeny admission controller. For more information, see Using Admission Controllers in the Kubernetes documentation.
- Enables the MutatingAdmissionWebhook admission controller. For more information, see Using Admission Controllers in the Kubernetes documentation.
- Enables audit logging for the API server.
- Creates logs for delete-all-cluster errands in the /var/vcap/sys/log/delete-all-clusters folder on the PKS control plane VM.
- Adds BOSH instance IDs to worker node labels.
- Hardens security by removing the ABAC authorization option for clusters.
- Hardens security by using service account IDs instead of service account keys for GCP deployments.
- Hardens security for Kubernetes system components. For example, kube-dns now uses its own configuration instead of the kubelet configuration.
- Adds support for NO-NAT deployment topologies for PKS installations on NSX-T. For more information, see Installing and Configuring PKS with NSX-T Integration.
- Adds support for PKS integration with VMware Wavefront to capture metrics for clusters and pods. For more information, see the (Optional) Monitoring section of Installing and Configuring PKS.
- Adds support for node network access via HTTP proxy for vSphere deployments. For more information, see the Networking section of Installing and Configuring PKS.
- Adds support for PKS integration with VMware vRealize Log Insight (vRLI) for tagged logging of the control plane, clusters, and pods. For more information, see the (Optional) Logging section of Installing and Configuring PKS.
- Adds support for integration with VMware Analytics Cloud (VAC) to capture telemetry information.
- Hardens security by removing VM change permissions from worker nodes for vSphere deployments.
- Hardens security by removing vCenter user credentials from worker nodes for vSphere deployments.
- Adds support for Harbor Registry integration enhancements: updated Harbor tile, ability to use NFS and Google Buckets as an image store, and HTTP/HTTPS proxy servers for Clair.
- Prevents unnecessary route creation in the kube-controller-manager.
- Retains the original source IP when using Flannel.
- Disables the read-only port in the kubelet configuration.
- Disables cAdvisor in the kubelet configuration.
- For added security, the Kubernetes API server no longer tries to fix malformed requests.
- The Kubernetes API server now cleans up terminated pods more often to avoid running out of disk space.
- The Kubernetes API server now unmounts volumes of terminated pods for security reasons.
Operators no longer have to manually delete NSX-T objects created during the life of the product. In PKS v1.1, running the pks delete-cluster command deletes all NSX-T objects.
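The NSX-T cleanup described above requires no separate step; it happens as part of the normal delete command. A minimal sketch, assuming a cluster named my-cluster (a placeholder name):

```shell
# Deletes the cluster; in PKS v1.1 this also removes the NSX-T objects
# created for the cluster, with no manual cleanup required.
pks delete-cluster my-cluster
```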
Adds support for deploying multiple Kubernetes master nodes across AZs. For information about configuring multiple masters, see the Plans section of Installing and Configuring PKS.
WARNING: This feature is a beta component and is intended for evaluation and test purposes only. Do not use this feature in a production environment. Product support and future availability are not guaranteed for beta components.
WARNING: You cannot change the number of master nodes for existing clusters. To use the multi-master feature, you must create a new plan that uses multiple master/etcd nodes and deploy a new cluster. If you are already using all three plan configurations in the PKS tile, you must delete a plan and all clusters you deployed using that plan before you can deploy a multi-master cluster.
PKS v1.1.0 includes or supports the following component versions:
WARNING: PKS v1.1.0 does not support Ops Manager v2.1.7 and later.
| Product Component | Version Supported | Notes |
|---|---|---|
| Pivotal Cloud Foundry Operations Manager (Ops Manager) | 2.1.0-2.1.6 | Separate download available from Pivotal Network |
| Kubernetes | 1.10.3 | Packaged in the PKS Tile (CFCR) |
| CFCR (Kubo) | 0.17 | Packaged in the PKS Tile |
| Golang | 1.9.7 | Packaged in the PKS Tile |
| NCP | 2.2 | Packaged in the PKS Tile |
| Kubernetes CLI | 1.10.3 | Separate download available from the PKS section of Pivotal Network |
| PKS CLI | 1.1 | Separate download available from the PKS section of Pivotal Network |
| VMware vSphere | 6.5 U2, 6.5 U1, and 6.5. Editions: | vSphere versions supported for Pivotal Container Service (PKS) |
| VMware NSX-T | 2.1 - Advanced Edition | NSX-T versions supported for Pivotal Container Service (PKS) |
| VMware Harbor Registry | 1.5.0 | Separate download available from Pivotal Network |
| VMware vRealize Log Insight (for vSphere deployments) | 4.6 | Separate download available from Pivotal Network |

* Components marked with an asterisk have been patched to resolve security vulnerabilities or fix component behavior.
This section includes known issues with PKS v1.1.0 and corresponding workarounds.
- PKS v1.1.0 does not support Ops Manager v2.1.7 and later. For more information, see Error: Duplicate Variable Name in the Troubleshooting topic.
- If you use PKS CLI v1.0.x with PKS tile v1.1.x, you must log in every 600 seconds to manually refresh the CLI token. Pivotal recommends upgrading to PKS CLI v1.1.x to solve this issue.
- If you upgrade PKS from v1.0.x to v1.1, you must enable the Upgrade All Clusters errand in the PKS tile configuration. This ensures existing clusters can perform resize or delete actions after the upgrade.
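For the token expiry issue above, re-authenticating with the PKS CLI is a single command. A sketch with placeholder values throughout (the API endpoint, username, password, and certificate path are all assumptions, not values from this document):

```shell
# Placeholder endpoint, credentials, and CA certificate path;
# logging in again issues a fresh CLI token.
pks login -a https://api.pks.example.com -u admin -p 'PASSWORD' \
    --ca-cert /path/to/root-ca.pem
```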
To reduce the risk of compromised clusters in your PKS deployment, Pivotal recommends the following policies:
- Ensure that only trusted operators and systems have access to clusters.
- Ensure that only trusted images are deployed to clusters.
- Maintain trusted images to consistently include current security fixes.
- Do not expose network ports to untrusted networks unless strictly required.
If Kubernetes master node VMs are recreated for any reason, you must reconfigure your cluster load balancers to point to the new master VMs. For example, after a stemcell upgrade, BOSH recreates the VMs in your deployment.
To reconfigure your GCP cluster load balancer to use the new master VM, follow the procedure in the Reconfiguring a GCP Load Balancer section of Configuring a GCP Load Balancer for PKS Clusters.
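For GCP, repointing the load balancer amounts to swapping the master VM instances in the load balancer's target pool. A hedged sketch using the gcloud CLI, with placeholder pool, instance, and zone names:

```shell
# Placeholder names throughout; remove the old master VM from the
# target pool, then add the recreated master VM in its place.
gcloud compute target-pools remove-instances my-pks-cluster-pool \
    --instances=old-master-vm --instances-zone=us-central1-a
gcloud compute target-pools add-instances my-pks-cluster-pool \
    --instances=new-master-vm --instances-zone=us-central1-a
```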
Attribute-based access control (ABAC) is no longer supported in PKS v1.1. Before upgrading to v1.1, delete any clusters that use ABAC.
In the Resource Config pane, the default VM Type is now large. This ensures that the PKS control plane VM has sufficient resources.
If the VMs in your PKS installation use the default VM type, your VMs will use the new large VM type after upgrading to PKS v1.1.0.
If the VMs in your PKS installation use a custom VM type, your configuration remains the same after upgrading to PKS v1.1.0.
Please send any feedback you have to email@example.com.