Pivotal Container Service v1.3

PKS Release Notes

This topic contains release notes for Pivotal Container Service (PKS) v1.3.x.

WARNING: PKS v1.3.1 and earlier includes a critical CVE. Follow the procedures in the PKS upgrade approach for CRITICAL CVE article in the Pivotal Support Knowledge Base to perform an upgrade to PKS v1.3.2.

v1.3.2

Release Date: February 13, 2019

Product Snapshot

Element Details
Version v1.3.2
Release date February 13, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions * v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.2-ce

Feature Support by IaaS

Support for the following features depends on your IaaS (AWS, Azure, GCP, vSphere, or vSphere with NSX-T):

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer *

Upgrade Path

The supported upgrade paths to PKS v1.3.2 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1
  • When upgrading from PKS v1.2.x: PKS v1.2.8 or v1.2.9

What’s New

PKS v1.3.2 adds the following:

  • Fix: CVE-2019-5736. This fix updates the version of Docker deployed by PKS to v18.06.2-ce. This Docker version addresses a runc vulnerability in which a malicious container image running in privileged mode could elevate to root access on worker nodes. See the verification example after this list.
  • Fix: CVE-2019-3779. This fix addresses a vulnerability where certificates signed by the Kubernetes API could be used to gain access to a PKS-deployed cluster’s etcd service.
  • Fix: CVE-2019-3780. This fix addresses a regression in PKS where vCenter IaaS credentials intended for the vSphere Cloud Provider were written to worker node VM disks.
  • Fix: Clusters can now be successfully created if there are pre-existing Kubernetes clusters using the same hostname.
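
After upgrading to PKS v1.3.2, you can confirm that worker nodes report the patched Docker version. The following check is illustrative and assumes you have kubectl access to the cluster; the exact version string format may vary:

# Show each node's container runtime version; expect a value corresponding to Docker v18.06.2-ce
kubectl get nodes -o wide

# Or query the runtime version field directly
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'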

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.2 has the following known issues:

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
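
To check whether a cluster is affected, look for pods stuck in the ContainerCreating state after the VMs are powered back on. An illustrative check, assuming kubectl access to the affected cluster:

# List pods stuck in ContainerCreating across all namespaces
kubectl get pods --all-namespaces | grep ContainerCreating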

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and you enter credentials in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile that contain special characters, such as #, $, !, -, or a comma, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to either PKS v1.3.1 or v1.3.2:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.
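
To identify the failing job before contacting Support, you can inspect the process state of the cluster deployment with the BOSH CLI. The commands below are illustrative; CLUSTER-UUID is a placeholder, and a BOSH environment alias is assumed to be configured:

# Find the BOSH deployment for the cluster (PKS names them service-instance_CLUSTER-UUID)
bosh deployments

# List instances and processes for that deployment; look for a failing kubelet job
bosh -d service-instance_CLUSTER-UUID instances --ps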

v1.3.1

Release Date: February 8, 2019

WARNING: PKS v1.3.1 and earlier includes a critical CVE. Follow the procedures in the PKS upgrade approach for CRITICAL CVE article in the Pivotal Support Knowledge Base to perform an upgrade to PKS v1.3.2.

Product Snapshot

Element Details
Version v1.3.1
Release date February 8, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions * v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.1-ce

vSphere Version Requirements

If installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

Support for the following features depends on your IaaS (AWS, Azure, GCP, vSphere, or vSphere with NSX-T):

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer *

Upgrade Path

The supported upgrade paths to PKS v1.3.1 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.0
  • When upgrading from PKS v1.2.x: PKS v1.2.7 or v1.2.8

Follow the procedures in the PKS upgrade approach for CRITICAL CVE article in the Pivotal Support Knowledge Base to perform an upgrade to PKS v1.3.2.

What’s New

PKS v1.3.1 adds the following:

  • Certificates for the Etcd instance for each Kubernetes cluster provisioned by PKS are generated with a four-year lifetime and signed by a new Etcd Certificate Authority (CA).
  • Fix: Upgrading PKS no longer fails during upgrades if there are Kubernetes clusters with duplicate hostnames.
  • Fix: Deploying PKS no longer fails if an entry in the No Proxy field contains special characters, such as the hyphen (-) character.
  • Fix: The Kubernetes API now responds with the CA certificate that signed the Kubernetes cluster’s certificate so that customer scripts such as the get-pks-k8s-config.sh tool will function again.

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.1 has the following known issues:

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and you enter credentials in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile that contain special characters, such as #, $, !, -, or a comma, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to either PKS v1.3.1 or v1.3.2:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

v1.3.0

Release Date: January 16, 2019

WARNING: PKS v1.3.0 has a known vulnerability and is no longer available. Install or upgrade to PKS v1.3.1.

Product Snapshot

Element Details
Version v1.3.0
Release date January 16, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
NSX-T versions * v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.1-ce
CFCR

* PKS v1.3 supports NSX-T v2.2 and v2.3 with the following caveats:

vSphere Version Requirements

If installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

Support for the following features depends on your IaaS (AWS, Azure, GCP, vSphere, or vSphere with NSX-T):

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer *

* For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Accessing Basic Workloads.
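
As a quick illustration of Service type:LoadBalancer, the following command creates a Service that exposes an app through a cloud load balancer. This is a sketch only: the nginx selector is a placeholder, and the internal AWS load balancer annotation is an assumption based on the upstream Kubernetes AWS cloud provider, so see the linked section for the PKS-documented configuration:

# Create an illustrative Service of type LoadBalancer
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
  annotations:
    # Assumed annotation for an internal (VPC-only) AWS load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF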

Upgrade Path

You can upgrade to PKS v1.3.0 from PKS v1.2.5 and later.

For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

Note: Upgrading from PKS v1.2.5+ to PKS v1.3.x causes all certificates to be automatically regenerated. The old certificate authority remains trusted and is valid for one year, but the new certificates are signed by a new certificate authority that is valid for four years.
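
To confirm the validity period of a regenerated certificate, you can inspect it with openssl. This is an illustrative check; the file path is a placeholder for a certificate you have exported from the cluster:

# Print the notBefore and notAfter dates of an exported certificate
openssl x509 -in cluster-cert.pem -noout -dates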

What’s New

PKS v1.3.0 adds the following:

  • Support for PKS on Azure. For more information, see Azure.
  • BOSH Backup and Restore (BBR) for single-master clusters. For more information, see Backing up the Single Master Cluster and Restoring the Single Master Cluster.
  • Routable pods on NSX-T. For more information, see Routable Pod Networks in Defining Network Profiles.
  • Large size NSX-T load balancers with Bare Metal NSX-T edge nodes. For more information, see Hardware Requirements for PKS on vSphere with NSX-T.
  • HTTP proxy for NSX-T components. For more information, see Using Proxies with PKS on NSX-T.
  • Ability to specify the size of the Pods IP Block subnet using a network profile. For more information, see Pod Subnet Prefix in Defining Network Profiles.
  • Support for bootstrap security groups, custom floating IPs, and edge router selection using network profiles. For more information, see Bootstrap Security Group, Custom Floating IP Pool, and Edge Router Selection in Defining Network Profiles.
  • Support for sink resources in air-gapped environments.
  • Support for creating sink resources with the PKS Command Line Interface (PKS CLI). For more information, see Creating Sink Resources.
  • Sink resources include both pod logs and events from the Kubernetes API. These are combined in a shared format that gives operators a robust set of filtering and monitoring options. For more information, see Monitoring PKS with Sinks.
  • Support for multiple NSX-T Tier-0 (T0) logical routers for use with PKS multi-tenant environments. For more information, see Configuring Multiple Tier-0 Routers for Tenant Isolation.
  • Support for multiple PKS foundations on the same NSX-T. For more information, see Implementing a Multi-Foundation PKS Deployment.
  • Smoke tests errand that uses the PKS CLI to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the PKS tile is aborted. For more information, see the Errands section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
  • Support for scaling down the number of worker nodes. See the example after this list. For more information, see Scaling Existing Clusters.
  • Support for defining the CIDR range for Kubernetes pods and services on Flannel networks. For more information, see the Networking section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
  • Kubernetes v1.12.4.
  • Bug Fix: The No Proxy property for vSphere now accepts wildcard domains like *.example.com and example.com. See Networking in Installing PKS on vSphere for more information.
  • Bug Fix: Resolved an issue where NSX-T usernames and passwords that contain special characters were not accepted.
  • Security Fix: CVE-2018-18264: This fix addresses a vulnerability that allows unauthenticated access to secrets through the Kubernetes Dashboard.
  • Security Fix: CVE-2018-15759: This fix addresses an insecure method of verifying credentials. A remote, unauthenticated malicious user could make many requests to the service broker with different credentials, infer valid credentials, and gain access to perform broker operations.
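
As an illustration of scaling a cluster down, the following commands are a sketch that assumes the pks resize syntax described in Scaling Existing Clusters; the cluster name and node count are placeholders:

# Scale the cluster named my-cluster down to 2 worker nodes
pks resize my-cluster --num-nodes 2

# Confirm the new worker node count
pks cluster my-cluster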

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.0 has the following known issues:

Upgrades Fail When Clusters Share an External Hostname

If you use the same external hostname across more than one PKS-deployed Kubernetes cluster, upgrades from PKS v1.2.x to PKS v1.3.0 might fail. The external hostname is the value you set with either the -e or --external-hostname argument when you created the cluster. For more information, see Create a Kubernetes Cluster.

PKS v1.3.0 introduces restrictions that prevent you from deploying clusters with duplicate hostnames, so this issue does not affect upgrades from PKS v1.3.0 and later.

If you have existing clusters that use the same external hostname, do not upgrade to PKS v1.3.x. Contact your Support representative for more information.
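
For reference, the external hostname is set at cluster creation time. The following commands are illustrative, assuming the PKS CLI syntax described in Create a Kubernetes Cluster; the plan and hostnames are placeholders. Each cluster must use a distinct external hostname:

# Create two clusters, each with its own external hostname
pks create-cluster cluster-one --external-hostname cluster-one.example.com --plan small
pks create-cluster cluster-two --external-hostname cluster-two.example.com --plan small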

Upgrades Fail with a Hyphen in the No Proxy Field on vSphere

If you install PKS on vSphere and you enable the HTTP/HTTPS Proxy setting, you cannot use the - character in the No Proxy field. Entering - in the No Proxy field can cause validation errors when trying to upgrade to PKS v1.3.0. For more information, see the Networking section of Installing PKS on vSphere.

If you experience this issue during an upgrade, contact Support for a hotfix that will be applied in a future PKS v1.3.x release.

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and you enter credentials in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile that contain special characters, such as #, $, !, -, or a comma, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.


Please send any feedback you have to pks-feedback@pivotal.io.
