PKS Release Notes

This topic contains release notes for Pivotal Container Service (PKS) v1.3.x.


v1.3.7

Release Date: July 17, 2019

Product Snapshot

Element Details
Version v1.3.7
Release date July 17, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.76
Kubernetes version v1.12.8
On-Demand Broker version v0.24.0
CFCR v0.25.14
NSX-T versions v2.3.1, v2.4.0.1
NCP version v2.4.0
Docker version v18.06.3-ce
docker-boshrelease

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Note: NSX-T v2.4 implements a new Policy API that PKS v1.3.7 does not support. If you are using NSX-T v2.4 with PKS v1.3.7, you must use the “Advanced Networking” tab in NSX Manager to create, read, update, and delete network objects required for PKS.

vSphere Version Requirements

If you are installing PKS on vSphere or on vSphere with NSX-T Data Center, see the VMware Product Interoperability Matrices for compatibility information.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.7 are as follows:

  • PKS v1.3.4 or later

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 Hot Patch. For more information, see KB article 67499 in the VMware Knowledge Base.
  • To obtain the NSX-T v2.4.0.1 Hot Patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

What’s New

PKS v1.3.7 adds the following:

  • Security Fix. Updates the Xenial stemcell to v170.76. This addresses the ZombieLoad CVE.
  • Security Fix. Fixes a security issue around PKS cluster restore. Use BOSH Backup and Restore (BBR) CLI v1.5.0 or later with this version of PKS, as shown in the check after this list.
  • Security Fix. Updates fluent-bit container image in the PKS Telemetry agent.
  • Security Fix. Updates base images to latest Xenial SHA for all sink resources.
  • Bug Fix. Enables kubelet to write data to /var/vcap/data/kubelet. This prevents pod eviction due to insufficient resources in emptyDir mounts.
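
As a quick pre-check for the BBR requirement above, you can confirm the CLI version before running a backup or restore. This assumes the bbr binary is installed on the VM or jumpbox where you run BBR:

bbr version   # should report v1.5.0 or later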

v1.3.6

Release Date: April 8, 2019

Product Snapshot

Element Details
Version v1.3.6
Release date April 8, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.7
On-Demand Broker version v0.24
CFCR v0.25.11
NSX-T versions v2.3.1, v2.4.0.1
NCP version v2.4.0
Docker version v18.06.3-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Note: NSX-T v2.4 implements a new Policy API that PKS v1.3.6 does not support. If you are using NSX-T v2.4 with PKS v1.3.6, you must use the “Advanced Networking” tab in NSX Manager to create, read, update, and delete network objects required for PKS.

vSphere Version Requirements

If you are installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1 EP06 (ESXi670-201901001) – for NSX-T 2.4
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2 P03 (ESXi650-201811002) – for NSX-T 2.4
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.6 are as follows:

  • PKS v1.3.4 or later

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 hot-patch. For more information, see VMware Knowledge Base KB article 67499.
  • To obtain the NSX-T v2.4.0.1 hot-patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

Features

New features and changes in this release:

  • Adds the Telemetry property environment_provider.
  • Adds support for nsx-cf-cni v2.4.0.12511604.
  • Adds the remaining plans to the osb-proxy configuration.

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.


PKS v1.3.6 has the following known issues:

Azure Resource Group Field in the Kubernetes Cloud Provider Is Ignored

The PKS tile’s Resource Group configuration is ignored on Azure IaaS platform deployments. On Azure, the PKS VM is always deployed to the same Resource Group as the Ops Manager and BOSH VMs. The Resource Group field is in the PKS tile’s Kubernetes Cloud Provider section.

NSX-T Upgrades from v2.3.X to v2.4.0.1 Fail for Bare Metal Edge Node

Upgrading NSX-T from v2.3.x to v2.4.0.1 fails for Bare Metal Edge Nodes.

If you are using Bare Metal Edge Nodes, do not upgrade from NSX-T v2.3.x to NSX-T v2.4.0.1.

Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure

PKS deploys packages to the ephemeral disk, /var/vcap/data, during installations and upgrades. If master or worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following:

{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}

Workaround: In the plans you use to deploy clusters, ensure that worker and master node ephemeral disks are set to greater than 8 GB. For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS.

This issue should not affect new installations of PKS v1.3.x as the default ephemeral disk size in plans is larger than 8 GB.
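
To spot-check ephemeral disk usage on an existing cluster before upgrading, you can run df over BOSH SSH. This is a sketch only; the deployment name service-instance_CLUSTER-UUID is illustrative, and you can list the actual names with bosh deployments:

bosh -d service-instance_CLUSTER-UUID ssh worker/0 -c 'df -h /var/vcap/data'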

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
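
A quick way to spot pods affected by this issue, assuming you have kubectl access to the cluster:

kubectl get pods --all-namespaces | grep ContainerCreating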

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.

The Kubelet Customization Feature Is Only Enabled for Plan 1

PKS v1.3.4 introduces the ability to configure the kubelet startup parameters system-reserved and eviction-hard within a plan. This capability is only functional in Plan 1 for PKS v1.3.4 and will be enabled in additional plans in the next release.
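
For reference, these plan fields correspond to the standard kubelet startup parameters shown below. The values are illustrative only; set them in the Plans pane of the PKS tile rather than passing them to kubelet by hand:

--system-reserved=cpu=200m,memory=1Gi
--eviction-hard=memory.available<100Mi,nodefs.available<10%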

v1.3.5

Release Date: March 28, 2019

Product Snapshot

Element Details
Version v1.3.5
Release date March 28, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.7
On-Demand Broker version v0.24
CFCR v0.25.11
NSX-T versions v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.2
Docker version v18.06.3-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

vSphere Version Requirements

If you are installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1 EP06 (ESXi670-201901001) – for NSX-T 2.4
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2 P03 (ESXi650-201811002) – for NSX-T 2.4
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.5 are as follows:

  • PKS v1.3.4 or later

Features

New features and changes in this release:

  • Support for Kubernetes v1.12.7.
  • Fix: CVE-2019-1002101. Kubernetes v1.12.7 addresses this CVE.
  • Fix: CVE-2019-9946. Kubernetes v1.12.7 addresses this CVE.

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.5 has the following known issues:

Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure

PKS deploys packages to the ephemeral disk, /var/vcap/data, during installations and upgrades. If master or worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following:

{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}

Workaround: In the plans you use to deploy clusters, ensure that worker and master node ephemeral disks are set to greater than 8 GB. For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS.

This issue should not affect new installations of PKS v1.3.x as the default ephemeral disk size in plans is larger than 8 GB.

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.

The Kubelet Customization Feature Is Only Enabled for Plan 1

PKS v1.3.4 introduces the ability to configure the kubelet startup parameters system-reserved and eviction-hard within a plan. This capability is only functional in Plan 1 for PKS v1.3.4 and will be enabled in additional plans in the next release.


v1.3.4

Release Date: March 26, 2019

Product Snapshot

Element Details
Version v1.3.4
Release date March 26, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.6
On-Demand Broker version v0.24
CFCR v0.25.11
NSX-T versions v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.2
Docker version v18.06.3-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

vSphere Version Requirements

If you are installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1 EP06 (ESXi670-201901001) – for NSX-T 2.4
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2 P03 (ESXi650-201811002) – for NSX-T 2.4
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.4 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1 or later
  • When upgrading from PKS v1.2.x: PKS v1.2.8 or later

Features

New features and changes in this release:

  • Custom DNS configuration for Kubernetes clusters using NSX-T and Network Profiles. For more information, see DNS Configuration for Kubernetes Clusters in Defining Network Profiles. A sketch of such a profile appears after this list.
  • Support for NSX-T NCP v2.3.2. For more information, see the VMware NSX Container Plug-in 2.3.2 Release Notes.
  • Support for additional plans. Operators can configure up to ten sets of resource types, or Plans, in the PKS tile. All plans except the first can be made available or unavailable to developers deploying clusters. Plan 1 must be configured and made available as a default for developers.
  • Kubelet customization. You can enable Kubelet to reserve compute resources for system daemons by configuring the startup parameters system-reserved and eviction-hard in the Plans pane of the PKS tile. For more information, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
  • Fix: CVE-2019-1002100. Kubernetes v1.12.6 addresses this CVE.
  • Fix: Updated the Telemetry URL.
  • Fix: Resolved an issue where vSphere Cloud Provider configuration could fail if credentials contained non-alphanumeric characters. For example, #, \, and ".
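
The following is a minimal sketch of a network profile that sets custom DNS servers for cluster nodes, assuming the nodes_dns parameter described in Defining Network Profiles; the file name and IP addresses are illustrative:

cat > np-custom-dns.json <<'EOF'
{
  "name": "np-custom-dns",
  "description": "Network profile with custom node DNS",
  "parameters": {
    "nodes_dns": ["192.168.115.1", "192.168.116.1"]
  }
}
EOF
pks create-network-profile np-custom-dns.json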

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.4 has the following known issues:

Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure

PKS deploys packages to the ephemeral disk, /var/vcap/data, during installations and upgrades. If master and worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following:

{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}

Workaround: In the plans you use to deploy clusters, ensure that the master and worker node ephemeral disks are set to greater than 8 GB. For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.

This issue should not affect new installations of PKS v1.3.x as the default ephemeral disk size in plans is larger than 8 GB.

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.

Kubelet Customization Feature Only Enabled for Plan 1

PKS v1.3.4 introduces the ability to configure Kubelet startup parameters system-reserved and eviction-hard within a plan. For more information, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.

This feature is only functional in Plan 1 for PKS v1.3.4 and will be enabled in additional plans in the next release.

v1.3.3

Release Date: February 22, 2019

Product Snapshot

Element Details
Version v1.3.3
Release date February 22, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.5
On-Demand Broker version v0.24
CFCR v0.25.9
NSX-T versions v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.3-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.3 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1 or v1.3.2
  • When upgrading from PKS v1.2.x: PKS v1.2.8 through v1.2.11

Features

New features and changes in this release:

  • Fix: CVE-2019-5736. This release updates the version of Docker deployed by PKS to v18.06.3-ce, which addresses a runc vulnerability whereby a malicious image could run in privileged mode and escalate to root access on worker nodes. Docker v18.06.2-ce, deployed by PKS v1.3.2, did not contain the correctly compiled runc binary; v18.06.3-ce does. You can verify the deployed version as shown below.
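
To verify the Docker version on a worker node after upgrading, you can query the BOSH-packaged Docker binary over SSH. The deployment name and package path below are illustrative and may differ in your environment:

bosh -d service-instance_CLUSTER-UUID ssh worker/0 \
  -c '/var/vcap/packages/docker/bin/docker --version'
# Expected output: Docker version 18.06.3-ce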

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.3 has the following known issues:

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and the credentials you enter in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile contain special characters, such as #, $, !, -, or the comma character, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.

v1.3.2

Release Date: February 13, 2019

Product Snapshot

Element Details
Version v1.3.2
Release date February 13, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.2-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.2 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1
  • When upgrading from PKS v1.2.x: PKS v1.2.8 or v1.2.9

Features

New features and changes in this release:

  • Fix: CVE-2019-3779. This fix addresses a vulnerability where certs signed by the Kubernetes API could be used to gain access to a PKS-deployed cluster’s etcd service.
  • Fix: CVE-2019-3780. This fixes a regression bug in PKS where vCenter IaaS credentials intended for the vSphere Cloud Provider were written to worker node VM disks.
  • Fix: Clusters can now be successfully created if there are pre-existing Kubernetes clusters using the same hostname.

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.2 has the following known issues:

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and the credentials you enter in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile contain special characters, such as #, $, !, -, or the comma character, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

Workaround: If you install PKS on Azure and experience this issue, contact Support for assistance.

v1.3.1

Release Date: February 8, 2019

WARNING: PKS v1.3.1 and earlier includes a critical CVE. Follow the procedures in the PKS upgrade approach for CRITICAL CVE article in the Pivotal Support Knowledge Base to perform an upgrade to PKS v1.3.2.

Product Snapshot

Element Details
Version v1.3.1
Release date February 8, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.1-ce

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

vSphere Version Requirements

If you are installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1 EP06 (ESXi670-201901001) – for NSX-T 2.4
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2 P03 (ESXi650-201811002) – for NSX-T 2.4
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer

Upgrade Path

The supported upgrade paths to PKS v1.3.1 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.0
  • When upgrading from PKS v1.2.x: PKS v1.2.7 or v1.2.8

Follow the procedures in the PKS upgrade approach for CRITICAL CVE article in the Pivotal Support Knowledge Base to perform an upgrade to PKS v1.3.2.

Features

New features and changes in this release:

  • Certificates for the Etcd instance for each Kubernetes cluster provisioned by PKS are generated with a four-year lifetime and signed by a new Etcd Certificate Authority (CA). A way to inspect certificate lifetimes is sketched after this list.
  • Fix: Upgrading PKS no longer fails if there are Kubernetes clusters with duplicate hostnames.
  • Fix: Deploying PKS no longer fails if an entry in the No Proxy field contains special characters, such as the hyphen (-) character.
  • Fix: The Kubernetes API now responds with the CA certificate that signed the Kubernetes cluster’s certificate, so customer scripts such as the get-pks-k8s-config.sh tool function again.
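
One way to confirm the lifetime of a regenerated etcd certificate is to inspect it with openssl over BOSH SSH. This is a sketch only; the deployment name and certificate path are illustrative and may differ in your deployment:

bosh -d service-instance_CLUSTER-UUID ssh master/0 \
  -c 'sudo openssl x509 -enddate -noout -in /var/vcap/jobs/etcd/config/etcd.crt'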

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3.x, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.1 has the following known issues:

PKS Flannel Network Gets out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and the credentials you enter in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile contain special characters, such as #, $, !, -, or the comma character, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

Cluster Upgrades from PKS v1.3.0 on Azure Fail If Services Are Exposed

If you install PKS v1.3.0 on Azure, clusters might fail with the following error when you upgrade to PKS v1.3.1 or later:

result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns

This issue is caused by a timeout condition. The issue affects nodes hosting Kubernetes pods that are exposed externally by a Kubernetes service.

New cluster creations and cluster scaling operations are not affected by this issue.

v1.3.0 - Withdrawn

Release Date: January 16, 2019

This release has been removed from Pivotal Network because it has a known vulnerability. This issue has been fixed in PKS v1.3.1.

Product Snapshot

Element Details
Version v1.3.0
Release date January 16, 2019
Compatible Ops Manager versions v2.3.1+, v2.4.0+
Xenial stemcell version v170.15
Kubernetes version v1.12.4
On-Demand Broker version v0.24
NSX-T versions * v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
Docker version v18.06.1-ce
CFCR

* PKS v1.3 supports NSX-T v2.2 and v2.3 with caveats.

Note: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

vSphere Version Requirements

If you are installing PKS on vSphere or vSphere with NSX‑T, note that Ops Manager and PKS support the following vSphere component versions:

Versions:
  • VMware vSphere 6.7 U1 EP06 (ESXi670-201901001) – for NSX-T 2.4
  • VMware vSphere 6.7 U1
  • VMware vSphere 6.7.0
  • VMware vSphere 6.5 U2 P03 (ESXi650-201811002) – for NSX-T 2.4
  • VMware vSphere 6.5 U2
  • VMware vSphere 6.5 U1

Editions:
  • vSphere Enterprise Plus
  • vSphere with Operations Management Enterprise Plus

Note: VMware vSphere 6.7 is only supported with Ops Manager v2.3.1 or later and NSX‑T v2.3.

For more information, see Upgrading vSphere in an NSX Environment in the VMware documentation.

Feature Support by IaaS

The following features have IaaS-specific availability across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer *

* For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Exposing Basic Workloads.
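
A minimal sketch of such a Service, assuming the standard Kubernetes AWS cloud provider annotation for internal load balancers; the workload name and selector are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF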

Upgrade Path

The supported upgrade paths to PKS v1.3.0 are from PKS v1.2.5 and later.

For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

Note: Upgrading from PKS v1.2.5+ to PKS v1.3.x causes all certificates to be automatically regenerated. The old certificate authority remains trusted and is valid for one year, but the new certificates are signed by a new certificate authority that is valid for four years.

Features

New features and changes in this release:

  • Support for PKS on Azure. For more information, see Azure.
  • BOSH Backup and Restore (BBR) for single-master clusters. For more information, see Back Up Cluster Deployments in Backing Up PKS, and Restore PKS Clusters in Restoring PKS.
  • Routable pods on NSX-T. For more information, see Routable Pod Networks in Defining Network Profiles.
  • Large size NSX-T load balancers with Bare Metal NSX-T edge nodes. For more information, see Hardware Requirements for PKS on vSphere with NSX-T.
  • HTTP proxy for NSX-T components. For more information, see Using Proxies with PKS on NSX-T.
  • Ability to specify the size of the Pods IP Block subnet using a network profile. For more information, see Pod Subnet Prefix in Defining Network Profiles.
  • Support for bootstrap security groups, custom floating IPs, and edge router selection using network profiles. For more information, see Bootstrap Security Group, Custom Floating IP Pool, and Edge Router Selection in Defining Network Profiles.
  • Support for sink resources in air-gapped environments.
  • Support for creating sink resources with the PKS Command Line Interface (PKS CLI). For more information, see Creating Sink Resources.
  • Sink resources include both pod logs and events from the Kubernetes API. These events are combined in a shared format that provides operators with a robust set of filtering and monitoring options. For more information, see Monitoring PKS with Sinks.
  • Support for multiple NSX-T Tier-0 (T0) logical routers for use with PKS multi-tenant environments. For more information, see Configuring Multiple Tier-0 Routers for Tenant Isolation.
  • Support for multiple PKS foundations on the same NSX-T. For more information, see Implementing a Multi-Foundation PKS Deployment.
  • Smoke tests errand that uses the PKS CLI to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the PKS tile is aborted. For more information, see the Errands section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
  • Support for scaling down the number of worker nodes. For more information, see Scaling Existing Clusters. An example resize command appears after this list.
  • Support for defining the CIDR range for Kubernetes pods and services on Flannel networks. For more information, see the Networking section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
  • Kubernetes v1.12.4.
  • Bug Fix: The No Proxy property for vSphere now accepts wildcard domains like *.example.com and example.com. See Networking in Installing PKS on vSphere for more information.
  • Bug Fix: Resolved an issue where NSX-T usernames and passwords containing special characters were not accepted.
  • Security Fix: CVE-2018-18264: This CVE allowed unauthenticated access to secrets through the Kubernetes Dashboard.
  • Security Fix: CVE-2018-15759: This CVE contains an insecure method of verifying credentials. A remote unauthenticated malicious user may make many requests to the service broker with a series of different credentials, allowing them to infer valid credentials and gain access to perform broker operations.
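
An example of scaling down with the PKS CLI, as referenced in the scaling bullet above; the cluster name and node count are illustrative:

pks resize my-cluster --num-nodes 3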

Breaking Changes and Known Issues

Breaking Change: Heapster is deprecated in PKS v1.3, and Kubernetes has retired Heapster. For more information, see the kubernetes-retired/heapster repository on GitHub.

PKS v1.3.0 has the following known issues:

Upgrades Fail When Clusters Share an External Hostname

If you use the same external hostname across more than one PKS-deployed Kubernetes cluster, upgrades from PKS v1.2.x to PKS v1.3.0 might fail. The external hostname is the value you set with either the -e or --external-hostname argument when you created the cluster. For more information, see Create a Kubernetes Cluster.
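
For reference, the external hostname is supplied when the cluster is created, so each cluster should be given a unique value; the names below are illustrative:

pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small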

PKS v1.3.0 introduces restrictions that prevent you from deploying clusters with duplicate hostnames, so this issue does not affect upgrades from PKS v1.3.0 and later.

If you have existing clusters that use the same external hostname, do not upgrade to PKS v1.3.x. Contact your Support representative for more information.

Upgrades Fail with a Hyphen in the No Proxy Field on vSphere

If you install PKS on vSphere and you enable the HTTP/HTTPS Proxy setting, you cannot use the - character in the No Proxy field. Entering - in the No Proxy field can cause validation errors when trying to upgrade to PKS v1.3.0. For more information, see the Networking section of Installing PKS on vSphere.

If you experience this issue during an upgrade, contact Support for a hotfix that will be applied in a future PKS v1.3.x release.

PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0)

When VMs have been powered down for multiple days, turning them back on and issuing a bosh recreate to re-create the VMs causes the pods to get stuck in a ContainerCreating state.

Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes

If you install PKS on vSphere and the credentials you enter in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile contain special characters, such as #, $, !, -, or the comma character, your deployment might fail with the following error:

ServerFaultCode: Cannot complete login due to an incorrect user name or password.

Workaround: If you install PKS on vSphere without NSX-T integration, place quotes around the credentials in the cloud provider configuration. For example, "SomeP4$$w0rd#!". Then redeploy the PKS tile by clicking Apply Changes.

If you install PKS on vSphere with NSX-T integration, avoid using special characters in this field until this issue is resolved.

PKS Selects the First AZ Only During Cluster Creation

If the first availability zone (AZ) used by a plan with multiple AZs runs out of resources, cluster creation fails with an error like the following:

Error: CPI error 'Bosh::Clouds::CloudError' with message 'No valid placement found for requested memory: 4096'

Explanation: BOSH creates VMs for your PKS deployment using a round-robin algorithm. It tries to create the first VM in the first AZ that your plan uses. If the first AZ runs out of resources, cluster creation fails and BOSH does not try to create more VMs.


Please send any feedback you have to pks-feedback@pivotal.io.