Pivotal Container Service v1.2

PKS Release Notes

This topic contains release notes for Pivotal Container Service (PKS) v1.2.x.

v1.2.2

Release Date: November 14, 2018

Product Snapshot

Element Details
Version v1.2.2
Release date November 14, 2018
Compatible Ops Manager versions v2.2.2+, v2.3.1+
Stemcell version v97.17
Kubernetes version v1.11.3
On-Demand Broker version v0.23
NSX-T versions v2.2, v2.3
NCP version v2.3

Feature Support by IaaS

AWS GCP vSphere vSphere with NSX-T
Automatic PKS control plane load balancer *
Automatic cluster load balancer
HTTP proxy
Multi-AZ storage
Per-namespace subnets
Service type:LoadBalancer **

* Enter the load balancer name in the Resource Config tab to connect the load balancer to the PKS control plane. For more information, see the Resource Config section of Installing PKS on AWS.

** For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Accessing Basic Workloads.
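
As a minimal sketch of what the footnote above describes, a workload can request a cloud load balancer with a Service of type LoadBalancer. The service and app names below are hypothetical, and the internal AWS load balancer annotation value varies by Kubernetes version:

```yaml
# Hypothetical example: expose a workload through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb                 # hypothetical name
  annotations:
    # Requests an internal AWS load balancer; the expected value of this
    # annotation differs across Kubernetes versions.
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # must match your pod labels
  ports:
  - port: 80
    targetPort: 8080
```

After applying the manifest with kubectl apply, kubectl get svc shows the external address once the IaaS has provisioned the load balancer.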

Upgrade Path

The supported upgrade paths to PKS v1.2.2 are from PKS v1.1.5 and later.

For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

What’s New

PKS v1.2.2 includes updates to the containers that underlie sink resources and Wavefront integration. These updates do not add functionality and should not impact existing functionality.

Known Issues

There are no known issues.

v1.2.1

Release Date: November 2, 2018

Product Snapshot

Element Details
Version v1.2.1
Release date November 2, 2018
Compatible Ops Manager versions v2.2.2+, v2.3.1+
Stemcell version v97.17
Kubernetes version v1.11.3
On-Demand Broker version v0.23
NSX-T versions v2.2, v2.3
NCP version v2.3

Feature Support by IaaS

AWS GCP vSphere vSphere with NSX-T
Automatic PKS control plane load balancer *
Automatic cluster load balancer
HTTP proxy
Multi-AZ storage
Per-namespace subnets
Service type:LoadBalancer **

* Enter the load balancer name in the Resource Config tab to connect the load balancer to the PKS control plane. For more information, see the Resource Config section of Installing PKS on AWS.

** For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Accessing Basic Workloads.

Upgrade Path

The supported upgrade paths to PKS v1.2.1 are from PKS v1.1.5 and later.

For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

What’s New

PKS v1.2.1 adds support for the following:

  • Routable pod networks for assigning each pod in a Kubernetes cluster a routable (public) IP address. For more information, see Routable IP Addresses for Pods in Using Network Profiles (NSX-T Only).
  • Configurable maximum number of worker nodes per Kubernetes cluster. Previously the maximum was 50 and not configurable. For more information, see the Plans section of the Installing PKS topic for your IaaS. For example, Plans in Installing PKS on vSphere.
  • Sink resources for Kubernetes clusters. For more information, see Creating Sink Resources.
  • Kubernetes v1.11.3.
  • Updated On-Demand Broker.
  • Updated UAA.

Known Issues

There are no known issues.

v1.2.0

Release Date: September 27, 2018

Product Snapshot

Element Details
Version v1.2.0
Release date September 27, 2018
Compatible Ops Manager versions v2.2.2+, v2.3.1+
Stemcell version v97.17
Kubernetes version v1.11.2
On-Demand Broker version v0.22
NSX-T versions v2.2, v2.3
NCP version v2.3

Feature Support by IaaS

AWS GCP vSphere vSphere with NSX-T
Automatic PKS control plane load balancer *
Automatic cluster load balancer
HTTP proxy
Multi-AZ storage
Per-namespace subnets
Service type:LoadBalancer **

* Enter the load balancer name in the Resource Config tab to connect the load balancer to the PKS control plane. For more information, see the Resource Config section of Installing PKS on AWS.

** For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Accessing Basic Workloads.

Upgrade Path

The supported upgrade paths to PKS v1.2.0 are from PKS v1.1.5 and later.

For customers who have deployed PKS v1.1.5 with NSX-T, NSX-T v2.2 is the version supported for upgrades to PKS v1.2.0.

For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

What’s New

PKS v1.2.0 adds support for the following:

  • Network profiles for per-cluster customization and choice of load balancer size for PKS deployments with NSX-T. For more information, see Using Network Profiles (NSX-T Only).
  • Xenial stemcells.
  • Multi-master clusters. For more information, see the Plans section of Installing PKS for your IaaS.
  • OpenID Connect (OIDC) authentication strategy in Kubernetes. For more information, see the Configure OpenID Connect section of Installing PKS for your IaaS.
    • Cluster administrators can use LDAP users and groups in RoleBinding and ClusterRoleBinding objects. For more information, see Managing Users in PKS with UAA.
  • Namespace sinks. For more information, see Creating Sink Resources.
  • PKS can be deployed on Amazon Web Services (AWS). For more information, see the Amazon Web Services (AWS) topic.
  • You can specify the number of worker nodes to be installed in parallel. For more information, see the PKS API section of Installing PKS for your IaaS.
  • Metrics server is deployed by default. Heapster is still deployed but will be removed in a future release, per the Kubernetes deprecation notice.
  • Support for Horizontal Pod Autoscaling.
  • Support for the HostPort feature to allow pods to open external ports on the worker node.
  • etcd release v3.3.9.
  • Updated admission controllers based on Kubernetes recommendations, including DefaultTolerationSeconds and ValidatingAdmissionWebhook. NamespaceExists has been removed.
  • Changed the Docker storage driver from overlay to overlay2. Old images remain on each worker in the /var/vcap/data/docker/docker/overlay directory.
  • Support for NTLM-formatted usernames for vSphere.
  • Improved drain script for large cluster upgrades.
  • Deprecated support for NSX-T v2.1.
  • Fix: vSphere credentials are not stored in the BOSH manifest.
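
To illustrate the HostPort item above, a minimal, hypothetical pod spec that opens an external port on the worker node might look like the following; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                  # placeholder image
    ports:
    - containerPort: 80
      hostPort: 8080              # binds port 8080 on the worker node itself
```

Note that a pod using hostPort can only be scheduled onto nodes where that port is free, so this feature is best reserved for workloads that genuinely need node-level ports.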

Known Issues

  • If you use a space in any field entry in the PKS tile, the deployment of PKS fails. Ensure your field entries in the PKS tile do not contain leading and trailing spaces or spaces between characters.
  • When the PKS tile is being redeployed (during PKS tile upgrade, for instance), the following error message may appear in the Ops Manager status log: Failed Jobs: pks-api. The workaround is to disable telemetry data collection in the Usage Data pane of the PKS tile.
  • For PKS with NSX-T, using the Generate RSA Certificate option in the Networking section of the PKS tile for generating the NSX Manager Super User Principal Identity Certificate results in the following error during deployment of PKS:

    ERROR: NSX-T Precheck failed due to error code:
    403, error message: The credentials were incorrect
    or the account specified has been locked.

    This error is the result of a change in the cURL version as part of the stemcell upgrade from Ubuntu v14.04 to v16.04. In Ubuntu v16.04, cURL is built against GnuTLS instead of OpenSSL. As a workaround, generate the principal identity certificate and key manually, as described in Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key.

  • Namespace sinks do not work in environments without internet access.

  • Due to a limitation with the NSX-T v2.2 scheduler component, VMware recommends that you do not use a medium-sized load balancer at this time, even if the NSX-T edge cluster has more than two edge node VMs. This limitation is addressed in NSX-T v2.3, which PKS v1.2.0 supports.

  • When using AWS, you must select a VM type under Master/ETCD VM Type, Worker VM Type, and Errand VM Type in the Plans section of the PKS tile in order to save a plan on the tile. You cannot leave the VM type on Automatic. The recommended minimum VM type is t2.medium.

  • Existing certificates will expire after a year. The certificates will be updated in a future release.

  • The External Groups Whitelist field in the UAA section of the PKS tile has a 4000 character limit due to the size limitation of JWT tokens.

  • In an environment without internet access, the images for the kube-system components must be present within the environment to allow the overlay2 upgrade.

  • Kubernetes end users must manually configure their kubeconfig in order to use their LDAP credentials if OIDC is turned on.
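
As a sketch of that manual kubeconfig step, a user entry using the OIDC auth provider looks roughly like the following fragment; the issuer URL, client ID, and token are placeholders that depend on your UAA configuration:

```yaml
# Fragment of ~/.kube/config; all values shown are placeholders.
users:
- name: ldap-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://PKS-API-HOSTNAME:8443/oauth/token  # UAA issuer URL (placeholder)
        client-id: kubernetes                                      # placeholder client ID
        id-token: <ID-TOKEN>
        # refresh-token omitted: UAA refresh tokens for OIDC are not
        # supported in this release (see the known issue below).
```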

  • UAA refresh token for OIDC authorization is currently not supported.

  • When creating a cluster with the pks create-cluster command, you cannot use the \ character in the value for --external-hostname. For more information about creating clusters, see the Create a Kubernetes Cluster section of Creating Clusters.

  • When a cluster is created, the output logs contain the following warning: Warning: DNS address not available for the link provider instance: pivotal-container-service/[uuid]. This warning has no effect on cluster creation.

  • Enabling Telemetry on environments without Internet access causes tile installation to fail.

  • When Enable UAA as OIDC Provider is selected in the UAA pane of the PKS tile, the Kubernetes Dashboard no longer works with the kubeconfig option. Currently, the Kubernetes Dashboard does not support external identity providers or certificate-based authentication.


Please send any feedback you have to pks-feedback@pivotal.io.
