
Enterprise PKS Release Notes

This topic contains release notes for Enterprise Pivotal Container Service (Enterprise PKS) v1.4.x.

v1.4.0

Release Date: April 25, 2019

Product Snapshot

Version: v1.4.0
Release date: April 25, 2019
Compatible Ops Manager versions: v2.4.3+, v2.5.0+
Stemcell version: v250.25
Kubernetes version: v1.13.5
On-Demand Broker version: v0.26.0
NSX-T versions*: v2.3.1, v2.4.0.1
NCP version: v2.4.0
Docker version: v18.06.3-ce
CFCR:
Compatible BBR version: v1.5.0+

Note: NSX-T v2.4 implements a new user interface (UI) based on the NSX Policy API. PKS v1.4.0 does not support the NSX Policy API. Any objects created via the new Policy-based UI cannot be used with PKS v1.4.0. If you are installing PKS v1.4.0 with NSX-T v2.4.x, or upgrading to PKS v1.4.0 and NSX-T 2.4.x, you must use the “Advanced Networking” tab in NSX Manager to create, read, update, and delete all networking objects required for PKS.

Note: PKS v1.4.0 on Azure is compatible only with Ops Manager v2.4.2 and v2.4.3. Before deploying PKS v1.4.0 on Azure, you must install Ops Manager v2.4.2 or v2.4.3.

vSphere Version Requirements

For Enterprise PKS installations on vSphere or on vSphere with NSX-T Data Center, refer to the VMware Product Interoperability Matrices.

Feature Support by IaaS

Support for the following features varies across AWS, Azure, GCP, vSphere, and vSphere with NSX-T:

  • Automatic Kubernetes Cluster API load balancer
  • HTTP proxy
  • Multi-AZ storage
  • Per-namespace subnets
  • Service type:LoadBalancer *

* For more information about configuring Service type:LoadBalancer on AWS, see the Access Workloads Using an Internal AWS Load Balancer section of Deploying and Exposing Basic Workloads.
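
For reference, a basic Service of type LoadBalancer might look like the following sketch. The nginx selector, the port, and the internal AWS load balancer annotation value are illustrative only, so confirm the details in Deploying and Exposing Basic Workloads:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      annotations:
        # Illustrative annotation for an internal AWS load balancer; the value
        # convention can vary by Kubernetes version.
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80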

Upgrade Path

The supported upgrade path to PKS v1.4.0 is from PKS v1.3.2 and later.

To upgrade, see Upgrading Enterprise PKS and Upgrading Enterprise PKS with NSX-T.

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 hot-patch. For more information, see KB article 67499 in the VMware Knowledge Base.
  • To obtain the NSX-T v2.4.0.1 hot-patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

What’s New

Enterprise PKS v1.4.0 adds the following:

  • Operators can configure up to ten sets of resource types, or plans, in the Enterprise PKS tile. All plans except the first can be made available or unavailable to developers deploying clusters. Plan 1 must be configured and made available as a default for developers. For more information, see the Plans section of the installation topic for your IaaS, such as Installing Enterprise PKS on vSphere with NSX-T.

  • Operators can deploy up to five master nodes per plan. For more information, see the Plans section of the installation topic for your IaaS, such as Installing Enterprise PKS on vSphere with NSX-T.

  • Operators can install PKS and Pivotal Application Service (PAS) on the same instance of Ops Manager.

  • Improved workflow for managing cluster access. For more information, see Grant Cluster Access in Managing Users in Enterprise PKS with UAA.

  • Operators can create webhook ClusterSink resources. A webhook ClusterSink resource batches logs into one-second units, wraps the resulting payload in JSON, and uses the POST method to deliver the logs to the address of your log management service. For more information, see Create a Webhook ClusterSink Resource with YAML and kubectl in Creating Sinks. A sketch of such a resource appears after this list.

  • Operators can set quotas for maximum memory and CPU utilization in a PKS deployment. For more information, see Managing Resource Usage. This is a beta feature.

    Warning: This feature is a beta component and is intended for evaluation and test purposes only. Do not use this feature in a production environment. Product support and future availability are not guaranteed for beta components.

  • Operators can enable the PodSecurityPolicy admission plugin on a per-plan basis, requiring cluster users to have policy, role, and role binding permissions to deploy pod workloads. See Pod Security Policies for more information.

  • Operators can enable the SecurityContextDeny admission plugin on a per-plan basis to prohibit the use of security context configurations on pods and containers. See Security Context Deny for more information.

  • Operators can enable the DenyEscalatingExec admission plugin on a per-plan basis to prohibit the use of certain commands for containers that allow host access. See Deny Escalating Execution for more information.

  • Operators using vSphere can use HostGroups to define Availability Zones (AZs) for clusters in BOSH. See Using vSphere Host Groups.

  • Operators using vSphere can configure compute profiles to specify which vSphere resources are used when deploying Kubernetes clusters in a PKS deployment. For more information, see Using Compute Profiles (vSphere Only).

    Warning: This feature is a beta component and is intended for evaluation and test purposes only. Do not use this feature in a production environment. Product support and future availability are not guaranteed for beta components.

  • Operators using vSphere with NSX-T can update a Network Profile and add to or reorder the Pods IP Block IDs. For more information, see the Change the Network Profile for a Cluster section of Using Network Profiles (NSX-T Only).
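
As referenced in the webhook ClusterSink item above, the following YAML is a minimal sketch only; the API version and the type and url field names are assumptions based on the ClusterSink schema, so confirm them against Create a Webhook ClusterSink Resource with YAML and kubectl in Creating Sinks:

    apiVersion: apps.pivotal.io/v1beta1   # assumed API group for PKS sink resources
    kind: ClusterSink
    metadata:
      name: webhook-sink
    spec:
      type: webhook                       # assumed field selecting the webhook sink type
      url: https://example.com/logs       # address of your log management service

After saving this as webhook-sink.yml, applying it with kubectl apply -f webhook-sink.yml causes logs to be batched into one-second units and delivered to the configured URL with the POST method.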

Breaking Changes and Known Issues

Enterprise PKS v1.4.0 has the following known issues:

Azure Default Security Group Is Not Automatically Assigned to Cluster VMs

Symptom

You experience issues when configuring a load balancer for a multi-master Kubernetes cluster or creating a service of type LoadBalancer. Additionally, in the Azure portal, the VM > Networking page does not display any inbound or outbound traffic rules for your cluster VMs.

Explanation

As part of configuring the Enterprise PKS tile for Azure, you enter Default Security Group in the Kubernetes Cloud Provider pane. When you create a Kubernetes cluster, Enterprise PKS automatically assigns this security group to each VM in the cluster. However, in Enterprise PKS v1.4, the automatic assignment may not occur.

As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.

Workaround

If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
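
For example, with the Azure CLI you might assign the security group to a NIC as follows; the resource group, NIC, and security group names are placeholders for the values in your deployment, and you can make the same change in the Azure portal instead:

    # Assign the default security group to one cluster VM NIC; repeat for each NIC.
    az network nic update \
      --resource-group MY-RESOURCE-GROUP \
      --name MY-CLUSTER-VM-NIC \
      --network-security-group MY-DEFAULT-SECURITY-GROUP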

Cluster Creation Fails When First AZ Runs out of Resources

Symptom

If the first availability zone (AZ) used by a plan with multiple AZs runs out of resources, cluster creation fails with an error like the following:

Error: CPI error 'Bosh::Clouds::CloudError' with message 'No valid placement found for requested memory: 4096'

Explanation

BOSH creates VMs for your Enterprise PKS deployment using a round-robin algorithm, creating the first VM in the first AZ that your plan uses. If the AZ runs out of resources, cluster creation fails because BOSH cannot create the cluster VM.

For example, if your three AZs each have enough resources for ten VMs, and you create two clusters with four worker VMs each, BOSH creates VMs in the following AZs:

            AZ1           AZ2           AZ3
Cluster 1   Worker VM 1   Worker VM 2   Worker VM 3
            Worker VM 4
Cluster 2   Worker VM 1   Worker VM 2   Worker VM 3
            Worker VM 4

In this scenario, AZ1 has twice as many VMs as AZ2 or AZ3.

Azure Worker Node Communication Fails after Upgrade

Symptom

Outbound communication from a worker node VM fails after an upgrade to Enterprise PKS v1.4.0.

Explanation

Enterprise PKS 1.4.0 uses Azure Availability Sets to improve the uptime of workloads and worker nodes in the event of Azure platform failures. Worker node VMs are distributed evenly across Availability Sets.

Azure Standard SKU Load Balancers are recommended for the Kubernetes control plane and Kubernetes ingress and egress. This load balancer type provides an IP address for outbound communication using SNAT.

During an upgrade, when BOSH rebuilds a given worker instance in an Availability Set, Azure can time out while re-attaching the worker node network interface to the back-end pool of the Standard SKU Load Balancer.

For more information, see Outbound connections in Azure in the Azure documentation.

Workaround

You can manually re-attach the worker instance to the back-end pool of the Azure Standard SKU Load Balancer in your Azure console.
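
Alternatively, the Azure CLI can re-attach a worker node NIC to the back-end pool; the resource names shown are placeholders for the values in your deployment, and ipconfig1 is the default IP configuration name that Azure assigns:

    # Re-add the worker node NIC IP configuration to the load balancer back-end pool.
    az network nic ip-config address-pool add \
      --resource-group MY-RESOURCE-GROUP \
      --nic-name MY-WORKER-VM-NIC \
      --ip-config-name ipconfig1 \
      --lb-name MY-STANDARD-LB \
      --address-pool MY-BACKEND-POOL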


Please send any feedback you have to pks-feedback@pivotal.io.
