Release Notes

This topic contains release notes for VMware Enterprise PKS v1.7.

Warning: Before installing or upgrading to Enterprise PKS v1.7, review the Breaking Changes below.

v1.7.2

Release Date: September 28, 2020

Product Snapshot

Release Details
Version v1.7.2
Release date September 28, 2020
Component Version
Kubernetes v1.16.14+vmware.1
CoreDNS v1.6.2+vmware.10
Docker v18.09.9
etcd v3.3.17
Metrics Server v0.3.6
NCP v2.5.1
On-Demand Broker v0.38.0
Percona XtraDB Cluster (PXC) v0.28.0
UAA v74.5.18
Compatibilities Versions
Ops Manager See VMware Tanzu Network
Xenial stemcells See VMware Tanzu Network
Windows stemcells v2019.15 and later
Backup and Restore SDK v1.17.0
vSphere See VMware Product Interoperability Matrices
CNS for vSphere v1.0.2
NSX-T v2.5.2, v2.5.1, v2.5.0, v2.4.3
Harbor v1.10.3

Upgrade Path

The supported upgrade paths to Enterprise PKS v1.7.2 are from Enterprise PKS v1.6.0 and later.

Features

This section describes new features and changes in Enterprise PKS v1.7.2.

  • Bumps Kubernetes to v1.16.14+vmware.1.
  • Bumps Percona XtraDB Cluster (PXC) to v0.28.0.
  • Bumps UAA to v74.5.18.
  • [Bug Fix] Enforces TLS v1.2 for TLS connections to pxc-mysql.
  • [Bug Fix] Adds syslog_forwarder to the Enterprise PKS database VM.

Breaking Changes

All breaking changes in Enterprise PKS v1.7.2 are also in PKS v1.7.0. See Breaking Changes in Enterprise PKS v1.7.0.

Known Issues

Except where noted, Enterprise PKS v1.7.2 has the same known issues as Enterprise PKS v1.7.0. See Known Issues in Enterprise PKS v1.7.0.

v1.7.1

Release Date: July 15, 2020

Product Snapshot

Release Details
Version v1.7.1
Release date July 15, 2020
Component Version
Kubernetes v1.16.12+vmware.1
CoreDNS v1.6.2
Docker v18.09.9
etcd v3.3.17
Metrics Server v0.3.6
NCP v2.5.1
On-Demand Broker v0.38.0
Percona XtraDB Cluster (PXC) v0.22.0
UAA v74.5.17
Compatibilities Versions
Ops Manager See VMware Tanzu Network
Xenial stemcells See VMware Tanzu Network
Windows stemcells v2019.15 and later
Backup and Restore SDK v1.17.0
vSphere See VMware Product Interoperability Matrices
CNS for vSphere v1.0.2
NSX-T v2.5.2, v2.5.1, v2.5.0, v2.4.3
Harbor v1.10.3

Upgrade Path

The supported upgrade paths to Enterprise PKS v1.7.1 are from Enterprise PKS v1.6.0 and later.

Features

This section describes new features and changes in Enterprise PKS v1.7.1.

  • Bumps Kubernetes to v1.16.12+vmware.1.
  • Bumps UAA to v74.5.17.
  • [Security Fix] All components use TLS v1.2 with strong ciphers for internal communications, including metrics-server.
  • [Security Fix] Kubernetes bump fixes the following CVEs:
    • CVE-2020-8558: node setting allows for neighboring hosts to bypass localhost boundary.
    • CVE-2020-8559: privilege escalation from compromised node to cluster.
  • [Bug Fix] Fixes pks tasks command failure with v1.6 upgrade tasks.

Breaking Changes

All breaking changes in Enterprise PKS v1.7.1 are also in PKS v1.7.0. See Breaking Changes in Enterprise PKS v1.7.0.

Known Issues

All known issues in Enterprise PKS v1.7.1 are also in Enterprise PKS v1.7.0. See Known Issues in Enterprise PKS v1.7.0.

v1.7.0

Release Date: April 2, 2020

Product Snapshot

Release Details
Version v1.7.0
Release date April 2, 2020
Component Version
Kubernetes v1.16.7
CoreDNS v1.6.2
Docker v18.09.9
etcd v3.3.17
Metrics Server v0.3.6
NCP v2.5.1
On-Demand Broker v0.38.0
Percona XtraDB Cluster (PXC) v0.22.0
UAA v74.5.10
Compatibilities Versions
Ops Manager See VMware Tanzu Network
Xenial stemcells See VMware Tanzu Network
Windows stemcells v2019.15 and later
Backup and Restore SDK v1.17.0
vSphere See VMware Product Interoperability Matrices
CNS for vSphere v1.0.2
NSX-T v2.5.1, v2.5.0, v2.4.3

Upgrade Path

The supported upgrade paths to Enterprise PKS v1.7.0 are from Enterprise PKS v1.6.0 and later.

Features

This section describes new features and changes in VMware Enterprise PKS v1.7.0.

PKS Control Plane and API

  • Introduces Kubernetes profiles that customize Kubernetes component settings for PKS-provisioned clusters. Kubernetes profiles can include validated configurations supported by the PKS team and unvalidated configurations for evaluation only. For more information, see Using Kubernetes Profiles.
  • Supports VMware Tanzu Service Mesh, built on VMware NSX (Beta).
  • Uses TLS v1.2+ with strong ciphers for all internal components except for the metrics-server (see Known Issues).
  • Supports resizing and updating Kubernetes clusters that have not been upgraded from the previously installed Enterprise PKS version.
  • Splits the PKS control plane into separate PKS API and PKS Database VMs.
  • Increases to 65,535 the maximum allowed number of open files for the Docker process on worker nodes.
  • Removes the Server SSL Cert AltName field from the Enterprise PKS tile > UAA > LDAP Server settings. Enterprise PKS no longer uses this field.

Kubernetes Control Plane

  • For security, PKS v1.7 does not include the Kubernetes Dashboard Web UI because it uses weak TLS ciphers. Clusters created with PKS v1.7 do not have Dashboard installed on creation.

    • You can install Dashboard on new clusters by following the Deploying the Dashboard UI instructions in the Kubernetes documentation.
    • Clusters upgraded from PKS v1.6 to v1.7 have leftover, cached dashboard objects.
      • These cached Dashboard objects may degrade over time as resize or upgrade operations rebuild node VMs.
      • To remove the leftover Dashboard objects from a cluster, run the following command:

        kubectl delete service,deployment,configmap,rolebinding,role,serviceaccount -l k8s-app=kubernetes-dashboard -n kube-system
  • The bump to Kubernetes v1.16 removes some API version definitions in favor of newer, more stable definitions. See Deprecated APIs Removed In 1.16: Here’s What You Need To Know in the Kubernetes Blog. Kubernetes recommends that you update the following to reference the newer API definitions (see the sketch after this list):

    • Custom integrations and controllers
    • Third-party tools such as ingress controllers and continuous delivery systems
  • VMware recommends that you selectively upgrade Kubernetes 1.15.x clusters before upgrading a fleet of clusters. See Upgrade a Single Cluster for how to upgrade clusters selectively.
  • Replaces the proxy plugin with the forward plugin for CoreDNS, as recommended in upstream Kubernetes. For more information, see CoreDNS-1.4.0 Release in the CoreDNS documentation.
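
As an example of what the v1.16 API removals require: workloads that manifests previously requested through extensions/v1beta1 or apps/v1beta2 must now go through apps/v1. A minimal check with standard kubectl, run against an upgraded cluster (no PKS-specific tooling is assumed):

# Deployments and DaemonSets are served only from apps/v1 in v1.16;
# these succeed if your cluster and tooling use the current API.
kubectl get deployments.v1.apps --all-namespaces
kubectl get daemonsets.v1.apps --all-namespaces

# This fails on a v1.16 cluster because the API version was removed:
kubectl get deployments.v1beta1.extensions --all-namespaces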

PKS Monitoring and Logging

  • Adds new metrics that Enterprise PKS can send to your monitoring service:

    • Node Exporter metrics from the PKS API VM and worker nodes in Prometheus format
    • Metrics from the Kubernetes controller manager and Kubernetes API server on master nodes

    For configuration instructions, see Telegraf in the Installing topic for your IaaS. A quick check of the new Node Exporter endpoint is sketched after this list.

  • Improves the in-cluster Node Exporter component, which is enabled in the Enterprise PKS tile > In-Cluster Monitoring.

  • Deprecates the billing database, which is scheduled for removal in Enterprise PKS v1.9. You can use Telemetry instead of the billing database. For more information about the billing database and Telemetry, see Viewing Usage Data and Telemetry. If this deprecation impacts you, contact pks-telemetry@groups.vmware.com.
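
As a quick check of the new Node Exporter metrics described above, you can scrape the Prometheus-format endpoint directly. A sketch, assuming shell access to the VM and the standard Node Exporter port 9100 (both are assumptions; your configuration may differ):

# Prometheus exposition format: one metric per line, with HELP/TYPE comments
curl -s http://localhost:9100/metrics | head -n 20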

Windows on PKS

  • Makes HTTP and HTTPS proxy available on Windows nodes, to support downloading Windows Docker images through a proxy.

    • You must configure a global proxy in the Enterprise PKS tile > Networking pane before you create any Windows workers that use the proxy.
  • When upgrading Windows clusters, Enterprise PKS upgrades Windows workers with the latest Windows stemcell.

  • Enables Windows pods to access DNS services from kube-dns (CoreDNS).

  • Windows pods can now consume the same pod CIDR as Linux pods.

  • Windows pods can now egress to applications or workloads outside of the pod’s IP subnet.

PKS with NSX-T Networking

  • Supports highly resilient workloads using stretched clusters. See Solution Guide for Enabling Highly Resilient Kubernetes Workloads Using vSAN Stretched Clusters.
  • Changes the default topology for newly-created clusters from Dedicated Tier-1 Router topology to Shared Tier-1 Router topology.

    • This change does not affect the topology of previously-existing clusters, which continue to run during and after upgrade to v1.7.
    • You cannot change the topology of an existing cluster from Dedicated Tier-1 to Shared Tier-1.
    • See Network Profile for Dedicated Tier-1 Topology for how to use a network profile to override the default and create new Dedicated Tier-1 Router clusters. A CLI sketch follows this list.
  • Supports backup and restore of the PKS control plane, Kubernetes clusters, and stateless workloads networked with vSphere NSX-T.
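
A minimal sketch of creating a Dedicated Tier-1 cluster with a network profile. The file name, profile name, cluster name, hostname, and plan are placeholders; see Network Profile for Dedicated Tier-1 Topology for the exact profile JSON schema:

# The profile's name is defined inside the JSON file; it is assumed here
# to be dedicated-t1-profile
pks create-network-profile dedicated-t1-profile.json
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small --network-profile dedicated-t1-profile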

Customer Experience Improvement Program (CEIP) and Telemetry

  • Adds proxy support for Telemetry. You can configure your Enterprise PKS proxy settings in the Enterprise PKS tile > Networking.

Component Updates

  • Bumps Kubernetes to v1.16.7.
  • Bumps NCP to v2.5.1.
  • Bumps UAA to v74.5.10.

Bug Fixes

Enterprise PKS v1.7.0 includes the following bug fixes:

  • Improves the behavior of the pks get-kubeconfig and pks get-credentials commands during cluster updates and upgrades. You can now run pks get-kubeconfig during single- and multi-master cluster updates, and pks get-credentials during multi-master cluster upgrades. Example invocations are sketched after this list.

  • When upgrading Windows clusters, Enterprise PKS upgrades the Windows stemcell and Kubernetes version on Windows worker nodes. See Windows on PKS for more information.
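
For the pks get-kubeconfig and pks get-credentials improvement above, typical invocations look like the following sketch. The cluster name, API hostname, and credentials are placeholders:

# Refresh the kubeconfig for a cluster, for example while an update is in progress
pks get-kubeconfig my-cluster -a api.pks.example.com -u USERNAME -p PASSWORD

# Fetch cluster credentials, for example during a multi-master cluster upgrade
pks get-credentials my-cluster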

Breaking Changes

Enterprise PKS v1.7.0 has the following breaking changes:

No PKS API Access with Certificate Hostname Mismatch

Prior versions of Enterprise PKS did not require the hostname in the PKS API certificate to match the PKS API hostname. In PKS v1.7, the PKS API certificate must contain a valid hostname for the PKS API. If your existing certificate does not, PKS API access breaks after the upgrade.

To fix this issue, create a new PKS API certificate with the hostname that you entered in the Enterprise PKS tile > PKS API pane > API Hostname (FQDN) field.
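
To check whether your current certificate is affected before you upgrade, you can inspect its subject and subject alternative names with openssl. A sketch; the hostname is a placeholder, and port 9021 is assumed to be the PKS API port:

# Compare the names in the serving certificate against the value in
# the PKS API pane > API Hostname (FQDN) field
echo | openssl s_client -connect api.pks.example.com:9021 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -E -A1 "Subject:|Subject Alternative Name"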

Removal of the Dashboard UI

Clusters created with PKS v1.7 do not have Dashboard installed on creation. For more information, see the Kubernetes Control Plane section.

Removal of API Version Definitions in Kubernetes v1.16

The bump to Kubernetes v1.16 removes some API version definitions in favor of newer, more stable definitions. This change may break some integrations, controllers, and pipelines. For more information, see the Kubernetes Control Plane section.

Enterprise PKS Database Migration

This release migrates the Enterprise PKS control plane database from the PKS API VM to a new database VM, PKS Database.

Enterprise PKS performs the database migration as part of your Enterprise PKS upgrade to v1.7. During the upgrade, the Enterprise PKS control plane and the PKS CLI will be unavailable. Your Kubernetes workloads remain accessible throughout the upgrade.

To upgrade to Enterprise PKS v1.7, follow the instructions in Upgrading Enterprise PKS (Flannel Networking) or Upgrading Enterprise PKS (NSX-T Networking).

These topics contain important preparation and upgrade configuration steps you must follow before and during your upgrade to v1.7.

Warning: If your Enterprise PKS environment was originally created using Enterprise PKS v1.2 or earlier, see Leftover Tables From PKS v1.2 Prevent Database Migration in the Known Issues below.

Known Issues

Enterprise PKS v1.7.0 has the following known issues:

Error: Could Not Execute "Apply-Changes" in Azure Environment

Symptom

After clicking Apply Changes on the PKS tile in an Azure environment, you experience an error '...could not execute "apply-changes"...' with either of the following descriptions:

  • {"errors":{"base":["undefined method `location' for nil:NilClass"]}}
  • FailedError.new("Resource Groups in region '#{location}' do not support Availability Zones"))

For example:

INFO | 2020-09-21 03:46:49 +0000 | Vessel::Workflows::Installer#run | Install product (apply changes)
2020/09/21 03:47:02 could not execute "apply-changes": installation failed to trigger: request failed: unexpected response from /api/v0/installations:
HTTP/1.1 500 Internal Server Error
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Sep 2020 17:51:50 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Referrer-Policy: strict-origin-when-cross-origin
Server: Ops Manager
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none
X-Request-Id: f5fc99c1-21a7-45c3-7f39
X-Runtime: 9.905591
X-Xss-Protection: 1; mode=block

44
{"errors":{"base":["undefined method `location' for nil:NilClass"]}}
0

Explanation

The Azure CPI endpoint used by Ops Manager has been changed and your installed version of Ops Manager is not compatible with the new endpoint.

Workaround

Run the following Ops Manager CLI command:

om --skip-ssl-validation --username USERNAME --password PASSWORD --target https://OPSMAN-API curl --silent --path /api/v0/staged/director/verifiers/install_time/IaasConfigurationVerifier -x PUT -d '{ "enabled": false }'

Where:

  • USERNAME is the account to use to run Ops Manager API commands.
  • PASSWORD is the password for the account.
  • OPSMAN-API is the IP address for the Ops Manager API.

For more information, see Error 'undefined method location' is received when running Apply Change on Azure in the VMware Tanzu Knowledge Base.

Leftover Tables From PKS v1.2 Prevent Database Migration

PKS v1.2 created pksdata and pkswatermark tables in the Telemetry database on the pivotal-container-service VM. These leftover tables interfere with database migration.

If your PKS installation was originally created using PKS v1.2 or earlier, then before upgrading to Enterprise PKS v1.7, complete the procedures in the Knowledge Base article For environments created before VMware Enterprise PKS 1.3, upgrading to PKS 1.7 fails during data migration when running clone-db errand.
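
To check whether your environment is affected before upgrading, you can look for the leftover tables. A sketch, assuming shell access to the pivotal-container-service VM and valid MySQL credentials; the database name and login details are assumptions that vary by environment:

# The pksdata and pkswatermark tables are the leftovers that block migration
mysql -u root -p -e "SHOW TABLES LIKE 'pks%';" telemetry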

pks tasks Command Fails on v1.6 Tasks in Upgrade Table

This issue is fixed in PKS v1.7.1.

Symptom

Running pks tasks produces 'Error: An error occurred in the PKS API when processing', after which pks-api.log shows java.lang.NullPointerException: null.

This error does not stop cluster upgrades from completing or otherwise affect the cluster upgrade process.

Explanation

Timestamps on task objects and the Date column for pks tasks output are new for v1.7. Existing v1.6 tasks do not have these values, and the pks tasks query does not take this into account.

Workaround

Monitor cluster upgrades through pks cluster and other commands.
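
For example, you can poll a cluster's status during an upgrade instead of relying on pks tasks; the cluster name is a placeholder:

# The "Last Action" and "Last Action State" fields show upgrade progress
watch -n 30 pks cluster my-cluster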

pks get-credentials Command Fails on N-1 Clusters

The pks get-credentials command does not work for clusters that have not been upgraded to the current version of the PKS control plane and are therefore still running the previously-installed version of PKS.

Ingress IP in Network Profile Is Duplicated to Clusters Created Later

Symptom

After you create a PKS cluster with a network profile that specifies a value for ingress_ip, clusters created subsequently, either with or without a network profile, incorrectly retain this same value for ingress_ip. Because this address is already allocated in the FIP pool, creating the new clusters either fails to create HTTP and HTTPS servers for the clusters, or creates them with conflicting addresses.

Explanation

Creating a cluster with a network profile that defines ingress_ip also sets the value for http_and_https_ingress_ip in the NSX-T Container Plugin (NCP) configuration, at /var/vcap/jobs/ncp/config/ncp.ini. When Enterprise PKS creates new clusters, it does not overwrite this value in the NCP configuration.

Workaround

See the Resolution section in the Knowledge Base article Same ingress IP used across multiple clusters in PKS 1.7.

Cluster Upgrade Fails Due to Timeout

Symptom

Running pks upgrade-cluster fails with an error that stopping the dockerd process timed out after 60 seconds.

Explanation

Changes to how Docker shuts down containers affect the interaction between stopping the kubelet and stopping Docker: a graceful shutdown must now begin with the kubelet.

Workaround

  1. For each worker node in the cluster, run monit unmonitor all and monit stop all.

  2. Run pks upgrade-cluster again.
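
If you manage node VMs with the BOSH CLI, you can run the monit commands above across all workers without logging in to each VM interactively. A sketch; the deployment name is a placeholder (PKS cluster deployments are named service-instance_ followed by the cluster's UUID):

# List deployments to find the one for your cluster, then run the monit
# commands on every instance in the worker instance group
bosh deployments
bosh -d service-instance_SERVICE-INSTANCE-ID ssh worker -c 'sudo /var/vcap/bosh/bin/monit unmonitor all && sudo /var/vcap/bosh/bin/monit stop all'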

Enterprise PKS v1.7 (Windows) on vSphere Not Compatible with Ops Manager v2.9

Enterprise PKS v1.7 installations with Windows worker-based Kubernetes clusters on vSphere (Flannel) are not compatible with Ops Manager v2.9. If you do not intend to deploy and run Windows worker-based Kubernetes clusters, you can use Ops Manager v2.9 with Enterprise PKS v1.7.

For Ops Manager compatibility information, see VMware Tanzu Network.

Pinging Windows Workers Does Not Work

Enterprise PKS-provisioned Windows workers inherit a Kubernetes limitation that prevents outbound ICMP communication from workers. As a result, pinging Windows workers does not work.

For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.

TMC Integration Not Supported on GCP

Enterprise PKS on Google Cloud Platform (GCP) does not support Tanzu Mission Control integration, which is configured in the Enterprise PKS tile > Tanzu Mission Control (Experimental) pane.

If you intend to run Enterprise PKS v1.7 on GCP, skip this pane when configuring the Enterprise PKS tile.

502 Bad Gateway After OIDC Login

Symptom

You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.

Explanation

A large response header has exceeded your NSX-T load balancer's maximum response header size. The default maximum of 10,240 characters is too small for large OIDC response headers.

Workaround

If you experience this issue, manually reconfigure your NSX-T request_header_size and response_header_size to 50,000 characters. For information about configuring NSX-T default header sizes, see OIDC Response Header Overflow in the Knowledge Base.
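
The header sizes are properties of the load balancer's HTTP application profile, which you can update through the NSX-T Management API. A sketch, where NSX-MANAGER, PROFILE-ID, and the credentials are placeholders; see the Knowledge Base article above for the exact procedure:

# Read the current HTTP application profile
curl -k -u 'admin:PASSWORD' https://NSX-MANAGER/api/v1/loadbalancer/application-profiles/PROFILE-ID

# Save the returned JSON, set "request_header_size" and "response_header_size"
# to 50000, then write the profile back
curl -k -u 'admin:PASSWORD' -X PUT -H 'Content-Type: application/json' -d @profile.json https://NSX-MANAGER/api/v1/loadbalancer/application-profiles/PROFILE-ID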

One Plan ID Longer than Other Plan IDs

Symptom

One of your plan IDs is one character longer than your other plan IDs.

Explanation

In Enterprise PKS, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.

Solution

You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.

If you require all plan IDs to have identical length, do not activate or use Plan 4.

NSX-T Pre-Check Errand Fails Due to Edge Node Configuration

Symptom

You have configured your NSX-T Edge Node VM as medium size, and the NSX-T Pre-Check Errand fails with the following error: "ERROR: NSX-T Precheck failed due to Edge Node … no of cpu cores is less than 8".

Explanation

The NSX-T Pre-Check Errand is erroneously returning the "cpu cores is less than 8" error.

Solution

You can safely configure your NSX-T Edge Node VMs as medium size and ignore the error.

Difficulty Changing Proxy for Windows Workers

You must configure a global proxy in the Enterprise PKS tile > Networking pane before you create any Windows workers that use the proxy.

You cannot change the proxy configuration for Windows workers in an existing cluster.

Metrics Server Uses Weak Ciphers

This issue is fixed in PKS v1.7.1.

The metrics-server component communicates over TLS v1.2 with weak ciphers. All other Enterprise PKS components use TLS v1.2 with strong ciphers.

Character Limitations in HTTP Proxy Password

For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters: & or ;.

Enterprise PKS Management Console 1.7.2

Release Date: September 28, 2020

Features

Other than support for Enterprise PKS v1.7.2, Enterprise PKS Management Console 1.7.2 has no new features.

Product Snapshot

Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.

Element Details
Version v1.7.2
Release date September 28, 2020
Installed Enterprise PKS version v1.7.2
Installed Ops Manager version v2.9.10
Installed Kubernetes version v1.16.12+vmware.1
Compatible NSX-T versions v2.5.1, v2.5.0, v2.4.3
Installed Harbor Registry version v1.10.3
Windows stemcells v2019.15 and later

Upgrade Path

The supported upgrade path to Enterprise PKS Management Console v1.7.2 is from Enterprise PKS v1.6.0 and later.

Known Issues

The Enterprise PKS Management Console v1.7.2 appliance and user interface have the same known issues as v1.7.0.

Enterprise PKS Management Console 1.7.1

Release Date: July 15, 2020

Features

Other than support for Enterprise PKS v1.7.1, Enterprise PKS Management Console 1.7.1 has no new features.

Product Snapshot

Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.

Element Details
Version v1.7.1
Release date July 15, 2020
Installed Enterprise PKS version v1.7.1
Installed Ops Manager version v2.9.6
Installed Kubernetes version v1.16.12+vmware.1
Compatible NSX-T versions v2.5.1, v2.5.0, v2.4.3
Installed Harbor Registry version v1.10.3
Windows stemcells v2019.15 and later

Upgrade Path

The supported upgrade path to Enterprise PKS Management Console v1.7.1 is from Enterprise PKS v1.6.0 and later.

Known Issues

The Enterprise PKS Management Console v1.7.1 appliance and user interface have the same known issues as v1.7.0.

Enterprise PKS Management Console 1.7.0

Release Date: April 16, 2020

Features

Enterprise PKS Management Console v1.7.0 updates include:

Product Snapshot

Note: Enterprise PKS Management Console provides an opinionated installation of Enterprise PKS. The supported versions may differ from or be more limited than what is generally supported by Enterprise PKS.

Element Details
Version v1.7.0
Release date April 16, 2020
Installed Enterprise PKS version v1.7.0
Installed Ops Manager version v2.8.5
Installed Kubernetes version v1.16.7
Compatible NSX-T versions v2.5.1, v2.5.0, v2.4.3
Installed Harbor Registry version v1.10.1
Windows stemcells v2019.15 and later

Upgrade Path

The supported upgrade path to Enterprise PKS Management Console v1.7.0 is from Enterprise PKS v1.6.0 and later.

Known Issues

The Enterprise PKS Management Console v1.7.0 appliance and user interface have the following known issues.

vRealize Log Insight Integration Does Not Support HTTPS Connections

Symptom

The Enterprise PKS Management Console integration to vRealize Log Insight does not support connections to the HTTPS port on the vRealize Log Insight server.

Workaround

  1. Use SSH to log in to the Enterprise PKS Management Console appliance VM.
  2. Open the file /lib/systemd/system/pks-loginsight.service in a text editor.
  3. Add the following to the file:

    -e LOG_SERVER_ENABLE_SSL_VERIFY=false \
    -e LOG_SERVER_USE_SSL=true \
    

    The resulting file should look like the following example:

    ExecStart=/bin/docker run --privileged --restart=always --network=pks \
    -v /var/log/journal:/var/log/journal \
    --name=pks-loginsight \
    -e TYPE=gear2-vm \
    -e LOG_SERVER_HOST=${LOGINSIGHT_HOST} \
    -e LOG_SERVER_PORT=${LOGINSIGHT_PORT} \
    -e LOG_SERVER_ENABLE_SSL_VERIFY=false \
    -e LOG_SERVER_USE_SSL=true \
    -e LOG_SERVER_AGENT_ID=${LOGINSIGHT_ID} \
    pksoctopus/vrli-journald:v07092019
    
  4. Save the file.

  5. Run systemctl daemon-reload.

  6. To restart the vRealize Log Insight service, run systemctl restart pks-loginsight.service.

Enterprise PKS Management Console can now send logs to the HTTPS port on the vRealize Log Insight server.
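
To confirm the service restarted cleanly with the new flags, standard systemd checks suffice:

# Both commands are standard systemd tooling on the appliance VM
systemctl status pks-loginsight.service --no-pager
journalctl -u pks-loginsight.service --since "5 minutes ago" --no-pager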

Base64 Encoded File Arguments Are Not Decoded in Kubernetes Profiles

Symptom

Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the profile, it does not decode these arguments.

Workaround

Decode the encoded values manually. For example, run echo "$content" | base64 --decode, where $content is the encoded file argument copied from the profile.

Cannot Specify Ops Manager FQDN During Upgrade

Symptom

This release allows you to specify an FQDN for the Ops Manager VM during deployment of Enterprise PKS, but only for new deployments. You cannot specify an FQDN during an upgrade.

Workaround

None

Network Profiles Not Immediately Selectable

Symptom

If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.

Workaround

Log out of the management console and log back in again.

Real-Time IP Information Not Displayed for Network Profiles

Symptom

On the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.

Workaround

None


Please send any feedback you have to pks-feedback@pivotal.io.