VMware Tanzu Application Service for VMs v2.10 Release Notes

This topic contains release notes for VMware Tanzu Application Service for VMs (TAS for VMs) v2.10.

For the feature highlights of this release, read the blog post VMware Tanzu Application Service 2.10 Adds New CLI, Eases Upgrades with More Flexible Control Plane or see New Features in TAS for VMs v2.10.

Ops Manager is certified by the Cloud Foundry Foundation for 2020.

Read more about the certified provider program and the requirements of providers.


Releases

2.10.4

Release Date: 09/21/2020

  • [Security Fix] Bump Usage Service Ruby version to 2.6.6 to address CVE-2020-15169, CVE-2020-10933, and CVE-2020-10663
  • [Feature Improvement] Secure scraping available in Metric Registrar
  • Bump ubuntu-xenial stemcell to version 621.84
  • Bump cf-autoscaling to version 233
  • Bump cflinuxfs3 to version 0.204.0
  • Bump dotnet-core-offline-buildpack to version 2.3.14
  • Bump go-offline-buildpack to version 1.9.17
  • Bump metric-registrar to version 1.2.1
  • Bump push-usage-service-release to version 673.0.13
  • Bump python-offline-buildpack to version 1.7.20
  • Bump routing to version 0.207.0
  • Bump staticfile-offline-buildpack to version 1.5.10

Component                       Version
ubuntu-xenial stemcell          621.84
backup-and-restore-sdk          1.18.0
binary-offline-buildpack        1.0.36
bosh-dns-aliases                0.0.3
bosh-system-metrics-forwarder   0.0.19
bpm                             1.1.7
capi                            1.95.2
cf-autoscaling                  233
cf-cli                          1.28.0
cf-networking                   2.33.0
cflinuxfs3                      0.204.0
credhub                         2.6.1
diego                           2.48.0
dotnet-core-offline-buildpack   2.3.14
garden-runc                     1.19.16
go-offline-buildpack            1.9.17
haproxy                         10.0.0
istio                           1.3.0
java-offline-buildpack          4.32.1
log-cache                       2.8.0
loggregator-agent               6.0.2
loggregator                     106.3.11
mapfs                           1.2.4
metric-registrar                1.2.1
metrics-discovery               3.0.0
mysql-monitoring                9.12.0
nats                            34
nfs-volume                      7.0.4
nginx-offline-buildpack         1.1.14
nodejs-offline-buildpack        1.7.26
notifications-ui                40
notifications                   61
php-offline-buildpack           4.4.20
push-apps-manager-release       672.0.13
push-usage-service-release      673.0.13
pxc                             0.28.0
python-offline-buildpack        1.7.20
r-offline-buildpack             1.1.7
routing                         0.207.0
ruby-offline-buildpack          1.8.23
silk                            2.33.0
smb-volume                      3.0.1
smoke-tests                     2.2.0
staticfile-offline-buildpack    1.5.10
statsd-injector                 1.11.15
syslog                          11.6.1
system-metrics-scraper          2.0.13
uaa                             74.5.18

2.10.3

Release Date: 09/09/2020

  • [Security Fix] Fix for CVE-2020-5420: Improve Gorouter’s handling of invalid HTTP responses
  • [Feature Improvement] Gorouter aliases /healthz to /health in order to prevent downtime during upgrades
  • [Bug Fix] Improve Log Cache Syslog Ingestion Performance
  • Bump ubuntu-xenial stemcell to version 621.82
  • Bump cf-networking to version 2.33.0
  • Bump diego to version 2.48.0
  • Bump log-cache to version 2.8.0
  • Bump nfs-volume to version 7.0.4
  • Bump nginx-offline-buildpack to version 1.1.14
  • Bump nodejs-offline-buildpack to version 1.7.26
  • Bump php-offline-buildpack to version 4.4.20
  • Bump push-apps-manager-release to version 672.0.13
  • Bump routing to version 0.206.0
  • Bump silk to version 2.33.0

Component                       Version
ubuntu-xenial stemcell          621.82
backup-and-restore-sdk          1.18.0
binary-offline-buildpack        1.0.36
bosh-dns-aliases                0.0.3
bosh-system-metrics-forwarder   0.0.19
bpm                             1.1.7
capi                            1.95.2
cf-autoscaling                  232
cf-cli                          1.28.0
cf-networking                   2.33.0
cflinuxfs3                      0.203.0
credhub                         2.6.1
diego                           2.48.0
dotnet-core-offline-buildpack   2.3.13
garden-runc                     1.19.16
go-offline-buildpack            1.9.16
haproxy                         10.0.0
istio                           1.3.0
java-offline-buildpack          4.32.1
log-cache                       2.8.0
loggregator-agent               6.0.2
loggregator                     106.3.11
mapfs                           1.2.4
metric-registrar                1.1.1
metrics-discovery               3.0.0
mysql-monitoring                9.12.0
nats                            34
nfs-volume                      7.0.4
nginx-offline-buildpack         1.1.14
nodejs-offline-buildpack        1.7.26
notifications-ui                40
notifications                   61
php-offline-buildpack           4.4.20
push-apps-manager-release       672.0.13
push-usage-service-release      673.0.11
pxc                             0.28.0
python-offline-buildpack        1.7.18
r-offline-buildpack             1.1.7
routing                         0.206.0
ruby-offline-buildpack          1.8.23
silk                            2.33.0
smb-volume                      3.0.1
smoke-tests                     2.2.0
staticfile-offline-buildpack    1.5.9
statsd-injector                 1.11.15
syslog                          11.6.1
system-metrics-scraper          2.0.13
uaa                             74.5.18

2.10.2

Release Date: 08/24/2020

  • [Security Fix] Fix for CVE-2020-5416: Improve Gorouter’s websocket error handling
  • [Bug Fix] loggr-syslog-agent - Fix server alternative name
  • [Bug Fix] Fix memory leak in RLP gateway
  • [Bug Fix] Return 502 TLS Handshake error for an unresponsive backend
  • [Bug Fix] Fix Usage Service for inactive foundations
  • [Bug Fix] Bump garden-runc to v1.19.16
  • Bump ubuntu-xenial stemcell to version 621.78
  • Bump cflinuxfs3 to version 0.203.0
  • Bump garden-runc to version 1.19.16
  • Bump go-offline-buildpack to version 1.9.16
  • Bump java-offline-buildpack to version 4.32.1
  • Bump loggregator to version 106.3.11
  • Bump push-usage-service-release to version 673.0.11
  • Bump python-offline-buildpack to version 1.7.18
  • Bump routing to version 0.205.0
  • Bump ruby-offline-buildpack to version 1.8.23

Component                       Version
ubuntu-xenial stemcell          621.78
backup-and-restore-sdk          1.18.0
binary-offline-buildpack        1.0.36
bosh-dns-aliases                0.0.3
bosh-system-metrics-forwarder   0.0.19
bpm                             1.1.7
capi                            1.95.2
cf-autoscaling                  232
cf-cli                          1.28.0
cf-networking                   2.31.0
cflinuxfs3                      0.203.0
credhub                         2.6.1
diego                           2.47.0
dotnet-core-offline-buildpack   2.3.13
garden-runc                     1.19.16
go-offline-buildpack            1.9.16
haproxy                         10.0.0
istio                           1.3.0
java-offline-buildpack          4.32.1
log-cache                       2.7.2
loggregator-agent               6.0.2
loggregator                     106.3.11
mapfs                           1.2.4
metric-registrar                1.1.1
metrics-discovery               3.0.0
mysql-monitoring                9.12.0
nats                            34
nfs-volume                      7.0.3
nginx-offline-buildpack         1.1.12
nodejs-offline-buildpack        1.7.25
notifications-ui                40
notifications                   61
php-offline-buildpack           4.4.19
push-apps-manager-release       672.0.12
push-usage-service-release      673.0.11
pxc                             0.28.0
python-offline-buildpack        1.7.18
r-offline-buildpack             1.1.7
routing                         0.205.0
ruby-offline-buildpack          1.8.23
silk                            2.31.0
smb-volume                      3.0.1
smoke-tests                     2.2.0
staticfile-offline-buildpack    1.5.9
statsd-injector                 1.11.15
syslog                          11.6.1
system-metrics-scraper          2.0.13
uaa                             74.5.18

2.10.1

Release Date: 08/07/2020

  • [Security Fix] Notifications-ui removes UAA client secret from logs during installation
  • [Feature Improvement] Expose GCS blobstore storage account timeout values
  • [Feature Improvement] Upgrade Percona-XtraDB-Cluster to version 5.7.30-31.43
  • [Bug Fix] Fix issue where requests to internal routes could fail due to incorrect case-sensitivity in DNS lookup in the service discovery controller.
  • [Bug Fix] Apps Manager accounts for App Metrics’ duplicate counts of HTTP requests, HTTP latency, and HTTP errors on App page Overview tab graphs
  • [Bug Fix] System Metrics Scraper/Prom Scraper: Fixes a bug that caused excess log volume, and increases the scrape interval to reduce metric volume
  • Bump ubuntu-xenial stemcell to version 621.77
  • Bump cf-cli to version 1.28.0
  • Bump cf-networking to version 2.31.0
  • Bump cflinuxfs3 to version 0.202.0
  • Bump dotnet-core-offline-buildpack to version 2.3.13
  • Bump garden-runc to version 1.19.14
  • Bump go-offline-buildpack to version 1.9.15
  • Bump nginx-offline-buildpack to version 1.1.12
  • Bump nodejs-offline-buildpack to version 1.7.25
  • Bump notifications-ui to version 40
  • Bump php-offline-buildpack to version 4.4.19
  • Bump push-apps-manager-release to version 672.0.12
  • Bump pxc to version 0.28.0
  • Bump python-offline-buildpack to version 1.7.17
  • Bump ruby-offline-buildpack to version 1.8.22
  • Bump silk to version 2.31.0
  • Bump system-metrics-scraper to version 2.0.13

Component                       Version
ubuntu-xenial stemcell          621.77
backup-and-restore-sdk          1.18.0
binary-offline-buildpack        1.0.36
bosh-dns-aliases                0.0.3
bosh-system-metrics-forwarder   0.0.19
bpm                             1.1.7
capi                            1.95.2
cf-autoscaling                  232
cf-cli                          1.28.0
cf-networking                   2.31.0
cflinuxfs3                      0.202.0
credhub                         2.6.1
diego                           2.47.0
dotnet-core-offline-buildpack   2.3.13
garden-runc                     1.19.14
go-offline-buildpack            1.9.15
haproxy                         10.0.0
istio                           1.3.0
java-offline-buildpack          4.31.1
log-cache                       2.7.2
loggregator-agent               6.0.2
loggregator                     106.3.10
mapfs                           1.2.4
metric-registrar                1.1.1
metrics-discovery               3.0.0
mysql-monitoring                9.12.0
nats                            34
nfs-volume                      7.0.3
nginx-offline-buildpack         1.1.12
nodejs-offline-buildpack        1.7.25
notifications-ui                40
notifications                   61
php-offline-buildpack           4.4.19
push-apps-manager-release       672.0.12
push-usage-service-release      673.0.10
pxc                             0.28.0
python-offline-buildpack        1.7.17
r-offline-buildpack             1.1.7
routing                         0.203.0
ruby-offline-buildpack          1.8.22
silk                            2.31.0
smb-volume                      3.0.1
smoke-tests                     2.2.0
staticfile-offline-buildpack    1.5.9
statsd-injector                 1.11.15
syslog                          11.6.1
system-metrics-scraper          2.0.13
uaa                             74.5.18

2.10.0

Release Date: 07/31/2020

Component                       Version
ubuntu-xenial stemcell          621.76
backup-and-restore-sdk          1.18.0
binary-offline-buildpack        1.0.36
bosh-dns-aliases                0.0.3
bosh-system-metrics-forwarder   0.0.19
bpm                             1.1.7
capi                            1.95.2
cf-autoscaling                  232
cf-cli                          1.27.0
cf-networking                   2.30.0
cflinuxfs3                      0.198.0
credhub                         2.6.1
diego                           2.47.0
dotnet-core-offline-buildpack   2.3.12
garden-runc                     1.19.11
go-offline-buildpack            1.9.14
haproxy                         10.0.0
istio                           1.3.0
java-offline-buildpack          4.31.1
log-cache                       2.7.2
loggregator-agent               6.0.2
loggregator                     106.3.10
mapfs                           1.2.4
metric-registrar                1.1.1
metrics-discovery               3.0.0
mysql-monitoring                9.12.0
nats                            34
nfs-volume                      7.0.3
nginx-offline-buildpack         1.1.11
nodejs-offline-buildpack        1.7.24
notifications-ui                37
notifications                   61
php-offline-buildpack           4.4.18
push-apps-manager-release       672.0.11
push-usage-service-release      673.0.10
pxc                             0.25.0
python-offline-buildpack        1.7.16
r-offline-buildpack             1.1.7
routing                         0.203.0
ruby-offline-buildpack          1.8.21
silk                            2.30.0
smb-volume                      3.0.1
smoke-tests                     2.2.0
staticfile-offline-buildpack    1.5.9
statsd-injector                 1.11.15
syslog                          11.6.1
system-metrics-scraper          2.0.12
uaa                             74.5.18

How to Upgrade

To upgrade to TAS for VMs v2.10, see Upgrading Ops Manager.

When upgrading to TAS for VMs v2.10, be aware of the following upgrade considerations:

  • If you are running a version of TAS for VMs earlier than v2.9, you must first upgrade to TAS for VMs v2.9 before you can upgrade to TAS for VMs v2.10.

  • Some partner service tiles may be incompatible with Ops Manager v2.10. VMware is working with partners to ensure their tiles are updated to work with the latest versions of Ops Manager.

    For information about which partner service releases are currently compatible with Ops Manager v2.10, review the partner service release documentation at https://docs.pivotal.io or contact the partner organization that produces the tile.

New Features in TAS for VMs v2.10

TAS for VMs v2.10 includes the following major features:

Aggregate Syslog Drains Contain Logs Only

When you configure an aggregate syslog drain in TAS for VMs v2.10, by default you receive logs only. You do not also receive metrics. By not including metrics alongside logs, your syslog drain uses fewer resources and reduces network traffic between TAS for VMs components and your external logging service.

If you want the aggregate drain to send metrics along with logs, you can modify your drain URLs.

To continue to see metrics in your drains after upgrading to TAS for VMs v2.10:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the VMware Tanzu Application Service for VMs tile in the Installation Dashboard.
  3. Select System Logging.
  4. For Address, enter the hostname or IP address of the syslog server and append ?include-metrics-deprecated=true. For example, https://syslog-server.com:123?include-metrics-deprecated=true.
  5. Click Save.

For more information about configuring aggregate syslog drains, see Configure System Logging in Configuring TAS for VMs.
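
The include-metrics-deprecated flag is only a query parameter on the drain address, so it can also be appended programmatically, for example when drain addresses are templated by automation scripts. The following Python sketch is illustrative only and uses the placeholder hostname from the example above:

    # Append include-metrics-deprecated=true to a drain address while
    # preserving any query string that is already present.
    from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

    def add_metrics_flag(drain_address):
        parts = urlsplit(drain_address)
        query = dict(parse_qsl(parts.query))
        query["include-metrics-deprecated"] = "true"
        return urlunsplit(parts._replace(query=urlencode(query)))

    print(add_metrics_flag("https://syslog-server.com:123"))
    # https://syslog-server.com:123?include-metrics-deprecated=true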

Send Only App Metrics to Firehose

You can choose to prevent the Loggregator Firehose from emitting app logs but still allow the Firehose to emit app metrics. Disabling logs in the Firehose helps reduce the load on TAS for VMs by allowing you to scale down Doppler and Traffic Controller VMs.

To configure the Firehose to receive only app metrics, you must select the Disable logs in Firehose, Log Cache syslog ingestion, Enable V1 Firehose, and Enable V2 Firehose checkboxes in the System Logging pane of the TAS for VMs tile. You must also configure Aggregate log and metric drain destinations in the System Logging pane of the TAS for VMs tile. For more information, see Configure System Logging in Configuring TAS for VMs.

Optionally Use Human-Readable Timestamps for Component Logs

TAS for VMs v2.10 introduces RFC3339 log format support for several TAS for VMs components. You can configure these components to produce logs with human-readable RFC3339 timestamps by using the Timestamp format for component logs option in the TAS for VMs tile. Logs with human-readable timestamps are often easier to debug.

RFC3339-formatted timestamps follow the RFC3339 spec, include up to nine digits of sub-second precision where possible, and are in UTC. For example:

  • 2019-11-21T22:16:18.750673404Z
  • 2019-11-21T22:16:18.750000000Z
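
Because Python's datetime type stores only microsecond precision, scripts that parse these timestamps must truncate the nanosecond field. A minimal sketch, for illustration only:

    # Parse an RFC3339 component-log timestamp, truncating nanoseconds
    # to microseconds.
    from datetime import datetime, timezone

    def parse_rfc3339(timestamp):
        date_part, _, frac = timestamp.rstrip("Z").partition(".")
        base = datetime.strptime(date_part, "%Y-%m-%dT%H:%M:%S")
        micros = int(frac[:6].ljust(6, "0")) if frac else 0
        return base.replace(microsecond=micros, tzinfo=timezone.utc)

    print(parse_rfc3339("2019-11-21T22:16:18.750673404Z"))
    # 2019-11-21 22:16:18.750673+00:00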


For more information about configuring the Timestamp format for component logs field, see System Logging in Configuring TAS for VMs.

In TAS for VMs v2.10.0, if you select the Converge to human-readable RFC3339 format option under Timestamp format for component logs, then the following components and related jobs use RFC3339 timestamps: 

Component          Jobs
routing            gorouter
silk               iptables logger, silk-daemon
diego              auctioneer, bbs, file_server, locket, rep, route_emitter, ssh_proxy
garden-runc        garden
pxc                pxc-mysql, proxy, galera-agent, gra-log-purger
mysql-monitoring   mysql-metrics, replication-canary

Components not listed in the table above either do not support RFC3339 timestamps in TAS for VMs v2.10.0 or were already using the RFC3339 timestamp format. Selecting Converge to human-readable RFC3339 format ensures that any additional components that add support for RFC3339 timestamps in later releases of TAS for VMs v2.10 are automatically configured to use RFC3339 timestamps after you upgrade.

To confirm which TAS for VMs components use RFC3339 timestamps:

  1. Go to the debug/files endpoint at https://OPS-MANAGER-FQDN/debug/files, where OPS-MANAGER-FQDN is the fully-qualified domain name of your Ops Manager instance.

  2. For each component, confirm that the logging.format.timestamp property is set to rfc3339.
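
You can also check this from a script. The following Python sketch is illustrative only: OPS_MANAGER_FQDN and OM_TOKEN are placeholder environment variables, and how you authenticate, and whether you must skip TLS verification, depends on your Ops Manager configuration:

    # Scan the Ops Manager debug/files output for logging.format.timestamp.
    # OPS_MANAGER_FQDN and OM_TOKEN are placeholders, not names defined by TAS.
    import os
    import ssl
    import urllib.request

    url = "https://" + os.environ["OPS_MANAGER_FQDN"] + "/debug/files"
    request = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + os.environ["OM_TOKEN"]}
    )
    context = ssl._create_unverified_context()  # only if Ops Manager uses a self-signed certificate
    with urllib.request.urlopen(request, context=context) as response:
        body = response.read().decode("utf-8", errors="replace")

    for line in body.splitlines():
        if "logging.format.timestamp" in line:
            print(line.strip())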

Breaking Change: The Timestamp format for component logs feature replaces the Format of timestamps in Diego logs feature in the App Containers pane of the TAS for VMs tile. However, when you upgrade to TAS for VMs v2.10, the option that was selected under Format of timestamps in Diego logs in your previous deployment is applied to Timestamp format for component logs. For more information, see Timestamp Format for Component Logs Replaces Timestamp Format for Diego Logs below.

Configurable Sticky Session Cookie Names

You can supply additional session cookie names for the Gorouter to use when handling sticky sessions. The Gorouter uses these cookies to support session affinity, also known as sticky sessions. For more information, see Session Affinity in HTTP Routing.

By default, the Gorouter uses JSESSIONID as the session cookie name. Some apps require a different name. For example, Spring WebFlux requires SESSION as the session cookie name.

To supply cookie names, see Configure Networking in Configuring TAS for VMs.
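
To illustrate the app side of this feature, the following minimal Python app names its session cookie SESSION instead of JSESSIONID. If SESSION is added to the Gorouter's sticky session cookie names, responses like this one cause subsequent requests that carry the cookie to be routed back to the same app instance. This sketch is illustrative only and is not part of TAS for VMs:

    # A minimal app that sets a SESSION cookie for session affinity.
    # PORT and CF_INSTANCE_INDEX are standard Cloud Foundry environment variables.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Set-Cookie", "SESSION=abc123; Path=/; HttpOnly")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            instance = os.environ.get("CF_INSTANCE_INDEX", "?")
            self.wfile.write(("hello from instance " + instance).encode())

    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()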

Improvements to App Autoscaler

TAS for VMs v2.10 includes the following improvements to App Autoscaler:

  • App Autoscaler no longer returns an error when you set an executes_at time that is in the past. This lets you reuse scheduled limit changes through the Scheduler API: App Autoscaler calculates future execution dates from the past date, as shown in the sketch after this list.
  • You can use rules based on the HTTP throughput metric when the number of requests is high. For information about the metric, see Default Metrics for Scaling Rules.
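
The scheduling behavior described above can be pictured as projecting the past executes_at time forward by the schedule's recurrence until it lands in the future. The following Python sketch illustrates that idea only; it is not App Autoscaler code, and the weekly recurrence is an assumption made for the example:

    # Project a past executes_at forward until it is in the future.
    from datetime import datetime, timedelta, timezone

    def next_execution(executes_at, recurrence=timedelta(weeks=1), now=None):
        now = now or datetime.now(timezone.utc)
        while executes_at <= now:
            executes_at += recurrence
        return executes_at

    past = datetime(2020, 7, 1, 9, 0, tzinfo=timezone.utc)
    print(next_execution(past, now=datetime(2020, 7, 20, tzinfo=timezone.utc)))
    # 2020-07-22 09:00:00+00:00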

TAS for VMs Is Compatible with cf CLI v7

You can use cf CLI v7 with TAS for VMs v2.10.

For more information about the GA release of cf CLI v7, see Cloud Foundry Further Simplifies Modern App Development: An Inside Look at the New cf CLI v7.

Breaking Changes

TAS for VMs v2.10 includes the following breaking changes:

Timestamp Format for Component Logs Replaces Timestamp Format for Diego Logs

The Format of timestamps in Diego logs feature is removed from the App Containers pane of the TAS for VMs tile. It is replaced by Timestamp format for component logs in the System Logging pane of the TAS for VMs tile.

TAS for VMs v2.10 automatically configures Timestamp format for component logs based on how you configured Format of timestamps in Diego logs in TAS for VMs v2.9:

  • If the RFC3339 timestamps option for Format of timestamps in Diego logs is selected before you upgrade to TAS for VMs v2.10, then the Converge to human-readable RFC3339 format option for Timestamp format for component logs is selected in TAS for VMs v2.10 by default. Converge to human-readable RFC3339 format configures TAS for VMs to use RFC3339 timestamps in the logs of several TAS for VMs components.

  • If the Seconds since the Unix epoch option for Format of timestamps in Diego logs is selected before you upgrade to TAS for VMs v2.10, then the Maintain previous format option for Timestamp format for component logs is selected in TAS for VMs v2.10 by default.

To avoid breaking changes associated with this update, you must:

  • Update any automation scripts that reference the removed Format of timestamps in Diego logs feature.

  • If RFC3339 timestamps is selected for Format of timestamps in Diego logs before you upgrade to TAS for VMs v2.10, update any external monitoring configuration to account for RFC3339 timestamps in the logs for several TAS for VMs components. For a list of TAS for VMs components that support RFC3339 timestamps in TAS for VMs v2.10, see Optionally Use Human-Readable Timestamps for Component Logs above.

For more information about configuring RFC3339 timestamps for component logs, see System Logging in Configuring TAS for VMs.

Aggregate Syslog Drains Contain Logs Only

In TAS for VMs v2.10, aggregate syslog drains contain only logs by default, and do not contain metrics. If you rely on metrics sent through aggregate syslog drains, you must add ?include-metrics-deprecated=true to your aggregate drain URLs to continue to receive metrics in the drains.

For more information, see Aggregate Syslog Drains Contain Logs Only in New Features in TAS for VMs v2.10 above.

Known Issues

TAS for VMs v2.10 includes the following known issues:

Errors Viewing App Logs after Disabling V1 Firehose

If you disable the V1 Firehose and you are using a version of the cf CLI earlier than v6.50, you may encounter errors when you push an app or view the logs for an app. The logs exist but are not visible from the cf CLI.

Running the following commands results in errors:

  • cf logs: Timeout trying to connect to NOAA
  • cf push: timeout connecting to log server, no log will be shown

Despite the log-related errors, cf push works correctly and pushes the app.

To avoid encountering errors after disabling the Loggregator V1 Firehose, upgrade to cf CLI v6.50 or later.