Pivotal Elastic Runtime v1.10 Release Notes
Elastic Runtime v1.10.0 and later versions consist of the following component versions:
* Components marked with an asterisk have been patched to resolve security vulnerabilities or fix component behavior.
The procedure for upgrading to Pivotal Cloud Foundry Elastic Runtime v1.10 is documented in the Upgrading Pivotal Cloud Foundry topic.
When upgrading to v1.10, be aware of the following upgrade considerations:
- You must first upgrade to a version of Elastic Runtime v1.9.x before you can upgrade to v1.10.
- If you are currently using any of the following services in your PCF deployment, you must upgrade and configure their tiles before upgrading to PCF v1.10:
  - RabbitMQ for PCF: Upgrade to RabbitMQ for PCF v1.7.13 or later, and deselect the Use non-secure communication for metrics checkbox. For more information, see the RabbitMQ for PCF documentation.
  - Redis for PCF: Upgrade to Redis for PCF v1.7.3 or later, and deselect the Use non-secure communication for metrics checkbox. For more information, see the Redis for PCF documentation.
- Some partner service tiles may be incompatible with PCF v1.10. Pivotal is working with partners to ensure their tiles are being updated to work with the latest versions of PCF.
For information about which partner service releases are currently compatible with PCF v1.10, review the appropriate partners services release documentation at http://docs.pivotal.io, or contact the partner organization that produces the tile.
The Advanced Features section of the Elastic Runtime tile includes new functionality that may have certain constraints.
Although these features are fully supported, Pivotal recommends caution when using them in production.
This section describes new features of the release.
The Elastic Runtime tile offers a Container-to-Container Networking feature that puts applications in their own overlay network. This feature is currently in beta.
For more information, see the Container-to-Container Networking topic.
This release provides general support for volume services inside of application containers.
Additionally, the Elastic Runtime ships with an NFS Volume Service Broker as a beta feature.
This release removes the etcd Proxy VM, ensuring all communication to the etcd cluster happens over a secure connection.
The CF API can enforce rate limits for users and clients.
Limits can be set for authenticated and unauthenticated clients and expire over a rolling hour-long window.
You can enable these API rate limits as a beta feature in the Advanced Features section of the Elastic Runtime tile.
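The rolling-window behavior described above can be sketched as follows. This is a hypothetical illustration only — the class name, limits, and bookkeeping are invented for the example and do not reflect the Cloud Controller's actual implementation:

```python
import time
from collections import deque

class RollingWindowLimiter:
    """Illustrative sketch: allow at most `max_requests` per client
    within any rolling window of `window_seconds`."""

    def __init__(self, max_requests, window_seconds=3600):
        self.max_requests = max_requests
        self.window = window_seconds
        self.requests = {}  # client id -> deque of request timestamps

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        q = self.requests.setdefault(client_id, deque())
        # Drop timestamps that have aged out of the rolling window,
        # so old requests "expire" rather than resetting on a fixed boundary.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False

# Authenticated clients would typically get a higher limit than
# unauthenticated ones (the numbers here are made up).
authenticated = RollingWindowLimiter(max_requests=2000)
unauthenticated = RollingWindowLimiter(max_requests=100)
```

Because the window rolls rather than resetting on the hour, a client that exhausts its quota regains capacity gradually as its oldest requests age out.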
For more information, see the Deploying Elastic Runtime topic for the IaaS where you are deploying PCF. For example, if you are deploying PCF on Google Cloud Platform (GCP), see the Deploying Elastic Runtime on GCP topic.
All Diego VMs now include an operator toolkit, known as cfdot, for interacting with your Diego components. For more details about cfdot, see the cfdot documentation.
The Cloud Controller Clock now supports running multiple instances of the VM in parallel. Operators can scale the instance count to fit their needs; for example, an operator might set the count to 2 or 3 so that a clock runs in each availability zone.
For more information, see High Availability in Cloud Foundry.
The routers can now be configured to maintain a number of idle keep-alive connections to back ends.
Reusing connections allows for HTTP performance improvements as the underlying connection does not need to be re-established on every request.
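The benefit of connection reuse can be demonstrated with a small, self-contained sketch using Python's standard library. The local server and connection counter below are illustrative only and are unrelated to the Gorouter's actual implementation:

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections open by default
    connections = 0                 # counts TCP connections accepted

    def setup(self):
        Handler.connections += 1    # one increment per TCP connection, not per request
        super().setup()

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass                        # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
for _ in range(3):                  # three requests over one TCP connection
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
server.shutdown()

assert Handler.connections == 1     # all three requests reused the same connection
```

With HTTP/1.0-style behavior, each request would have opened a fresh TCP connection; keep-alive avoids that per-request handshake cost.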
For more information, see the Router Idle Keepalive Connections and the Deploying Elastic Runtime topic for the IaaS where you are deploying PCF. For example, if you are deploying PCF on Google Cloud Platform (GCP), see the Deploying Elastic Runtime on GCP topic.
Previously, application SSH access was enabled globally as a feature in Cloud Foundry. In addition to the global setting, operators can now choose to disable SSH access for new applications; when they do, developers must enable SSH access on a per-application basis.
For more information, see Configuring SSH Access for PCF.
The Elastic Runtime now supports using Azure Storage as a backend for the platform file storage.
For more information, see the Deploying Elastic Runtime on Azure topic.
The internal MySQL database cluster now includes healthcheck thresholds.
These thresholds can be configured to match your MySQL load balancer thresholds so that failover is seamless.
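The threshold behavior can be sketched as follows. The class and parameter names are invented for illustration and do not correspond to the actual MySQL healthcheck configuration keys; the point is the consecutive-probe counting that most load balancer healthchecks also use:

```python
class HealthcheckState:
    """Illustrative sketch: a node is marked unhealthy after
    `unhealthy_threshold` consecutive failed probes, and healthy again
    after `healthy_threshold` consecutive successful probes."""

    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True
        self._streak = 0  # consecutive probes disagreeing with current state

    def record_probe(self, success):
        if success == self.healthy:
            self._streak = 0  # probe agrees with current state; reset streak
            return self.healthy
        self._streak += 1
        needed = (self.healthy_threshold if not self.healthy
                  else self.unhealthy_threshold)
        if self._streak >= needed:
            self.healthy = not self.healthy
            self._streak = 0
        return self.healthy
```

If the internal healthcheck and the external load balancer use the same thresholds and probe interval, both sides change their view of a node at roughly the same moment, which is what makes failover seamless.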
For information on internal MySQL load balancer configuration, see the Deploying Elastic Runtime topic for the IaaS where you are deploying PCF. For example, if you are deploying PCF on Google Cloud Platform (GCP), see the Deploying Elastic Runtime on GCP topic.
Additionally, the MySQL Monitor job now includes a tool called mysql-diag that provides diagnostic information about your MySQL cluster. For more information on the mysql-diag tool, see Diagnosing Problems with Elastic Runtime MySQL or the Pivotal MySQL Tile documentation.
Diego now provides a configuration option for limiting the number of containers allowed to be in a “starting” state at any one time.
By default, the setting limits the number of containers in the “starting” state to 200.
This setting prevents Diego from scheduling more work than your platform can handle at once, avoiding a possible cascading failure.
This configuration is available as the Max Inflight Container Starts field on the Application Containers screen in the Elastic Runtime tile.
To configure this feature, see Setting a Maximum Number of Started Containers.
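The throttling behavior can be sketched with a bounded semaphore. This is an illustrative model only — the names are invented and the sketch is not Diego's actual implementation — but it shows how a cap on in-flight starts forces excess work to wait rather than overloading the platform:

```python
import threading

class StartThrottle:
    """Illustrative sketch: cap how many container starts may be
    in flight at once; additional starts block until a slot frees up."""

    def __init__(self, max_inflight=200):
        self._slots = threading.BoundedSemaphore(max_inflight)
        self._lock = threading.Lock()
        self._inflight = 0
        self.peak = 0  # highest number of simultaneous starts observed

    def start_container(self, do_start):
        with self._slots:  # blocks when max_inflight starts are in flight
            with self._lock:
                self._inflight += 1
                self.peak = max(self.peak, self._inflight)
            try:
                do_start()  # the (simulated) container start work
            finally:
                with self._lock:
                    self._inflight -= 1
```

Usage: even if hundreds of start requests arrive at once, `peak` never exceeds `max_inflight`; the remaining starts simply queue.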
For more information on preventing platform overload during upgrade, see also Upgrade Considerations for Selecting File Storage in Pivotal Cloud Foundry and Managing Diego Cell Limits During Upgrade topics.
The Loggregator system now uses the gRPC protocol for secure and reliable communication between Metron Agents and Dopplers, and between Dopplers and Traffic Controllers. This improves the stability and performance of the Loggregator system.
Since Loggregator now uses the gRPC protocol, your deployment may see an increase in Loggregator message throughput.
This section lists known issues for PCF Elastic Runtime.
When tailing logs using the cf logs or cf logs --recent command, the cf CLI reports a connection issue. Users may encounter errors similar to the following:
Warning: error tailing logs Error dialing loggregator server: websocket: bad handshake. Please ask your Cloud Foundry Operator to check the platform configuration (loggregator endpoint is wss://loggregator.example.com:443).
FAILED Error dialing loggregator server: unexpected EOF Please ask your Cloud Foundry Operator to check the platform configuration (loggregator endpoint is wss://loggregator.example.com:443).
Solution: Upgrade to cf CLI v6.23 or later. If you still encounter the connection issue after upgrading, make sure you log out and log in again using cf logout and cf login.
In PCF v1.10, the recommended metric for monitoring firehose message throughput is changing to DopplerServer.listeners.totalReceivedMessageCount:
- As of ERT v1.10.0, DopplerServer.listeners.totalReceivedMessageCount is not an accurate metric for all possible firehose traffic. This is being patched.
- As of ERT v1.10.1, DopplerServer.listeners.totalReceivedMessageCount can be expected to accurately represent a count of firehose message throughput.