Pivotal Platform v2.8 Feature Highlights

This topic highlights important new features included in Pivotal Platform v2.8.

Pivotal Operations Manager Highlights

Ops Manager v2.8 includes the following important major features. For additional information about these and other features included in Ops Manager v2.8, see Pivotal Operations Manager v2.8 Release Notes.

Add Optional Dependencies

Tile authors can include both required and optional product dependencies for tiles.

Ops Manager enforces an optional dependency only if you upload both the dependent tile and its optional dependency to your environment.

If tiles use optional dependencies instead of required dependencies, operators do not need to upload tiles in a particular order to avoid errors during deployment.
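In tile metadata, this might look like the following sketch. The product names are illustrative, and the `optional` flag is an assumption about the v2.8 schema; confirm the exact field names in the Tile Developer Guide.

```yaml
# Hypothetical tile metadata fragment: one required dependency and one
# dependency that is enforced only if that product is also uploaded.
requires_product_versions:
- name: p-mysql             # illustrative required dependency
  version: ">= 2.7"
- name: p-rabbitmq          # illustrative optional dependency
  version: ">= 1.17"
  optional: true            # assumed flag name for v2.8 optional dependencies
```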

HTTP Install-Time Verifier

Tile authors can define an HTTP install-time verifier that calls an HTTP endpoint on the broker. Ops Manager executes this verifier after you click Apply Changes. If the HTTP response is not successful, the deployment fails and the verifier displays a warning message.

Tile authors can use this verifier to check for service instances that might become orphaned after an upgrade.
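In tile metadata, such a verifier might be declared like the sketch below. The verifier class name and property key are assumptions, so confirm them against the install_time_verifiers reference before use.

```yaml
# Hypothetical fragment: run an HTTP check against the broker when the
# operator clicks Apply Changes; a non-2xx response fails the deployment.
install_time_verifiers:
- name: Verifiers::HttpVerifier                     # assumed class name
  properties:
    url: https://broker.example.com/v1/orphan-check # assumed property key
```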

For more information, see install_time_verifiers in Property and Template References in the Pivotal Platform Tile Developer Guide.

System Metrics Agent Installed By Default

Ops Manager installs a System Metrics Agent on all Pivotal Platform VMs by default.

System metrics report the usage and status of VM memory, disk, CPU, network, load, and swap space.

Other platform tools such as Pivotal Healthwatch and Pivotal App Metrics can consume VM metrics from PAS, PKS, hosted services, and any other products deployed by Ops Manager.

For a complete list of metrics, see VM Metrics in System Metrics Agent in the System Metrics repository on GitHub.

Ops Manager Supports Certificate Authentication to vSphere NSX Manager

For Pivotal Platform deployments on vSphere that use NSX networking, the BOSH Director, NSX-T Container Plugin (NCP), and PKS can all authenticate to the NSX Manager with a certificate and private key, as well as with a username and password.

For information about how to configure BOSH Director authentication to NSX, see Step 2: Configure vCenter in Configuring BOSH Director on vSphere.

Revert Staged Changes With the API

You can use the DELETE /api/v0/staged Ops Manager API endpoint to revert all staged changes in Ops Manager. For more information, see Revert staged changes in the Ops Manager API documentation.
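For example, with a UAA access token for Ops Manager (host and token are placeholders):

```shell
# Revert all staged changes in Ops Manager.
curl "https://OPS-MANAGER-FQDN/api/v0/staged" \
  -X DELETE \
  -H "Authorization: Bearer UAA-ACCESS-TOKEN"
```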

Configure Multiple HSMs With the API

You can use the PUT /api/v0/staged/director/properties endpoint of the Ops Manager API to configure multiple hardware security modules (HSMs) for BOSH CredHub. For more information, see Updating director and Iaas properties (Experimental) in the Ops Manager API documentation.
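For example, the call might look like the following. The JSON body that describes each HSM is kept in a separate file here because its schema is documented in the API reference; the file name is illustrative.

```shell
# Stage director property changes, including multiple HSM definitions for
# BOSH CredHub. hsm-config.json holds the properties listed in the API docs.
curl "https://OPS-MANAGER-FQDN/api/v0/staged/director/properties" \
  -X PUT \
  -H "Authorization: Bearer UAA-ACCESS-TOKEN" \
  -H "Content-Type: application/json" \
  -d @hsm-config.json
```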

Pivotal Telemetry for Ops Manager Is Imported by Default

Ops Manager automatically imports the Pivotal Telemetry for Ops Manager tile. This tile collects product usage data, which helps Pivotal improve its products and services.

Using Pivotal Telemetry for Ops Manager is optional, and the tile does not share product usage data until you add and configure it.

For more information, see the Pivotal Telemetry for Ops Manager documentation.

Pivotal Application Service (PAS) Highlights

PAS v2.8 includes the following important major features. For additional information about these and other features included in PAS v2.8, see Pivotal Application Service v2.8 Release Notes.

Deploy Sidecar Processes with a Buildpack

You can deploy a sidecar process for an app with a buildpack rather than with an app manifest.
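For context, the app-manifest route that this feature complements declares a sidecar like the fragment below; the app, sidecar, and command names are illustrative.

```yaml
# manifest.yml fragment: a sidecar process attached to the web process.
applications:
- name: my-app                # illustrative app name
  sidecars:
  - name: config-watcher      # illustrative sidecar name
    process_types: ["web"]    # app processes the sidecar runs alongside
    command: ./watch-config   # illustrative sidecar start command
```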

For more information about deploying sidecar processes with buildpacks, see Sidecar Buildpacks.

cf CLI Supports Sidecar Processes

The Cloud Foundry Command-Line Interface (cf CLI) adds support for sidecar processes by displaying the sidecar process alongside the app process to which it is attached.

For more information about deploying sidecar processes with apps, see Pushing Apps with Sidecar Processes (Beta).

PAS Deployed With CredHub by Default

CredHub is now a required component in PAS, and the default number of CredHub instances is increased from 0 to 2.

You must set the number of CredHub instances to at least 1 when you deploy PAS v2.8. To set the number of CredHub instances for PAS, use the Resource Config pane of the PAS tile.

For more information about runtime CredHub, see Runtime CredHub in CredHub.

CPU Usage Metric Is Relative to CPU Entitlement for the Container

Garden uses the CPU weight property for a container to calculate an AbsoluteCPUEntitlement metric, which is the CPU entitlement for the container.

Garden can then produce CPU usage metrics that are relative to AbsoluteCPUEntitlement. For example, a value of 100% for CPU usage indicates that the app is using all the CPU to which it is entitled.

CPU usage metrics that are relative to the CPU entitlement for the container help you make more informed scaling decisions for your apps.

For more information about the AbsoluteCPUEntitlement metric, see Diego Container Metrics in Container Metrics.

For information about the Cloud Foundry CPU Entitlement Plugin, an experimental plugin that allows you to examine the CPU usage of PAS apps relative to their CPU entitlement, see the cpu-entitlement-plugin repository on GitHub.

Support for Pushing Container Images Hosted in AWS ECR

When you push container images hosted in AWS Elastic Container Registry (ECR) with the cf CLI, you can provide the access key ID and secret access key for an AWS Identity and Access Management (IAM) user as a Docker username and password.

This update allows the cf CLI to successfully pull container images hosted in ECR with valid IAM user credentials.
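For example (account ID, region, image, and credentials are placeholders):

```shell
# Pass the IAM access key ID as the Docker username and the secret access
# key through CF_DOCKER_PASSWORD, which the cf CLI reads at push time.
CF_DOCKER_PASSWORD="AWS-SECRET-ACCESS-KEY" cf push MY-APP \
  --docker-image AWS-ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/my-image:latest \
  --docker-username AWS-ACCESS-KEY-ID
```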

For more information, see Amazon Elastic Container Registry (ECR) in Deploying an App with Docker.

Forward Logs and Metrics for All Apps with Aggregate Syslog Drain

You can configure an aggregate log and metric drain for your foundation to allow Syslog Agents to forward all app metrics, app logs, and VM metrics to one or more syslog endpoints.

This allows you to forward logs and metrics for all apps in your foundation without configuring syslog drains for each app individually.

For more information about enabling an aggregate log and metric drain for your foundation, see Configure System Logging in Configuring PAS.

Apps Manager Spring Cloud Services Config Server Integrations

For Spring Cloud Services (SCS) instances, Apps Manager shows the current status of the SCS Config Server on the service instance detail page. You can also use Apps Manager to trigger the Config Server to update app configurations.

This feature provides closer integration with Spring Cloud, so you can work and troubleshoot more quickly.

For more information, see View and Update Spring Cloud Services Configurations in Managing Apps and Service Instances Using Apps Manager.

View Quota Information on Apps Manager

You can view quota information for an org in the org header in Apps Manager. This allows you to find resource usage information for orgs more quickly.

Use Pivotal Isolation Segment to Make Upgrades More Manageable

You can use the Pivotal Isolation Segment tile to deploy a separate group of Diego Cells without isolating the Diego Cell capacity from other apps. You may want to do this if you have a PAS tile with a large volume of Diego Cells.

Putting more of your workloads on the Diego Cells of one or more Pivotal Isolation Segment tiles has the following benefits:

  • Separates the upgrade of the PAS control plane from the upgrade of the Diego Cells
  • Separates the upgrade of the Diego Cells into smaller groups

For more information, see Use Pivotal Isolation Segment to Improve Upgrades for Large Foundations in Pivotal Isolation Segment v2.8 Release Notes.

Pivotal Application Service for Windows (PASW) Highlights

PASW v2.8 includes the following important major features. For additional information about these and other features included in PASW v2.8, see Pivotal Application Service for Windows v2.8 Release Notes.

Web Config Transform Extension Buildpack

You can use the Web Config Transform Extension Buildpack to externalize .NET Framework configurations in the web.config file to external sources such as GitHub, CredHub, or environment variables. The buildpack uses token replacement to ensure that app configurations are not included in the web.config build artifact.
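With the multi-buildpack flow, the extension buildpack runs before the final .NET buildpack. The buildpack and app names below are illustrative; check the buildpack repository for the exact name to use.

```shell
# Apply the Web Config Transform extension buildpack first, then the
# final HWC buildpack that actually runs the .NET Framework app.
cf push MY-DOTNET-APP \
  -b web_config_transform_buildpack \
  -b hwc_buildpack
```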

For more information about using the buildpack, see the Web Config Transform Buildpack repository on GitHub.

Enterprise Pivotal Container Service (PKS) v1.6 Highlights

PKS v1.6 includes the following important major features. For additional information about these and other features included in PKS v1.6, see Release Notes in the PKS documentation.

Experimental Integration with Tanzu Mission Control

PKS v1.6 includes an experimental integration with Tanzu Mission Control.

For more information, see Tanzu Mission Control Integration.

Operators Can Limit Cluster Provisioning

Operators can limit the total number of clusters a user can provision in PKS.

For more information about quotas, see Managing Resource Usage with Quotas and Viewing Usage Quotas.

PKS Management Console

PKS v1.6 includes the VMware PKS Management Console v1.1 installer. VMware PKS Management Console provides a unified installation experience for deploying PKS to vSphere. For more information, see Using the PKS Management Console.

Read-Only Admin Role

PKS v1.6 adds a new UAA scope, pks.clusters.admin.read, for PKS users. Accounts with this scope can access any information about all clusters except for cluster credentials.
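For example, an operator might grant the scope through UAA group membership; the user name is illustrative, and this assumes the scope is managed as a UAA group.

```shell
# Create the group for the read-only admin scope and add a user to it.
uaac group add pks.clusters.admin.read
uaac member add pks.clusters.admin.read alana
```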

For information about UAA scopes, see UAA Scopes for PKS Users and Managing PKS Users with UAA.

Configure a Cluster With a Docker Registry CA Certificate (Beta)

Operators can configure a single Kubernetes cluster with a specific Docker Registry CA certificate.

For more information about configuring a cluster with a Docker Registry CA certificate, see Configuring PKS Clusters with Private Docker Registry CA Certificates (Beta).

Upgrade Multiple Clusters Simultaneously

Operators can save time on cluster upgrades by upgrading multiple Kubernetes clusters simultaneously. Operators can also designate specific clusters as canary clusters. Cluster upgrades can run serially, serially with some clusters designated as canaries, or in parallel.
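With the PKS CLI, such an upgrade might look like the sketch below. The cluster names are placeholders, and the flag names are assumptions to confirm against the PKS CLI reference.

```shell
# Upgrade three clusters, treating cluster-1 as the canary and allowing
# two clusters to upgrade in parallel (assumed flag names).
pks upgrade-clusters --clusters cluster-1,cluster-2,cluster-3 \
  --canaries cluster-1 \
  --max-in-flight 2
```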

For more information about multiple cluster upgrades, see Upgrade Multiple Kubernetes Clusters in Upgrading Clusters.

Accelerated Cluster Creation Time

In the PKS API configuration pane, the Worker VM Max in Flight default value is increased from 1 to 4. This accelerates cluster creation by allowing up to four new nodes to be provisioned simultaneously.

The updated default value is only applied during a new PKS installation and is not applied during a PKS upgrade.

If you are upgrading PKS from a previous version and want to accelerate multi-cluster provisioning, you can increase the value of Worker VM Max in Flight manually.

Support for Active-Active T0

You can use active-active mode on the tier 0 router in both automated-NAT deployments and Bring Your Own Topology deployments with No-NAT configurations.

For more information, see Configure Networking in Deploy PKS by Using the Configuration Wizard.

Use Different Failure Domains for Load Balancer and Tier-1 Active/Standby Routers

You can place the load balancer and Tier-1 Active/Standby routers on different failure domains.

For more information, see Multisite Deployment of NSX-T Data Center in the VMware documentation.

View Health Status of NSX-T Cluster Networking Object

The NSX Error CRD allows cluster managers and users to view NSX errors in Kubernetes resource annotations. The command kubectl get nsxerror provides the health status of NSX-T cluster networking objects for NCP v2.5.0 and later.

This improves visibility and troubleshooting for cluster managers and users.
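For example:

```shell
# List NSX-T networking errors surfaced as Kubernetes resources
# (available with NCP v2.5.0 and later).
kubectl get nsxerror
```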

For more information, see Viewing the Health Status of Cluster Networking Objects (NSX-T only).

Scale Ingress Capacity With the Load Balancer CRD

The NSXLoadBalancerMonitor CRD allows you to monitor the load balancer and ingress resource capacity.

For NCP v2.5.1 and later, you can run the command kubectl get nsxLoadBalancerMonitors to view a health score that reflects the current performance of the NSX-T load balancer service, including usage, traffic, and current status.

For more information, see Ingress Scaling (NSX-T only).