Pivotal Platform v2.8 Feature Highlights
This topic highlights important new features included in Pivotal Platform v2.8.
Ops Manager v2.8 includes the following important major features. For additional information about these and other features included in Ops Manager v2.8, see Pivotal Operations Manager v2.8 Release Notes.
Tile authors can include both required and optional product dependencies for tiles.
An optional dependency is enforced only when you upload both the dependent tile and the optional dependency to your environment.
If tiles use optional dependencies instead of required dependencies, operators do not need to upload tiles in a particular order to avoid errors during deployment.
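As a rough sketch, an optional dependency might be declared in tile metadata alongside the existing requires_product_versions syntax. The product name below is hypothetical and the optional flag is an assumption; see the Tile Developer Guide for the exact schema:

```yaml
# Hypothetical tile metadata excerpt; field names are assumptions.
requires_product_versions:
- name: p-example-service   # hypothetical dependency tile
  version: ">= 1.0"
  optional: true            # assumed flag marking this dependency as optional
```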
Tile authors can define an HTTP install-time verifier that calls an HTTP endpoint on the broker. Ops Manager executes this verifier after you click Apply Changes. If the HTTP response is not successful, the deployment fails and the verifier displays a warning message.
Tile authors can use this verifier to check for service instances that might become orphaned after an upgrade.
For more information, see install_time_verifiers in Property and Template References in the Pivotal Platform Tile Developer Guide.
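A minimal sketch of what such a verifier might look like in tile metadata; the verifier name and endpoint below are illustrative, not the confirmed schema:

```yaml
# Hypothetical tile metadata excerpt; names are illustrative.
install_time_verifiers:
- name: Verifiers::HttpVerifier             # assumed verifier class name
  properties:
    url: https://broker.example.com/health  # hypothetical broker endpoint checked after Apply Changes
```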
Ops Manager installs a System Metrics Agent on all Pivotal Platform VMs by default.
System metrics report the usage and status of VM memory, disk, CPU, network, load, and swap space.
Other platform tools such as Pivotal Healthwatch and Pivotal App Metrics can consume VM metrics from PAS, PKS, hosted services, and any other products deployed by Ops Manager.
For a complete list of metrics, see VM Metrics in System Metrics Agent in the System Metrics repository on GitHub.
For Pivotal Platform deployments on vSphere that use NSX networking, the BOSH Director, NSX-T Container Plugin (NCP), and PKS can all authenticate to the NSX Manager with a certificate and private key, as well as with a username and password.
For information about how to configure BOSH Director authentication to NSX, see Step 2: Configure vCenter in Configuring BOSH Director on vSphere.
You can use the DELETE /api/v0/staged Ops Manager API endpoint to revert all staged changes in Ops Manager. For more information, see Revert staged changes in the Ops Manager API documentation.
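A sketch of calling this endpoint with curl, assuming you have an Ops Manager hostname and a UAA access token for it:

```shell
# Revert all staged changes (sketch; OPSMAN_HOST and UAA_ACCESS_TOKEN are
# placeholders for your environment).
curl "https://${OPSMAN_HOST}/api/v0/staged" \
  -X DELETE \
  -H "Authorization: Bearer ${UAA_ACCESS_TOKEN}"
```

On success, any staged but unapplied configuration is discarded.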
You can use the PUT /api/v0/staged/director/properties endpoint of the Ops Manager API to configure multiple hardware security modules (HSMs) for BOSH CredHub. For more information, see Updating Director and IaaS Properties (Experimental) in the Ops Manager API documentation.
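A sketch of the call shape with curl; the JSON body keys for HSM configuration are not shown here because the exact schema is defined in the API documentation:

```shell
# Update staged director properties (sketch; placeholders throughout).
curl "https://${OPSMAN_HOST}/api/v0/staged/director/properties" \
  -X PUT \
  -H "Authorization: Bearer ${UAA_ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @director-properties.json   # JSON file with the HSM settings per the API docs
```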
Ops Manager automatically imports the Pivotal Telemetry for Ops Manager tile. This tile collects product usage data, which helps Pivotal improve our products and services.
Using Pivotal Telemetry for Ops Manager is optional, and the tile does not share product usage data until you add and configure it.
For more information, see the Pivotal Telemetry for Ops Manager documentation.
PAS v2.8 includes the following important major features. For additional information about these and other features included in PAS v2.8, see Pivotal Application Service v2.8 Release Notes.
You can deploy a sidecar process for an app with a buildpack rather than with an app manifest.
For more information about deploying sidecar processes with buildpacks, see Sidecar Buildpacks.
The Cloud Foundry Command-Line Interface (cf CLI) adds support for sidecar processes by displaying the sidecar process alongside the app process to which it is attached.
For more information about deploying sidecar processes with apps, see Pushing Apps with Sidecar Processes (Beta).
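For comparison with the buildpack approach, a sidecar can also be declared directly in an app manifest. A minimal sketch with illustrative names:

```yaml
# Illustrative app manifest with a sidecar attached to the web process.
applications:
- name: my-app
  sidecars:
  - name: config-watcher      # illustrative sidecar name
    process_types: [web]      # app process types the sidecar runs alongside
    command: ./watcher        # illustrative sidecar start command
```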
You must use at least one CredHub VM when you deploy PAS v2.8. The default number of CredHub instances is increased to 2. You can configure the number of CredHub VMs PAS uses in the Resource Config pane of the PAS tile.
This update improves platform security by deploying PAS with CredHub by default. It also helps to avoid unexpected behaviors that occur when there are zero CredHub instances.
Garden uses the CPU weight property for a container to calculate an AbsoluteCPUEntitlement metric, which is the CPU entitlement for the container. Garden can then produce CPU usage metrics that are relative to AbsoluteCPUEntitlement. For example, a value of 100% for CPU usage indicates that the app is using all the CPU to which it is entitled.
CPU usage metrics that are relative to the CPU entitlement for the container help you make more informed scaling decisions for your apps.
For more information about the AbsoluteCPUEntitlement metric, see Diego Container Metrics in Container Metrics.
For information about the Cloud Foundry CPU Entitlement Plugin, an experimental plugin that allows you to examine the CPU usage of PAS apps relative to their CPU entitlement, see the cpu-entitlement-plugin repository on GitHub.
When you push container images hosted in AWS Elastic Container Registry (ECR) with the cf CLI, you can provide the access key ID and secret for an AWS IAM user as a Docker username and password.
This update allows the cf CLI to successfully pull container images hosted in ECR with valid AWS Identity and Access Management (IAM) user credentials.
For more information, see Amazon Elastic Container Registry (ECR) in Deploying an App with Docker.
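As a sketch, assuming an IAM user with permission to pull from the registry (the registry path and app name are illustrative):

```shell
# Push an ECR-hosted image; the IAM secret access key is passed as the
# Docker password via the CF_DOCKER_PASSWORD environment variable.
export CF_DOCKER_PASSWORD="$AWS_SECRET_ACCESS_KEY"
cf push my-app \
  --docker-image 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest \
  --docker-username "$AWS_ACCESS_KEY_ID"
```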
You can configure an aggregate log and metric drain for your foundation to allow Syslog Agents to forward all app metrics, app logs, and VM metrics to one or more syslog endpoints.
This allows you to forward logs and metrics for all apps in your foundation without configuring syslog drains for each app individually.
For more information about enabling an aggregate log and metric drain for your foundation, see Configure System Logging in Configuring PAS.
For Spring Cloud Services (SCS) instances, Apps Manager shows the current status of the SCS Config Server on the service instance detail page. You can also use Apps Manager to trigger the Config Server to update app configurations.
This feature provides closer integrations with Spring Cloud, which means that you can work and troubleshoot more quickly.
For more information, see View and Update Spring Cloud Services Configurations in Managing Apps and Service Instances Using Apps Manager.
You can view quota information for orgs in the org header in Apps Manager. This allows you to find resource usage information for orgs more quickly.
You can use the Pivotal Isolation Segment tile to deploy a separate group of Diego Cells without isolating the Diego Cell capacity from other apps. You may want to do this if you have a PAS tile with a large number of Diego Cells.
Putting more of your workloads on the Diego Cells of one or more Pivotal Isolation Segment tiles has the following benefits:
- Separates the upgrade of the PAS control plane from the upgrade of the Diego Cells
- Separates the upgrade of the Diego Cells into smaller groups
For more information, see Use Pivotal Isolation Segment to Improve Upgrades for Large Foundations in Pivotal Isolation Segment v2.8 Release Notes.
PASW v2.8 includes the following important major features. For additional information about these and other features included in PASW v2.8, see Pivotal Application Service for Windows v2.8 Release Notes.
You can use the Web Config Transform Extension Buildpack to externalize .NET Framework configurations in the web.config file to external sources such as GitHub, CredHub, or environment variables. The buildpack uses token replacement to ensure that app configurations are not included in the web.config build artifact.
For more information about using the buildpack, see the Web Config Transform Buildpack repository on GitHub.
PKS v1.6 includes the following important major features. For additional information about these and other features included in PKS v1.6, see Release Notes in the PKS documentation.
PKS v1.6 includes an experimental integration with Tanzu Mission Control.
For more information, see Tanzu Mission Control Integration.
Operators can limit the total number of clusters a user can provision in PKS.
PKS v1.6 includes the VMware PKS Management Console v1.1 installer. VMware PKS Management Console provides a unified installation experience for deploying PKS to vSphere. For more information, see Using the PKS Management Console.
PKS v1.6 adds a new UAA scope, pks.clusters.admin.read, for PKS users. Accounts with this scope can access all information about all clusters except for cluster credentials.
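UAA scopes are granted through group membership, so an operator might assign the new scope with the UAA CLI. A sketch with illustrative hostnames and usernames:

```shell
# Grant the read-only admin scope to a user (sketch; names illustrative).
uaac target "https://${PKS_API_HOST}:8443" --ca-cert /path/to/cert
uaac token client get admin -s "${UAA_ADMIN_SECRET}"
uaac member add pks.clusters.admin.read alice
```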
Operators can configure a single Kubernetes cluster with a specific Docker Registry CA certificate.
For more information about configuring a cluster with a Docker Registry CA certificate, see Configuring PKS Clusters with Private Docker Registry CA Certificates (Beta).
Operators can save time on cluster upgrades by upgrading multiple Kubernetes clusters simultaneously. Operators can also designate specific clusters as canary clusters. Cluster upgrades can run serially, serially with some clusters designated as canaries, or in parallel.
For more information about multiple cluster upgrades, see Upgrade Multiple Kubernetes Clusters in Upgrading Clusters.
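A sketch of what a parallel upgrade with a canary might look like with the PKS CLI; cluster names are illustrative, and the flag names should be checked against the Upgrading Clusters documentation:

```shell
# Upgrade three clusters, trying one canary first, two in flight at a time.
pks upgrade-clusters \
  --clusters prod-1,prod-2,prod-3 \
  --canaries canary-1 \
  --max-in-flight 2
```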
In the PKS API configuration pane, the default value of Worker VM Max in Flight is increased to 4. This accelerates cluster creation by allowing up to four new nodes to be provisioned simultaneously.
The updated default value is applied only during a new PKS installation; it is not applied during a PKS upgrade.
If you are upgrading PKS from a previous version and want to accelerate multi-cluster provisioning, you can increase the value of Worker VM Max in Flight manually.
You can use active-active mode on the tier 0 router in both automated-NAT deployments and in Bring Your Own Topology deployments with No-NAT configurations.
For more information, see Configure Networking in Deploy PKS by Using the Configuration Wizard.
You can place the load balancer and Tier-1 Active/Standby routers on different failure domains.
For more information, see Multisite Deployment of NSX-T Data Center in the VMware documentation.
The NSX Error CRD allows cluster managers and users to view NSX errors in Kubernetes resource annotations. The command kubectl get nsxerror provides the health status of NSX-T cluster networking objects for NCP v2.5.0 and later.
This improves visibility and troubleshooting for cluster managers and users.
For more information, see Viewing the Health Status of Cluster Networking Objects (NSX-T only).
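For example, with a kubeconfig pointing at an affected cluster (requires NCP v2.5.0 or later; the object name in the second command is illustrative):

```shell
# List NSXError objects recorded for cluster networking resources.
kubectl get nsxerror
# Inspect a specific object's annotations for error details.
kubectl describe nsxerror my-nsxerror
```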
The NSXLoadBalancerMonitor CRD allows you to monitor load balancer and ingress resource capacity. For NCP v2.5.1 and later, you can run the command kubectl get nsxLoadBalancerMonitors to view a health score that reflects the current performance of the NSX-T load balancer service, including usage, traffic, and current status.
For more information, see Ingress Scaling (NSX-T only).
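For example, against a cluster running NCP v2.5.1 or later:

```shell
# View health scores for the NSX-T load balancer service.
kubectl get nsxLoadBalancerMonitors
```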