Selecting and Configuring a Monitoring System
This topic describes considerations for selecting and configuring a system to continuously monitor Pivotal Platform component performance and health.
Many third-party systems can also be used to monitor a Pivotal Platform deployment.
Monitoring platforms support two types of monitoring:
- A dashboard for active monitoring when you are at a keyboard and screen
- Automated alerts for when your attention is elsewhere
Some monitoring solutions offer both in one package. Others require putting the two pieces together.
There are many monitoring options available, both open source and commercial products. Some commonly-used platforms among Pivotal Platform customers include:
- Pivotal Platform Healthwatch by Pivotal
- Pivotal Platform partner services available on Pivotal Network
- Other commercial services
- Open source tooling
The Pivotal Cloud Ops Team manages two types of deployments for internal Pivotal use: open-source Cloud Foundry, and Pivotal Platform.
For Cloud Foundry, Pivotal Cloud Ops uses several monitoring tools. The Datadog Config repository provides an example of how the Pivotal Cloud Ops team uses a customized Datadog dashboard to monitor the health of its open-source Cloud Foundry deployments.
Most monitoring service tiles for Pivotal Platform come packaged with a Firehose nozzle that extracts the BOSH and Pivotal Platform metrics used for platform monitoring. Nozzles are programs that consume data from the Loggregator Firehose. Nozzles can be configured to select, buffer, and transform data, and to forward it to other apps and services.
The nozzles gather the component logs and metrics streaming from the Loggregator Firehose endpoint. For more information about the Firehose, see Loggregator Architecture.
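The select-and-transform behavior described above can be sketched in simplified form. Real nozzles consume protobuf envelopes from the Firehose (commonly via the Go NOAA library); the dictionary-shaped envelopes and field names below are illustrative stand-ins, not the actual wire format.

```python
# Simplified, hypothetical sketch of nozzle-style selection and transformation.
# Envelope field names here are illustrative, not the real protobuf schema.

def select_envelopes(envelopes, event_type="ValueMetric"):
    """Keep only envelopes of the desired event type."""
    return [e for e in envelopes if e.get("eventType") == event_type]

def transform(envelope):
    """Flatten an envelope into a (name, value, tags) tuple for forwarding."""
    return (
        envelope["valueMetric"]["name"],
        envelope["valueMetric"]["value"],
        {"origin": envelope.get("origin", "unknown")},
    )

sample = [
    {"eventType": "ValueMetric", "origin": "bosh-system-metrics-forwarder",
     "valueMetric": {"name": "system.cpu.user", "value": 12.5}},
    {"eventType": "LogMessage", "origin": "rep"},
]

# Select only metric envelopes, then flatten them for a downstream service.
points = [transform(e) for e in select_envelopes(sample)]
```

A production nozzle would additionally buffer points and forward them over the target service's ingestion API rather than collecting them in a list.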
As of Pivotal Platform v2.0, both BOSH VM Health metrics and Pivotal Platform component metrics stream through the Firehose by default.
Pivotal Platform component metrics originate from the Metron agents on their source components, then travel through Dopplers to the Traffic Controller.
The Traffic Controller aggregates both metrics and log messages system-wide from all Dopplers, and emits them from its Firehose endpoint.
The following topics list high-signal-value metrics and capacity scaling indicators in a Pivotal Platform deployment:
Pivotal Platform includes smoke tests, which are functional unit and integration tests on all major system components. By default, whenever an operator upgrades to a new version of Pivotal Application Service (PAS), these smoke tests run as a post-deploy errand.
Pivotal recommends additional higher-resolution monitoring through continuous smoke tests, or Service Level Indicator (SLI) tests, that measure user-defined features and check them against expected levels.
- Pivotal Platform Healthwatch automatically executes these tests for PAS Service Level Indicators.
- The Pivotal Cloud Ops CF Smoke Tests repository offers additional testing examples.
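The shape of such a continuous SLI test can be sketched as follows. The `check_app_response` probe here is a hypothetical stand-in for a real measurement, such as an HTTP request to a canary app; a real test suite would run probes like this on a schedule and emit the results as metrics.

```python
import time

# Hedged sketch of a Service Level Indicator check: time a user-defined
# operation and compare it against an expected level.

def check_app_response():
    """Hypothetical probe; a real SLI test would hit a deployed canary app."""
    time.sleep(0.01)  # simulate a fast, healthy response
    return True

def run_sli_check(probe, max_latency_seconds=1.0):
    """Run a probe and report whether it met its expected level."""
    start = time.monotonic()
    ok = probe()
    latency = time.monotonic() - start
    return {"passed": ok and latency <= max_latency_seconds,
            "latency": latency}

result = run_sli_check(check_app_response)
```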
For information about how to set up Concourse to generate custom component metrics, see Metrics in the Concourse documentation.
To properly configure your monitoring dashboard and alerts, you must establish what thresholds should drive alerting and red/yellow/green dashboard behavior.
Some key metrics have more fixed thresholds, with similar threshold numbers recommended across different foundations and use cases. These metrics tend to revolve around the health and performance of key components that can impact the performance of the entire system.
Other metrics of operational value are more dynamic in nature. For these, you must establish a baseline and yellow/red thresholds suitable for your system and its use cases. You can establish initial baselines by observing key metric values over time and noting the levels that separate acceptable from unacceptable system performance and health.
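Once baselines exist, classifying a metric against them is straightforward. The sketch below shows one way to map a value to red/yellow/green status; the threshold numbers in the example are illustrative, not recommendations.

```python
# Hedged sketch of dashboard/alert classification against yellow and red
# thresholds derived from an observed baseline.

def classify(value, yellow, red, higher_is_worse=True):
    """Return 'green', 'yellow', or 'red' for a metric value."""
    if not higher_is_worse:
        # Negate so the same comparisons work for "lower is worse" metrics,
        # such as remaining capacity.
        value, yellow, red = -value, -yellow, -red
    if value >= red:
        return "red"
    if value >= yellow:
        return "yellow"
    return "green"

# Illustrative example: remaining capacity chunks, where fewer is worse.
status = classify(value=25, yellow=64, red=32, higher_is_worse=False)
```

The same function drives both dashboard coloring and alert firing, which keeps the two views of the system consistent as thresholds are refined.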
Effective platform monitoring requires continuous evolution.
After you establish initial baselines, Pivotal recommends that you continue to refine your metrics and tests to maintain the appropriate balance between early detection and reducing unnecessary alert fatigue. You should occasionally revisit the dynamic measures recommended in Key Performance Indicators and Key Capacity Scaling Indicators to ensure they are still appropriate to the current system configuration and its usage patterns.