Pivotal Healthwatch Architecture
Warning: Pivotal Healthwatch v1.7 is no longer supported or available for download. Pivotal Healthwatch v1.7 has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic describes the architecture of Pivotal Healthwatch.
Pivotal Healthwatch Components
The diagram below shows the architecture of Pivotal Healthwatch, including the Ops Manager components that Pivotal Healthwatch interacts with.
Pivotal Healthwatch deploys several apps as part of its installation process. These apps are responsible for creating the service UI and supporting functional health checks.
How Data Flows Through Pivotal Healthwatch
Data flows through Pivotal Healthwatch as follows (see the sketch after these steps):
In Ops Manager, all platform metrics are forwarded to the Loggregator Firehose by default.
The Pivotal Healthwatch Ingestor app consumes the platform metrics from the Firehose.
The Ingestor forwards the platform metrics to Redis, which acts as a buffer.
The Worker app consumes raw data from Redis, aggregates it, and writes transformed data to the MySQL datastore.
The transformed data remains available in the MySQL datastore until it is purged.
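The following sketch illustrates the shape of this pipeline. It is illustrative only: the `Metric` type, the stage comments, and the use of Go channels and a map are stand-ins for the Firehose stream, the Redis buffer, and the MySQL datastore, not Healthwatch's actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

// Metric is a hypothetical envelope for a single platform metric.
type Metric struct {
	Name  string
	Value float64
	At    time.Time
}

func main() {
	firehose := make(chan Metric)     // stands in for the Loggregator Firehose stream
	buffer := make(chan Metric, 1024) // stands in for Redis, which buffers between stages
	store := map[string][]float64{}   // stands in for the MySQL datastore

	// Ingestor: consume platform metrics from the Firehose and forward them to the buffer.
	go func() {
		for m := range firehose {
			buffer <- m
		}
		close(buffer)
	}()

	// Emit a few sample metrics, then close the stream.
	go func() {
		for i := 0; i < 3; i++ {
			firehose <- Metric{Name: "router.total_requests", Value: float64(i), At: time.Now()}
		}
		close(firehose)
	}()

	// Worker: drain the buffer, aggregate (here, just group by name), and write to the datastore.
	for m := range buffer {
		store[m.Name] = append(store[m.Name], m.Value)
	}
	fmt.Println("stored:", store)
}
```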
How Product-Created Metrics Flow Through Pivotal Healthwatch
Pivotal Healthwatch also creates additional metrics of operational value and stores them in the `super_value_metric` table in the datastore. For more information, see Pivotal Healthwatch Metrics. These product-created platform metrics take two paths through the system: Contextual Assessments and Functional Apps.
Contextual Assessments
Contextual Assessments are derived from platform-emitted data, for example, Syslog Drain Binding Capacity. Pivotal Healthwatch handles this data as follows (see the sketch below):
- The Aggregator app makes additional transformations to the data.
- The Aggregator app forwards the data to the Metron Forwarder.
- The Metron Forwarder writes the data to the MySQL datastore and also forwards it back into the Firehose for external consumers to use.
Note: Pivotal Healthwatch forwards only these additional metrics back into the Firehose. The service does not forward platform metrics that are already available to Firehose consumers.
- The transformed data remains available in the MySQL datastore until it is purged.
All ingested and service-created data points are stored in the datastore for 25 hours and then pruned.
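The sketch below shows this fan-out under assumed interfaces: the `DerivedMetric` type, the `DatastoreWriter` and `FirehoseEmitter` interfaces, the `ProductCreated` flag, and the metric names are all illustrative, not Healthwatch's actual API. It only demonstrates the rule stated in the note above: every metric is written to the datastore, but only Healthwatch-created metrics are forwarded back into the Firehose.

```go
package main

import "fmt"

// DerivedMetric is a hypothetical representation of a data point handled by the Metron Forwarder.
type DerivedMetric struct {
	Name           string
	Value          float64
	ProductCreated bool // true for metrics Healthwatch created itself, such as contextual assessments
}

type DatastoreWriter interface{ Write(DerivedMetric) }
type FirehoseEmitter interface{ Emit(DerivedMetric) }

// forward writes every metric to the datastore and re-emits only the
// product-created ones back into the Firehose.
func forward(db DatastoreWriter, fh FirehoseEmitter, metrics []DerivedMetric) {
	for _, m := range metrics {
		db.Write(m)
		if m.ProductCreated {
			fh.Emit(m)
		}
	}
}

// Trivial printers so the sketch runs standalone.
type printDB struct{}

func (printDB) Write(m DerivedMetric) { fmt.Println("datastore <-", m.Name) }

type printFH struct{}

func (printFH) Emit(m DerivedMetric) { fmt.Println("firehose  <-", m.Name) }

func main() {
	forward(printDB{}, printFH{}, []DerivedMetric{
		{Name: "syslog_drain_binding_capacity", Value: 0.4, ProductCreated: true}, // illustrative name
		{Name: "router.total_requests", Value: 1200, ProductCreated: false},       // already on the Firehose
	})
}
```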
Functional Apps
Functional Apps execute Health and Uptime tests, for example, CLI Command Health. Pivotal Healthwatch handles this data as follows:
- Functional Apps forward their data to the Metron Forwarder.
- The Metron Forwarder writes that data to the MySQL datastore and also forwards it back into the Firehose for external consumers to use. This data is then available until it is purged.
Note: Pivotal Healthwatch forwards only these additional metrics back into the Firehose. The service does not forward platform metrics that are already available to Firehose consumers.
All ingested and service-created data points are stored in the datastore for 25 hours and then pruned.
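A minimal sketch of this 25-hour retention rule is shown below, assuming a MySQL table with a timestamp column. The table name (`metric_point`), column name (`emitted_at`), and connection string are placeholders, not Healthwatch's actual schema.

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// pruneOldPoints deletes every data point older than the 25-hour retention window.
func pruneOldPoints(db *sql.DB) error {
	cutoff := time.Now().Add(-25 * time.Hour)
	_, err := db.Exec("DELETE FROM metric_point WHERE emitted_at < ?", cutoff)
	return err
}

func main() {
	// Placeholder DSN; the real datastore credentials come from the Healthwatch deployment.
	db, err := sql.Open("mysql", "user:password@tcp(healthwatch-mysql:3306)/healthwatch?parseTime=true")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := pruneOldPoints(db); err != nil {
		log.Fatal(err)
	}
}
```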
How Pivotal Healthwatch Adjusts Data Flow for Higher Availability
When a singleton component becomes temporarily unavailable, such as when a VM restarts after a new stemcell is applied, Pivotal Healthwatch adjusts its data flow to provide higher availability. This adjusted data flow is described below.
If Redis is temporarily unavailable, Firehose-based data buffers in the Ingestor until Redis becomes available.
If MySQL is temporarily unavailable, Firehose-based data queues in Redis, and data generated by Pivotal Healthwatch queues in the Metron Forwarder until MySQL becomes available.
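The following sketch shows the general idea of this buffering behavior: when the downstream write fails, points accumulate in an in-memory queue and are flushed in order once writes succeed again. The `ingestorBuffer` type and `push` function are illustrative stand-ins, not Healthwatch internals.

```go
package main

import (
	"errors"
	"fmt"
)

// point is a hypothetical buffered data point.
type point struct {
	name  string
	value float64
}

// ingestorBuffer queues points in memory when the downstream store rejects writes.
type ingestorBuffer struct {
	pending []point
	push    func(point) error // stands in for a Redis (or Metron Forwarder) write
}

func (b *ingestorBuffer) offer(p point) {
	b.pending = append(b.pending, p)
	b.flush()
}

// flush drains the queue in order and stops at the first failure, so nothing is
// lost while the downstream store is unavailable.
func (b *ingestorBuffer) flush() {
	for len(b.pending) > 0 {
		if err := b.push(b.pending[0]); err != nil {
			return // still unavailable; keep buffering
		}
		b.pending = b.pending[1:]
	}
}

func main() {
	available := false
	buf := &ingestorBuffer{push: func(p point) error {
		if !available {
			return errors.New("store unavailable")
		}
		fmt.Println("wrote", p.name, p.value)
		return nil
	}}

	buf.offer(point{"router.total_requests", 10}) // queued: the store is down
	buf.offer(point{"router.total_requests", 12}) // still queued

	available = true                              // the store comes back
	buf.offer(point{"router.total_requests", 15}) // flushes all three points in order
}
```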
How Platform-Emitted Data is Aggregated by Pivotal Healthwatch
All Firehose-emitted platform metrics that Pivotal Healthwatch ingests are aggregated according to pre-defined rules before being written to the datastore. This helps avoid the cost of storing raw data, and in the case of gauge values, can add additional points of interest to the data.
- Counter metrics: The maximum counter value received during the one-minute aggregation window, from which a minute-to-minute rate is later derived. Unique to the metric name and to the individual metric emitter, per instance, as applicable.
- Gauge metrics: The values received during the one-minute aggregation window, aggregated and stored with the following five calculated values per metric: avg, min, max, med, and 95p. Unique to the metric name and to the individual emitter, per instance, as applicable.
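The sketch below applies these rules to a single one-minute window. The nearest-rank percentile and median convention is an assumption for illustration; this topic does not specify the exact method Healthwatch uses.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// gaugeSummary holds the five calculated values stored per gauge metric per window.
type gaugeSummary struct {
	Avg, Min, Max, Med, P95 float64
}

// summarizeGauges aggregates the gauge values received during one one-minute window.
// It assumes at least one value was received.
func summarizeGauges(values []float64) gaugeSummary {
	sorted := append([]float64(nil), values...)
	sort.Float64s(sorted)

	sum := 0.0
	for _, v := range sorted {
		sum += v
	}
	rank := func(p float64) float64 {
		// nearest-rank percentile on the sorted window
		i := int(math.Ceil(p*float64(len(sorted)))) - 1
		if i < 0 {
			i = 0
		}
		return sorted[i]
	}
	return gaugeSummary{
		Avg: sum / float64(len(sorted)),
		Min: sorted[0],
		Max: sorted[len(sorted)-1],
		Med: rank(0.50),
		P95: rank(0.95),
	}
}

// summarizeCounter keeps the maximum counter value seen in the window; a
// minute-to-minute rate can later be derived by differencing adjacent windows.
func summarizeCounter(values []uint64) uint64 {
	var max uint64
	for _, v := range values {
		if v > max {
			max = v
		}
	}
	return max
}

func main() {
	fmt.Printf("%+v\n", summarizeGauges([]float64{3, 9, 1, 7, 5}))
	fmt.Println(summarizeCounter([]uint64{100, 112, 130}))
}
```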
Functional Apps Created by Pivotal Healthwatch
Pivotal Healthwatch creates the following Functional Apps (a conceptual sketch of one such check appears below):
- BOSH Director health check: the `bosh-health-check` app
- BOSH deployment task check: the `bosh-task-check` app
- CLI command health check: the `cf-health-check` app
- Canary app uptime and response check: the `canaryapp-health-check` app
- Ops Manager uptime check: the `opsmanager-health-check` app
- Pivotal Healthwatch self-monitor and report: the `healthwatch-meta-monitor` app
- Pivotal Healthwatch event monitoring and alert publishing: the `healthwatch-alerts` app
For information about scaling these Pivotal Healthwatch resources, see Pivotal Healthwatch Resources.
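As an illustration of the general shape of these checks, the sketch below times a single HTTPS request and reduces it to an up/down value plus a response time, which is roughly what an uptime check such as `opsmanager-health-check` measures. The target URL and the printed output are placeholders; this is not the app's actual implementation or metric naming.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkOnce performs one timed request and returns 1 (up) or 0 (down) plus the elapsed time.
func checkOnce(url string) (up int, elapsed time.Duration) {
	client := &http.Client{Timeout: 10 * time.Second}
	start := time.Now()
	resp, err := client.Get(url)
	elapsed = time.Since(start)
	if err != nil {
		return 0, elapsed
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return 1, elapsed
	}
	return 0, elapsed
}

func main() {
	// Placeholder target; in Healthwatch the Ops Manager URL comes from the tile configuration.
	up, elapsed := checkOnce("https://opsman.example.com")
	fmt.Printf("up=%d response_time_ms=%d\n", up, elapsed.Milliseconds())
}
```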
Users Created by Pivotal Healthwatch
Pivotal Healthwatch creates a `healthwatch_space_developer` user and assigns this user the `SpaceDeveloper` role.
The `cf-health-check` app uses the `healthwatch_space_developer` user for CLI command health checks.
Required Networking Rules for Pivotal Healthwatch
Before deploying Pivotal Healthwatch, the operator must verify that the network configuration allows Pivotal Healthwatch components to communicate with each other and with certain Ops Manager components. The following table lists the Ops Manager components that Pivotal Healthwatch needs to connect to, and why.
| Key Ops Manager Components | Why Pivotal Healthwatch Needs Access |
|---|---|
| BOSH Director | Information about BOSH Director health and executing IaaS health checks |
| BOSH UAA | Authorization to access the Director |
| UAA | Authorization for component metrics, CF Health Checks, and the Pivotal Healthwatch UI |
| Cloud Controller | CF Health Checks |
| Doppler | Metric ingestion; forwarding metrics to the Firehose |
The following table lists the communication paths and ports between Pivotal Healthwatch components and other Pivotal Healthwatch and Ops Manager components.
| This Healthwatch component… | Must communicate with… | Default TCP Port | Communication direction(s) | Notes |
|---|---|---|---|---|
| `bosh-health-check` | | | One way | |
| `bosh-task-check` | | | One way | |
| `canary-health-check` | | | One way | |
| `cf-health-check` | | | One way | CF CLI interactions. On AWS, the Doppler connection is typically port 4443. |
| `healthwatch` | | | One way | |
| `healthwatch-aggregator` | | | One way | |
| `healthwatch-alerts` | | | One way | |
| `healthwatch-api` | | | One way | |
| `healthwatch-ingestor` | | | One way | |
| `healthwatch-worker` | | | One way | |
| `healthwatch-meta-monitor` | | | One way | |
| `opsmanager-health-check` | | | One way | |
| `ui-health-check` | | | One way | |
| Healthwatch Forwarder VM | | | One way | |
| Healthwatch MySQL VM | No outbound connections | | | |
| Healthwatch Redis VM | No outbound connections | | | |
Note: If you configure syslog forwarding for Pivotal Healthwatch, you must also verify the network path from each Pivotal Healthwatch VM to the syslog destination.
Note: Pivotal Healthwatch depends on the `bosh-system-metrics-forwarder` component in PAS. For this to work, the `trafficcontroller` VM must be able to communicate with the BOSH Director on port 25595.
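If you need to confirm that path, a simple TCP dial from the `trafficcontroller` VM is enough to verify reachability. The sketch below uses a placeholder Director address; it only checks that the port is open, not that the metrics forwarder itself is working.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder BOSH Director address; run this check from the trafficcontroller VM.
	addr := "192.0.2.10:25595"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("cannot reach", addr, "-", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connection to", addr, "succeeded")
}
```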