Configuring Healthwatch

This topic describes how to manually configure and deploy the Healthwatch tile.

To install, configure, and deploy Healthwatch through an automated pipeline, see Installing, Configuring, and Deploying a Tile Through an Automated Pipeline.

Overview of Configuring and Deploying Healthwatch

The Healthwatch tile monitors metrics across multiple Ops Manager foundations by scraping metrics from Healthwatch Exporter tiles installed on each foundation. For more information about the architecture of the Healthwatch tile, see Healthwatch Tile in Healthwatch Architecture.

After installing Healthwatch, you configure Healthwatch component VMs, including the configuration files associated with them, through the tile UI. You can also configure errands and system logging, as well as scale VM instances up or down and configure load balancers for multiple VM instances.

To configure and deploy the Healthwatch tile:

Notes:
  • To quickly deploy the Healthwatch tile and confirm that it deploys successfully before you fully configure it, you only need to configure the Assign AZs and Networks pane.
  • If you are using Healthwatch to monitor foundations that are running Tanzu Kubernetes Grid Integrated Edition (TKGI), you must configure the TKGI Cluster Discovery Configuration pane.
  1. Navigate to the Healthwatch tile in the Ops Manager Installation Dashboard. For more information, see Navigate to the Healthwatch Tile below.

  2. Assign jobs to your Availability Zones (AZs) and networks. For more information, see Assign AZs and Networks below.

  3. (Optional) Configure the Prometheus Configuration pane. For more information, see (Optional) Configure Prometheus below.

  4. (Optional) Configure the Alertmanager Configuration pane. For more information, see (Optional) Configure Alertmanager below.

  5. (Optional) Configure the Grafana Configuration pane. For more information, see (Optional) Configure Grafana below.

  6. (Optional) Configure the Canary URL Configuration pane. For more information, see (Optional) Configure Canary URLs below.

  7. (Optional) Configure the Remote Write Configuration pane. For more information, see (Optional) Configure Remote Write below.

  8. (Optional) Configure the TKGI Cluster Discovery Configuration pane. For more information, see (Optional) Configure TKGI Cluster Discovery below.

  9. (Optional) Configure the Errands pane. For more information, see (Optional) Configure Errands below.

  10. (Optional) Configure the Syslog pane. For more information, see (Optional) Configure Syslog below.

  11. (Optional) Configure the Resource Config pane. For more information, see (Optional) Configure Resources below.

  12. Deploy Healthwatch. For more information, see Deploy Healthwatch below.

After you have configured and deployed the Healthwatch tile, you can configure and deploy the Healthwatch Exporter tiles for the foundations you want to monitor. For more information, see Next Steps below.

Navigate to the Healthwatch Tile

To navigate to the Healthwatch tile:

  1. Navigate to the Ops Manager Installation Dashboard.

  2. Click the Healthwatch tile.

Assign AZs and Networks

In the Assign AZs and Networks pane, you assign jobs to your AZs and networks.

To configure the Assign AZs and Networks pane:

  1. Select Assign AZs and Networks.

  2. Under Place singleton jobs in, select the first AZ. Ops Manager runs any job with a single instance in this AZ.

  3. Under Balance other jobs in, select one or more other AZs. Ops Manager balances instances of jobs with more than one instance across the AZs that you specify.

  4. From the Network dropdown, select the runtime network that you created when configuring the BOSH Director tile.

  5. Click Save.

(Optional) Configure Prometheus

In the Prometheus Configuration pane, you configure the Prometheus instance in the Healthwatch tile to scrape metrics from the Healthwatch Exporter tiles installed on each foundation, as well as any external services or databases from which you want to collect metrics.

The values that you enter in the fields in the Prometheus Configuration pane also define their corresponding properties in the scrape_config and tls_config sections of the Prometheus configuration file. For more information, see <scrape_config> and <tls_config> in Configuration in the Prometheus documentation.
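
For example, with the default scrape interval and one additional scrape job that uses all four TLS fields, the relevant sections of the generated Prometheus configuration file look similar to the following sketch. Healthwatch assembles this file for you; the sketch only shows where each pane value lands, and the job name, target, and file paths are illustrative placeholders:

    global:
      scrape_interval: 15s                 # Scrape Interval field

    scrape_configs:
      - job_name: external-service         # an Additional Scrape Config Job
        scheme: https
        static_configs:
          - targets:
            - "external-service.example.com:9090"
        tls_config:
          ca_file: /path/to/ca.pem         # TLS Config Certificate Authority
          cert_file: /path/to/cert.pem     # TLS Config Certificate and Private Key
          key_file: /path/to/key.pem
          server_name: external-service.example.com   # TLS Config Server Name
          insecure_skip_verify: false      # TLS Config Skip SSL Validation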

To configure the Prometheus Configuration pane:

  1. Select Prometheus Configuration.

  2. For Scrape Interval, specify the frequency at which you want the Prometheus instance to scrape Prometheus exposition endpoints for metrics. You can enter a value string that specifies ns, us, µs, ms, s, m, or h. To scrape detailed metrics without consuming too much storage, VMware recommends using the default value of 15s, or 15 seconds.

  3. (Optional) To configure the Prometheus instance to scrape metrics from the Healthwatch Exporter tiles installed on other Ops Manager foundations or from external services or databases, configure additional scrape jobs under Additional Scrape Config Jobs. You can configure scrape jobs for any app or service that exposes metrics using a Prometheus exposition format, such as Concourse CI. For more information about Prometheus exposition formats, see Exposition Formats in the Prometheus documentation.

    Note: The Prometheus instance automatically discovers and scrapes Healthwatch Exporter tiles that are installed on the same foundation as the Healthwatch tile. You do not need to configure scrape jobs for these Healthwatch Exporter tiles. You only need to configure scrape jobs for Healthwatch Exporter tiles that are installed on other foundations.

    1. Click Add.
    2. For TSDB Scrape job, provide the configuration YAML for the scrape job you want to configure. This job can use any of the properties defined by Prometheus except the tls_config property. Do not prefix the configuration YAML with a dash. For example:

      job_name: foundation-1
      metrics_path: /metrics
      scheme: https
      static_configs:
        - targets:
          - "1.2.3.4:9090"
          - "5.6.7.8:9090"

      For more information, see <scrape_config> in Configuration in the Prometheus documentation.

      Warning: For the job_name property, do not use the following job names:
      • Healthwatch-view-pas-exporter
      • Healthwatch-view-pks-exporter
      • tsdb
      • grafana
      • pks-master-kube-scheduler
      • pks-master-kube-controller-manager

    3. (Optional) To enable TLS communication between the Prometheus instance and your external service or database:

      1. For TLS Config Certificate Authority, provide a certificate authority (CA) that signs the certificates you provide in the TLS Config Certificate and Private Key field below. This CA appears as the ca_file property in the tls_config section of the Prometheus configuration file.
      2. For TLS Config Certificate and Private Key, provide at least one certificate and private key to enable TLS communication between the Prometheus instance and your external service or database. These certificates and private keys appear as the cert_file and key_file properties in the tls_config section of the Prometheus configuration file.
      3. For TLS Config Server Name, enter the name of the server that facilitates TLS communication between the Prometheus instance and your external service or database. This server name appears as the server_name property in the tls_config section of the Prometheus configuration file.
      4. If the certificate you provided in the TLS Config Certificate and Private Key field is signed by a self-signed CA or a certificate that is signed by a self-signed CA, enable the TLS Config Skip SSL Validation checkbox to skip SSL validation during TLS handshakes.
  4. For Chunk Size (Disk) MB, enter the size, in MB, of a chunk of free disk. The default value is 6144. Healthwatch uses this free disk chunk size to calculate the available disk chunks super value metric (SVM). If you rely on Pivotal Healthwatch v1.8 or earlier for any metrics, the Pivotal Healthwatch integration uses this SVM to calculate the Diego_AvailableFreeChunksDisk metric, then sends that metric back into Loggregator so third-party nozzles can forward it to external destinations, such as a remote server or an external aggregation service. For more information, see SVM Forwarder VM - Platform Metrics and SVM Forwarder VM - Healthwatch Component Metrics in Healthwatch Metrics.

  5. For Chunk Size (Memory) MB, enter the size, in MB, of a chunk of free memory. The default value is 4096. Healthwatch uses this free memory chunk size to calculate the available memory chunks SVM. If you rely on Pivotal Healthwatch v1.8 or earlier for any metrics, the Pivotal Healthwatch integration uses this SVM to calculate the Diego_AvailableFreeChunksMemory metric, then sends that metric back into Loggregator so third-party nozzles can forward it to external destinations, such as a remote server or an external aggregation service. For more information, see SVM Forwarder VM - Platform Metrics and SVM Forwarder VM - Healthwatch Component Metrics in Healthwatch Metrics.

  6. (Optional) For Static IPs for the Prometheus VM(s), enter a comma-separated list of valid static IP addresses that you want to reserve for the Prometheus instance. You must enter a separate IP address for each VM in the Prometheus instance. These IP addresses must not be within the reserved IP ranges you configured in the BOSH Director tile. To find the IP addresses of the VMs:

    1. Select the Status tab.
    2. From the IPs column, record the IP addresses of each VM listed in the TSDB row.

    Note: The Prometheus instance includes two VMs by default. For more information about viewing or scaling your VMs, see Healthwatch Components and Resource Requirements.

  7. Click Save.

(Optional) Configure Alertmanager

In the Alertmanager Configuration pane, you configure alerting for Healthwatch. To configure alerting for Healthwatch, you configure the alerting rules that Alertmanager follows and the alert receivers to which Alertmanager sends alerts.

To configure the Alertmanager Configuration pane, see Configuring Alerting.

(Optional) Configure Grafana

In the Grafana Configuration pane, you configure how users access and authenticate with the Grafana UI, as well as which dashboards appear in the Grafana UI. For more information about the Grafana UI as it relates to Healthwatch, see Healthwatch.

The values that you enter in the fields in the Grafana Configuration pane also define their corresponding properties in the Grafana configuration file. For more information, see Configuration in the Grafana documentation.

To configure the Grafana Configuration pane:

  1. Select Grafana Configuration.

  2. (Optional) If you configured generic OAuth or UAA authentication for users to log in to the Grafana UI, or if you configured alerts through Alertmanager, enter a URL for the Grafana UI in Root URL for Grafana. You must configure this URL to enable a generic OAuth provider or UAA to redirect users to the Grafana UI. Alertmanager also uses this URL to generate links to the Grafana UI in alert messages. This URL appears as the root_url property in the [server] section of the Grafana configuration file.

    Note: Healthwatch v2.1 does not automatically assign a default root URL to the Grafana UI. You must manually configure a root URL for the Grafana UI in the Root URL for Grafana field.

    After you deploy the Healthwatch tile for the first time, you must use this root URL and the public IP address of either a single Grafana VM or the load balancer associated with your Grafana instance to configure a DNS entry for your Grafana instance in the console for your IaaS. Your Grafana instance listens on either port 443 or 80, depending on whether you provide an SSL certificate in the Enable HTTPS by providing certificates field below. For more information about configuring DNS for your Grafana instance, see Configuring DNS for Your Grafana Instance.

  3. Under Enable HTTP(s) Proxy Settings for Grafana, choose whether to allow the Grafana instance to make HTTP and HTTPS proxy requests:

    1. To disable HTTP and HTTPS proxy requests, select Disabled. HTTP and HTTPS proxy settings are disabled by default.
    2. To configure proxy settings for the Grafana instance:
      1. Select Enabled.
      2. For HTTP Proxy for Grafana, enter your HTTP proxy server URL. The Grafana instance uses this URL as the proxy URL for all HTTP and HTTPS requests except those from hosts you configure in the HTTPS Proxy for Grafana and No Proxy for Grafana fields below.
      3. For HTTPS Proxy for Grafana, enter your HTTPS proxy server URL. The Grafana instance uses this URL as the proxy URL for all HTTPS requests except those from hosts you configure in the No Proxy for Grafana field below.
      4. For No Proxy for Grafana, enter a comma-separated list of the hosts you want to exclude from proxying. VMware recommends including *.bosh and the range of your internal network IP addresses so the Grafana instance can still access the Prometheus instance without going through the proxy. For example, *.bosh,10.0.0.0/8,*.example.com allows the Grafana instance to access all BOSH DNS addresses, all IP addresses in the 10.0.0.0/8 range, and all hosts under *.example.com directly, without going through the proxy.

        Note: You only need to configure proxy settings if you are deploying Healthwatch in an air-gapped environment and want to configure alert channels to external addresses, such as the external Slack webhook.

  4. (Optional) For Static IPs for the Grafana VM(s), enter a comma-separated list of valid static IP addresses that you want to reserve for the Grafana instance. These IP addresses must not be within the reserved IP ranges you configured in the BOSH Director tile.

  5. (Optional) To prevent users from logging in to the Grafana UI with basic authentication, including admin users, clear the Enable Grafana Login Form checkbox. This checkbox is enabled by default.

  6. Under Discover Product Dashboards, select how you want the Grafana instance to discover the runtimes in your foundations:

    • Dynamic: The Grafana instance creates a dashboard in the Grafana UI for the versions of VMware Tanzu Application Service for VMs (TAS for VMs) and TKGI that are currently installed on your foundations. This option is selected by default.
    • Manual: The Grafana instance creates a dashboard in the Grafana UI for the versions of TAS for VMs and TKGI you specify in the TAS Version to Monitor and TKGI Version to Monitor fields.
    • Disabled: The Grafana instance does not discover or create dashboards in the Grafana UI for TAS for VMs or TKGI.
  7. (Optional) If you want the Grafana instance to create a dashboard for MySQL, select the Enable MySQL dashboards checkbox. This checkbox is disabled by default.

  8. (Optional) If you want the Grafana instance to create a dashboard for RabbitMQ, select the Enable RabbitMQ dashboards checkbox. This checkbox is disabled by default.

  9. (Optional) To enable HTTPS for the Grafana instance, provide one or more SSL certificates in Enable HTTPS by providing certificates.

    VMware recommends also providing a certificate signed by a third-party CA in CA for SSL certificates. You can generate a self-signed certificate using the Ops Manager root CA, but doing so causes your browser to warn you that your CA is invalid every time you access the Grafana UI.

    • To use a certificate signed by a third-party CA:
      1. For Enable HTTPS by providing certificates, provide one or more SSL certificates.
      2. For CA for SSL certificates, provide the third-party CA that signs the SSL certificates you provided in the previous step.
    • To generate a self-signed certificate from the Ops Manager root CA:

      1. Under Enable HTTPS by providing certificates, click Change.
      2. Click Generate RSA Certificate.
      3. In the Generate RSA Certificate pop-up window, enter *.DOMAIN, where DOMAIN is the domain of the DNS entry that you configured for the Grafana instance. For example, if the DNS entry you configured for the Grafana instance is grafana.example.com, enter *.example.com. For more information about configuring a DNS entry for the Grafana instance, see Configuring DNS for Your Grafana Instance.
      4. Click Generate.
      5. SSH into the Ops Manager VM by following the procedure in Log In to the Ops Manager VM with SSH in Advanced Troubleshooting with the BOSH CLI in the Ops Manager documentation.
      6. Run:

        cat /var/tempest/workspaces/default/root_ca_certificate
        
      7. Record the Ops Manager root CA.

      8. For CA for SSL certificates, provide the Ops Manager root CA that you recorded in the previous step.

  10. (Optional) To configure an additional cipher suite for TLS connections to the Grafana instance, enter a comma-separated list of ciphers in Additional Cipher Suite Support, for example TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384. For a list of supported cipher suites, see cipher_suites.go in the Go repository on GitHub.

  11. Under Select an authentication mechanism for Grafana, select the user authentication method you want the Grafana instance to use.

  12. Under Enable SMTP for Grafana Alerts, choose whether to enable or disable email alerts from the Grafana UI.

    • To disable email alerts, select Disabled. Email alerts are disabled by default.
    • To enable email alerts:
      1. Select Enabled.
      2. For Host Name, enter the host name of your SMTP server. This host name appears as the host property in the [smtp] section of the Grafana configuration file.
      3. For Port, enter the port of your SMTP server. This port appears as the port property in the [smtp] section of the Grafana configuration file.
      4. For Username, enter your SMTP authentication username. This username appears as the user property in the [smtp] section of the Grafana configuration file.
      5. For Password, enter your SMTP authentication password. This password appears as the password property in the [smtp] section of the Grafana configuration file.
      6. (Optional) To enable the Grafana instance to skip SSL validation when communicating with your SMTP server over TLS, enable the Skip SSL Verification checkbox. This checkbox appears as the skip_verify property in the [smtp] section of the Grafana configuration file.
      7. For From Address, enter the sender email address that appears on outgoing email alerts. This email address appears as the from_address property in the [smtp] section of the Grafana configuration file.
      8. For From Name, enter the sender name that appears on outgoing email alerts. This name appears as the from_name property in the [smtp] section of the Grafana configuration file.
      9. For EHLO Identity, enter a name for the client identity that your SMTP server uses when sending EHLO commands. This name appears as the ehlo_identity property in the [smtp] section of the Grafana configuration file.
      10. For TLS Credentials, enter a certificate and private key to enable the Grafana instance to communicate with your SMTP server over TLS. This certificate and private key appear as the cert_file and key_file properties in the [smtp] section of the Grafana configuration file.

        For more information, see [smtp] in the Grafana documentation. A combined example of these properties appears after this procedure.
  13. Click Save.
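
For reference, the Root URL for Grafana field and the SMTP fields described above correspond to properties in the Grafana configuration file similar to the following sketch. Healthwatch writes this file for you; the domain, host, and credentials shown are illustrative placeholders:

    [server]
    root_url = https://grafana.example.com

    [smtp]
    enabled = true
    host = smtp.example.com:587               ; Host Name and Port fields
    user = grafana-alerts                     ; Username field
    password = example-password               ; Password field
    skip_verify = false                       ; Skip SSL Verification checkbox
    from_address = grafana-alerts@example.com ; From Address field
    from_name = Grafana                       ; From Name field
    ehlo_identity = grafana.example.com       ; EHLO Identity field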

(Optional) Configure Canary URLs

In the Canary URL Configuration pane, you configure target URLs to which the Blackbox Exporters in the Prometheus instance send canary tests. Testing a canary target URL allows you to gauge the overall health and accessibility of an app, runtime, or deployment.

The Canary URL Configuration pane configures the Blackbox Exporters in the Prometheus instance. For more information, see the Blackbox exporter repository on GitHub.
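
Healthwatch creates the probe scrape jobs for these canary tests automatically, but for context, a Prometheus scrape job that routes requests through a Blackbox Exporter typically follows the standard relabeling pattern shown in this sketch. The job name, probe module, and target URL are illustrative placeholders:

    job_name: canary-tests                 # illustrative name
    metrics_path: /probe
    params:
      module: [http_2xx]                   # probe over HTTP, expect a 2xx response
    static_configs:
      - targets:
        - https://apps.sys.example.com     # a configured Target URL
    relabel_configs:
      - source_labels: [__address__]       # pass the target URL as a probe parameter
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__          # scrape the exporter itself
        replacement: localhost:9115        # the Exporter Port value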

To configure the Canary URL Configuration pane:

  1. Select Canary URL Configuration.

  2. For Exporter Port, specify the port that the Blackbox Exporter exposes to the Prometheus instance. The default port is 9115. You do not need to specify a different port unless port 9115 is already in use on the Prometheus instance.

  3. (Optional) For Ops Manager URL, enter the fully-qualified domain name (FQDN) of your Ops Manager deployment. This creates a canary target URL that allows the Blackbox Exporter to test whether the Ops Manager Installation Dashboard is accessible. The results from these canary tests appear in the Ops Manager Health dashboard in the Grafana UI.

    Note: If you have SAML authentication enabled for Ops Manager, enter https://OPS-MANAGER-FQDN/api/v0/info, where OPS-MANAGER-FQDN is the FQDN of your Ops Manager deployment. Otherwise, the canary test fails.

  4. (Optional) If your Ops Manager deployment uses self-signed certificates, enable the Skip Ops Manager SSL Validation checkbox. Enabling this checkbox allows the Prometheus instance to communicate with your Ops Manager deployment.

  5. (Optional) Under Target URLs, you can configure canary target URLs. The Prometheus instance runs continuous canary tests to these URLs and records the results. To configure canary target URLs:

    1. Click Add.
    2. For HTTP(S) URL, enter the URL to which you want the Prometheus instance to send canary tests. VMware recommends including apps.sys.FOUNDATION-URL if you have TAS for VMs installed, or api.pks.FOUNDATION-URL:8443 if you have TKGI installed, where FOUNDATION-URL is the root URL of your foundation.

      Note: The Prometheus instance automatically creates scrape jobs for these URLs. You do not need to create additional scrape jobs for them in the Prometheus Configuration pane.

  6. Click Save.

(Optional) Configure Remote Write

In the Remote Write Configuration pane, you can configure the Prometheus instance to write to remote storage, in addition to its local time series database (TSDB). Healthwatch stores monitoring data for six weeks before deleting it. Configuring remote write enables Healthwatch to store data that is older than six weeks in a remote database or storage endpoint. For a list of compatible remote databases and storage endpoints, see Remote Endpoints and Storage in Integrations in the Prometheus documentation.

The values that you enter in the fields in the Remote Write Configuration pane also define their corresponding properties in the remote_write section of the Prometheus configuration file. For more information, see <remote_write> in Configuration in the Prometheus documentation.
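
For example, a remote_write section that uses basic authentication, TLS, a proxy, and tuned queue parameters looks similar to the following sketch. Healthwatch generates this section from the fields described below; the endpoint, credentials, file paths, and queue values shown are illustrative placeholders:

    remote_write:
      - url: https://remote-storage.example.com/api/v1/write
        remote_timeout: 30s                # Remote Timeout (seconds)
        basic_auth:
          username: healthwatch            # Basic Auth Username
          password: example-password       # Basic Auth Password
        tls_config:
          ca_file: /path/to/ca.pem
          cert_file: /path/to/cert.pem
          key_file: /path/to/key.pem
          server_name: remote-storage.example.com
          insecure_skip_verify: false
        proxy_url: http://proxy.example.com:8080
        queue_config:
          capacity: 2500                   # Queue Capacity
          max_shards: 200                  # Maximum Number of Shards
          min_shards: 1                    # Minimum Number of Shards
          max_samples_per_send: 500        # Maximum Number of Samples Per Send
          batch_send_deadline: 5s          # Batch Send Deadline (seconds)
          min_backoff: 30ms                # Minimum Backoff Time (milliseconds)
          max_backoff: 100ms               # Maximum Retry Delay (milliseconds)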

To configure the Remote Write Configuration pane:

  1. Select Remote Write Configuration.

  2. Click Add.

  3. For URL, enter the URL for your remote storage endpoint. For example, https://REMOTE-STORAGE-FQDN, where REMOTE-STORAGE-FQDN is the FQDN of your remote storage endpoint.

  4. In Remote Timeout (seconds), enter the amount of time, in seconds, that the Prometheus VM waits for a request to the remote storage endpoint to complete before the request fails.

  5. (Optional) To enable the Prometheus instance to write to your storage endpoint using basic authentication:

    1. For Basic Auth Username, enter the username that the Prometheus instance uses to log in to your remote storage endpoint.
    2. For Basic Auth Password, enter the password that the Prometheus instance uses to log in to your remote storage endpoint.
  6. (Optional) To enable TLS communication between the Prometheus instance and your remote storage endpoint:

    1. For TLS Config Certificate Authority, provide a CA that signs the certificates you provide in the TLS Config Certificate and Private Key field below. This CA appears as the ca_file property in the tls_config section of the remote_write configuration.
    2. For TLS Config Certificate and Private Key, provide at least one certificate and private key to enable TLS communication between the Prometheus instance and your remote storage endpoint. These certificates and private keys appear as the cert_file and key_file properties in the tls_config section of the remote_write configuration.
    3. For TLS Config Server Name, enter the name of the server that facilitates TLS communication between the Prometheus instance and your remote storage endpoint. This server name appears as the server_name property in the tls_config section of the remote_write configuration.
    4. If the certificate you provided in the TLS Config Certificate and Private Key field is signed by a self-signed CA or a certificate that is signed by a self-signed CA, enable the TLS Config Skip SSL Validation checkbox to skip SSL validation during TLS handshakes.
  7. (Optional) To enable the Prometheus instance to make HTTP or HTTPS proxy requests to the remote storage endpoint, enter a proxy URL in Proxy URL.

  8. You can configure more granular settings for writing to your remote storage endpoint by specifying additional parameters for the shards containing in-memory queues that read from the write-ahead log (WAL) in the Prometheus instance. The following fields configure the queue_config section of the remote write configuration:

    1. For Queue Capacity, enter the number of samples each shard can queue in memory before the Prometheus instance blocks the queue from reading from the WAL. This number appears as the capacity property in the queue_config section of the remote write configuration.
    2. For Maximum Number of Shards, enter the maximum number of shards the Prometheus instance can use for each remote write queue. This number appears as the max_shards property in the queue_config section of the remote write configuration.
    3. For Minimum Number of Shards, enter the minimum number of shards the Prometheus instance can use for each remote write queue. This number is also the number of shards the Prometheus VM uses when remote write begins after each deployment of the Healthwatch tile. This number appears as the min_shards property in the queue_config section of the remote write configuration.
    4. For Maximum Number of Samples Per Send, enter the maximum number of samples the Prometheus VM can send to a shard at a time. This number appears as the max_samples_per_send property in the queue_config section of the remote write configuration.
    5. For Batch Send Deadline (seconds), enter in seconds the maximum amount of time the Prometheus instance can wait before sending a batch of samples to a shard, whether that shard has reached the limit configured in Maximum Number of Samples Per Send or not. This number appears as the batch_send_deadline property in the queue_config section of the remote write configuration.
    6. For Minimum Backoff Time (milliseconds), enter in milliseconds the minimum amount of time the Prometheus instance can wait before retrying a failed request to the remote storage endpoint. This number appears as the min_backoff property in the queue_config section of the remote write configuration.
    7. For Maximum Retry Delay (milliseconds), enter in milliseconds the maximum amount of time the Prometheus instance can wait before retrying a failed request to the remote storage endpoint. This number appears as the max_backoff property in the queue_config section of the remote write configuration.

      For more information about configuring these queue parameters, see Remote Write Tuning in the Prometheus documentation.
  9. Click Save.

(Optional) Configure TKGI Cluster Discovery

In the TKGI Cluster Discovery Configuration pane, you configure TKGI cluster discovery for Healthwatch. You only need to configure this pane if you have foundations with TKGI installed.

To configure TKGI cluster discovery, see Configuring TKGI Cluster Discovery.

(Optional) Configure Errands

Errands are scripts that Ops Manager runs automatically when it installs or uninstalls a product, such as a new version of Healthwatch. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled. Healthwatch does not have any pre-delete errands.

By default, Ops Manager always runs all errands.

In the Errands pane, you can select On to always run an errand or Off to never run it.

For more information about how Ops Manager manages errands, see Managing Errands in Ops Manager in the Ops Manager documentation.

To configure the Errands pane:

  1. Select Errands.

  2. (Optional) Choose whether to always run or never run the following errands:

    • Smoke Test Errand: Verifies that the Grafana and Prometheus instances are running.
    • Update Grafana Admin Password: Updates the admin password for the Grafana UI.
  3. Click Save.

(Optional) Configure Syslog

In the Syslog pane, you can configure system logging in Healthwatch to forward log messages from Healthwatch component VMs to an external destination for troubleshooting, such as a remote server or external syslog aggregation service.

To configure the Syslog pane:

  1. Select Syslog.

  2. Under Do you want to configure Syslog forwarding?, select one of the following options:

    • No, do not forward Syslog: Disables syslog forwarding.
    • Yes: Enables syslog forwarding and allows you to edit the configuration fields described below.
  3. For Address, enter the IP address or DNS domain name of your external destination.

  4. For Port, enter a port on which your external destination listens.

  5. For Transport Protocol, select TCP or UDP from the dropdown. This determines which transport protocol Healthwatch uses to forward system logs to your external destination.

  6. (Optional) To transmit logs over TLS:

    1. Select the Enable TLS checkbox. This checkbox is disabled by default.
    2. For Permitted Peer, enter either the name or SHA1 fingerprint of the remote peer.
    3. For SSL Certificate, enter the SSL certificate for your external destination.
  7. (Optional) For Queue Size, specify the number of log messages Healthwatch can hold in a buffer at a time before sending them to your external destination. The default value is 100000.

  8. (Optional) To forward debug logs to your external destination, enable the Forward Debug Logs checkbox. This checkbox is disabled by default.

  9. (Optional) To specify a custom syslog rule, enter it in Custom rsyslog configuration in RainerScript syntax; a brief example appears after this procedure. For more information about custom syslog rules, see Customizing Platform Log Forwarding in the TAS for VMs documentation. For more information about RainerScript syntax, see the rsyslog documentation.

  10. Click Save Syslog Settings.
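
For example, a minimal custom rule in RainerScript syntax that discards forwarded log messages containing the string DEBUG might look like the following sketch:

    if ($msg contains "DEBUG") then stop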

(Optional) Configure Resources

In the Resource Config pane, you can scale Healthwatch component VMs up or down according to the needs of your deployment, as well as associate load balancers with a group of VMs. For example, you can scale the persistent disk size of the Prometheus instance to enable longer data retention.

To configure the Resource Config pane:

  1. Select Resource Config.

  2. (Optional) To scale a job, select an option from the dropdown for the resource you want to modify:

    • Instances: Configures the number of instances each job has.
    • VM Type: Configures the type of VM used in each instance.
    • Persistent Disk Type: Configures the amount of persistent disk space to allocate to the job.
  3. (Optional) To add a load balancer to a job:

    1. Click the icon next to the job name.
    2. For Load Balancers, enter the name of your load balancer.
    3. Ensure that the Internet Connected checkbox is disabled. Enabling this checkbox gives VMs a public IP address that enables outbound Internet access.
  4. Click Save.

Deploy Healthwatch

To complete your installation of the Healthwatch tile:

  1. Return to the Ops Manager Installation Dashboard.

  2. Click Review Pending Changes.

  3. Click Apply Changes.

For more information, see Reviewing Pending Product Changes in the Ops Manager documentation.

Next Steps

After you have successfully installed the Healthwatch tile, continue to one of the following topics to configure and deploy the Healthwatch Exporter tiles for the foundations you want to monitor: