Configuring Healthwatch Exporter for TKGI

This topic describes how to manually configure and deploy the Healthwatch Exporter for Tanzu Kubernetes Grid Integrated Edition (TKGI) tile.

To install, configure, and deploy Healthwatch Exporter for TKGI through an automated pipeline, see Installing, Configuring, and Deploying a Tile Through an Automated Pipeline.

Overview of Configuring and Deploying Healthwatch Exporter for TKGI

When installed on a foundation you want to monitor, Healthwatch Exporter for TKGI deploys metric exporter VMs to generate service level indicators (SLIs) related to the health of your TKGI deployment. The Prometheus instance that exists within your metrics monitoring system then scrapes the Prometheus exposition endpoints on the metric exporter VMs and imports those metrics into your monitoring system. For more information about the architecture of the Healthwatch Exporter for TKGI tile, see Healthwatch Exporter for TKGI in Reference Architecture.

After installing Healthwatch Exporter for TKGI, you configure the metric exporter VMs deployed by Healthwatch Exporter for TKGI through the tile UI. You can also configure errands and system logging, as well as scale VM instances up or down and configure load balancers for multiple VM instances.

To configure and deploy the Healthwatch Exporter for TKGI tile:

Note: If you want to quickly deploy the Healthwatch Exporter for TKGI tile to ensure that it deploys successfully before you fully configure it, you only need to configure the Assign AZs and Networks and BOSH Health Exporter Configuration panes.

  1. Ensure that you meet the prerequisite for configuring and deploying Healthwatch Exporter for TKGI. For more information, see Prerequisite below.

  2. Navigate to the Healthwatch Exporter for TKGI tile in the Ops Manager Installation Dashboard. For more information, see Navigate to the Healthwatch Exporter for TKGI Tile below.

  3. Assign jobs to your Availability Zones (AZs) and networks. For more information, see Assign AZs and Networks below.

  4. (Optional) Configure the TKGI Exporter Configuration pane. For more information, see (Optional) Configure TKGI and Certificate Expiration Metric Exporter VMs below.

  5. (Optional) Configure the TKGI SLI Exporter Configuration pane. For more information, see (Optional) Configure the TKGI SLI Exporter VM below.

  6. Configure the BOSH Health Exporter Configuration pane. For more information, see Configure the BOSH Health Metric Exporter VM below.

  7. (Optional) Configure the Bosh Deployments Exporter Configuration pane. For more information, see (Optional) Configure the BOSH Deployment Metric Exporter VM below.

  8. (Optional) Configure the Errands pane. For more information, see (Optional) Configure Errands below.

  9. (Optional) Configure the Syslog pane. For more information, see (Optional) Configure Syslog below.

  10. (Optional) Configure the Resource Config pane. For more information, see (Optional) Configure Resources below.

  11. Deploy the Healthwatch Exporter for TKGI tile through the Ops Manager Installation Dashboard. For more information, see Deploy Healthwatch Exporter for TKGI below.

  12. Once you have finished installing, configuring, and deploying Healthwatch Exporter for TKGI, configure a scrape job for Healthwatch Exporter for TKGI in the Prometheus instance that exists within your monitoring system. For more information, see Configure a Scrape Job for Healthwatch Exporter for TKGI below.

    Note: You only need to configure a scrape job for installations of Healthwatch Exporter for TKGI that are not on the same foundation as your Healthwatch tile. The Prometheus instance in the Healthwatch tile automatically discovers and scrapes Healthwatch Exporter tiles that are installed on the same foundation as the Healthwatch tile.

Prerequisite

Before you deploy Healthwatch Exporter for TKGI, you must have a metrics monitoring system that collects metrics using a Prometheus instance. This monitoring system can be one of the following:

  • The Healthwatch tile installed on either the same Ops Manager foundation as Healthwatch Exporter for TKGI or a different Ops Manager foundation.

  • A service or database located outside your Ops Manager foundation, such as an external time series database (TSDB) or an installation of the Healthwatch tile on the TKGI Control Plane.

Navigate to the Healthwatch Exporter for TKGI Tile

To navigate to the Healthwatch Exporter for TKGI tile:

  1. Navigate to the Ops Manager Installation Dashboard.

  2. Click the Healthwatch Exporter for Tanzu Kubernetes Grid - Integrated tile.

Assign AZs and Networks

In the Assign AZs and Networks pane, you assign jobs to your AZs and networks.

To configure the Assign AZs and Networks pane:

  1. Select Assign AZs and Networks.

  2. Under Place singleton jobs in, select the first AZ. Ops Manager runs any job with a single instance in this AZ.

  3. Under Balance other jobs in, select one or more other AZs. Ops Manager balances instances of jobs with more than one instance across the AZs that you specify.

  4. From the Network dropdown, select the runtime network that you created when configuring the BOSH Director tile.

  5. Click Save.

(Optional) Configure TKGI and Certificate Expiration Metric Exporter VMs

In the TKGI Exporter Configuration pane, you configure static IP addresses for the TKGI metric exporter and certificate expiration metric exporter VMs. After generating these metrics, the metric exporter VMs expose them in Prometheus exposition format on a secured endpoint.

To configure the TKGI Exporter Configuration pane:

Warning: The IP addresses you configure in the TKGI Exporter Configuration pane must not be within the reserved IP ranges you configured in the BOSH Director tile.

  1. Select TKGI Exporter Configuration.

  2. (Optional) For Static IP for TKGI Exporter VM, enter a valid static IP address that you want to reserve for the TKGI metric exporter VM. The TKGI metric exporter VM collects health metrics from the BOSH Director. For more information, see TKGI Metric Exporter VM in Healthwatch Metrics.

  3. (Optional) For Static IP for Cert Expiration Exporter VM, enter a valid static IP address that you want to reserve for the certificate expiration metric exporter VM. The certificate expiration metric exporter VM collects metrics that show when certificates in your Ops Manager deployment are due to expire. For more information, see Certificate Expiration Metric Exporter VM in Healthwatch Metrics and Monitoring Certificate Expiration.

    Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for Tanzu Application Service for VMs (TAS for VMs) installed on the same foundation, scale the certificate expiration metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two certificate expiration metric exporter VMs create redundant sets of metrics.

  4. (Optional) If your Ops Manager deployment uses self-signed certificates, enable the Skip SSL Validation for Cert Expiration checkbox to enable the certificate expiration metric exporter VM to communicate with your Ops Manager deployment. This checkbox is disabled by default.

  5. Click Save.
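
After you deploy the tile, you can spot-check that a metric exporter VM is serving metrics in Prometheus exposition format. The following is a minimal sketch rather than an official procedure: the IP address is a placeholder for the static IP you reserved above, and the port is an assumption that you must replace with the TCP port listed for the exporter in Required Networking Rules for Healthwatch Exporter for TKGI in Reference Architecture.

    # Fetch the exporter's Prometheus exposition endpoint. 10.0.16.10 and
    # 9090 are placeholders for the exporter VM's IP address and TCP port.
    # The -k flag skips certificate validation; omit it if the endpoint's
    # CA is trusted locally.
    curl -k https://10.0.16.10:9090/metrics

A healthy exporter returns plain-text metrics: # HELP and # TYPE comment lines followed by metric names and values.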

(Optional) Configure the TKGI SLI Exporter VM

In the TKGI SLI Exporter Configuration pane, you configure the TKGI SLI exporter VM. The TKGI SLI exporter VM generates SLIs that allow you to monitor whether the core functions of the TKGI Command-Line Interface (TKGI CLI) are working as expected. The TKGI CLI enables developers to create and manage Kubernetes clusters through TKGI. For more information, see TKGI SLI Exporter VM in Healthwatch Metrics.

To configure the TKGI SLI Exporter Configuration pane:

  1. Select TKGI SLI Exporter Configuration.

  2. (Optional) For Static IP for TKGI SLI Exporter VM, enter a valid static IP address that you want to reserve for the TKGI SLI exporter VM. This IP address must not be within the reserved IP ranges you configured in the BOSH Director tile.

  3. For Test Frequency in Seconds, enter the frequency, in seconds, at which you want the TKGI SLI exporter VM to run SLI tests.

  4. (Optional) To enable TLS communication between the TKGI SLI exporter VM and the TKGI API, choose one of the following options:

    • To configure the TKGI SLI exporter VM to use a self-signed certificate authority (CA), or a certificate signed by a self-signed CA, when communicating with the TKGI API over TLS:
      1. For TKGI API Certificate Authority, provide the CA. If you provide a self-signed CA, it must be the same CA that signs the certificate in the TKGI API. To inspect which CA signs that certificate, see the sketch after this procedure.
      2. Providing a CA in this field makes the TKGI API Skip SSL Validation checkbox configurable. Disable the TKGI API Skip SSL Validation checkbox.
    • To configure the TKGI SLI exporter VM to skip SSL validation when communicating with the TKGI API over TLS, leave TKGI API Certificate Authority blank. The TKGI API Skip SSL Validation checkbox is enabled and not configurable by default. VMware does not recommend skipping SSL validation in a production environment.
  5. Click Save.
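
If you are unsure which CA signs the certificate in the TKGI API, you can inspect the certificate chain that the API presents. The following is a sketch under assumptions: api.tkgi.example.com is a placeholder for your TKGI API fully qualified domain name, and 9021 is the port on which the TKGI API conventionally listens.

    # Print the certificate chain that the TKGI API presents so that you
    # can identify the signing CA. Redirecting /dev/null to stdin closes
    # the connection after the handshake instead of waiting for input.
    openssl s_client -connect api.tkgi.example.com:9021 -showcerts </dev/null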

Configure the BOSH Health Metric Exporter VM

In the BOSH Health Exporter Configuration pane, you configure the AZ and VM type of the BOSH health metric exporter VM. Healthwatch Exporter for TKGI deploys the BOSH health metric exporter VM, which creates a BOSH deployment called bosh-health every ten minutes. The bosh-health deployment deploys another VM, bosh-health-check, that runs a suite of SLI tests to validate the functionality of the BOSH Director. After the SLI tests are complete, the BOSH health metric exporter VM collects the metrics from the bosh-health-check VM, then deletes the bosh-health deployment and the bosh-health-check VM. For more information, see BOSH Health Metric Exporter VM in Healthwatch Metrics.

To configure the BOSH Health Exporter Configuration pane:

  1. Select BOSH Health Exporter Configuration.

  2. Under BOSH Health Check Availability Zone, select the AZ on which you want Healthwatch Exporter for TKGI to deploy the BOSH health metric exporter VM.

  3. Under BOSH Health Check VM Type, select from the dropdown the type of VM you want Healthwatch Exporter for TKGI to deploy.

  4. Click Save.

Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for TAS for VMs installed on the same foundation, scale the BOSH health metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two sets of BOSH health metric exporter VM metrics cause a 401 error in your BOSH Director deployment, and one set of metrics reports that the BOSH Director is down in the Grafana UI. For more information, see BOSH Health Metrics Cause Errors When Two Healthwatch Exporter Tiles Are Installed in Troubleshooting Healthwatch.
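
Because the bosh-health deployment exists only while the SLI tests run, you can observe the BOSH health metric exporter VM cycling by listing deployments with the BOSH CLI. The following is a minimal sketch, assuming you have already logged in to the BOSH Director and created an environment alias named director:

    # List all deployments. A transient deployment named bosh-health
    # appears roughly every ten minutes while the SLI tests run, then
    # disappears after the BOSH health metric exporter VM deletes it.
    bosh -e director deployments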

(Optional) Configure the BOSH Deployment Metric Exporter VM

In the Bosh Deployments Exporter Configuration pane, you configure the authentication credentials and a static IP address for the BOSH deployment metric exporter VM. This VM checks every 30 seconds whether any BOSH deployments other than the one created by the BOSH health metric exporter VM are running. For more information, see BOSH Deployment Metric Exporter VM in Healthwatch Metrics.

To configure the Bosh Deployments Exporter Configuration pane:

  1. Select Bosh Deployments Exporter Configuration.

  2. (Optional) For Bosh Client Username and Secret, enter the username and secret for the UAA client that the BOSH deployment metric exporter VM uses to access the BOSH Director VM. For more information, see Create a UAA Client for the BOSH Deployment Metric Exporter VM below.

  3. (Optional) For Static IP for Bosh Deployments Exporter VM, enter a valid static IP address that you want to reserve for the BOSH deployment metric exporter VM. This IP address must not be within the reserved IP ranges you configured in the BOSH Director tile.

  4. Click Save.

Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for TAS for VMs installed on the same foundation, scale the BOSH deployment metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two BOSH deployment metric exporter VMs create redundant sets of metrics.

Create a UAA Client for the BOSH Deployment Metric Exporter VM

To enable the BOSH deployment metric exporter VM to access the BOSH Director VM, you must create a new UAA client for the BOSH deployment metric exporter VM. The procedure to create this UAA client differs depending on the authentication settings of your Ops Manager deployment.

To create a UAA client for the BOSH deployment metric exporter VM:

  1. Return to the Ops Manager Installation Dashboard.

  2. Record the IP address for the BOSH Director VM and the login and admin credentials for the BOSH Director UAA instance:

    • If your Ops Manager deployment uses internal authentication:
      1. Click the BOSH Director tile.
      2. Select the Status tab.
      3. Record the IP address in the IPs column of the BOSH Director row.
      4. Select the Credentials tab.
      5. In the Uaa Admin Client Credentials row of the BOSH Director section, click Link to Credential.
      6. Record the value of password. This value is the secret for Uaa Admin Client Credentials.
      7. Return to the Credentials tab.
      8. In the Uaa Login Client Credentials row of the BOSH Director section, click Link to Credential.
      9. Record the value of password. This value is the secret for Uaa Login Client Credentials.

        For more information about internal authentication settings for your Ops Manager deployment, see Internal Authentication Settings in Using the Ops Manager Interface in the Ops Manager documentation.
    • If your Ops Manager deployment uses SAML authentication:
      1. Click the user account menu in the upper-right corner of the Ops Manager Installation Dashboard.
      2. Click Settings.
      3. Select SAML Settings.
      4. Enable the Provision an Admin Client in the BOSH UAA checkbox.
      5. Click Enable SAML Authentication.
      6. Return to the Ops Manager Installation Dashboard.
      7. Click the BOSH Director tile.
      8. Select the Status tab.
      9. Record the IP address in the IPs column of the BOSH Director row.
      10. Select the Credentials tab.
      11. In the Uaa Bosh Client Credentials row of the BOSH Director section, click Link to Credential.
      12. Record the value of password. This value is the secret for Uaa Bosh Client Credentials.

        For more information about SAML authentication settings for your Ops Manager deployment, see SAML Settings in Using the Ops Manager Interface in the Ops Manager documentation.
    • If your Ops Manager deployment uses LDAP authentication:
      1. Click the user account menu in the upper-right corner of the Ops Manager Installation Dashboard.
      2. Click Settings.
      3. Select LDAP Settings.
      4. Enable the Provision an Admin Client in the BOSH UAA checkbox.
      5. Click Enable LDAP Authentication.
      6. Return to the Ops Manager Installation Dashboard.
      7. Click the BOSH Director tile.
      8. Select the Status tab.
      9. Record the IP address in the IPs column of the BOSH Director row.
      10. Select the Credentials tab.
      11. In the Uaa Bosh Client Credentials row of the BOSH Director section, click Link to Credential.
      12. Record the value of password. This value is the secret for Uaa Bosh Client Credentials.

        For more information about LDAP authentication settings for your Ops Manager deployment, see LDAP Settings in Using the Ops Manager Interface in the Ops Manager documentation.
  3. SSH into the Ops Manager VM by following the procedure in Log In to the Ops Manager VM with SSH in Advanced Troubleshooting with the BOSH CLI in the Ops Manager documentation.

  4. Target the UAA instance for the BOSH Director by running:

    uaac target https://BOSH-DIRECTOR-IP:8443 --skip-ssl-validation
    

    Where BOSH-DIRECTOR-IP is the IP address for the BOSH Director VM that you recorded from the Status tab in the BOSH Director tile in a previous step.

  5. Log in to the UAA instance:

    • If your Ops Manager deployment uses internal authentication, log in to the UAA instance by running:

      uaac token owner get login -s UAA-LOGIN-CLIENT-SECRET
      

      Where UAA-LOGIN-CLIENT-SECRET is the secret you recorded from the Uaa Login Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

    • If your Ops Manager deployment uses SAML or LDAP, log in to the UAA instance by running:

      uaac token client get bosh_admin_client -s BOSH-UAA-CLIENT-SECRET
      

      Where BOSH-UAA-CLIENT-SECRET is the secret you recorded from the Uaa Bosh Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

  6. When prompted, enter the UAA admin client username admin and the secret you recorded from the Uaa Admin Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

  7. Create a UAA client for the BOSH deployment metric exporter VM by running:

    uaac client add CLIENT-USERNAME \
     --secret CLIENT-SECRET \
     --authorized_grant_types client_credentials,refresh_token \
     --authorities bosh.read \
     --scope bosh.read
    

    Where:

    • CLIENT-USERNAME is the username you want to set for the UAA client.
    • CLIENT-SECRET is the secret you want to set for the UAA client.

    After you create the client, you can verify it; see the sketch at the end of this procedure.
  8. Return to the Ops Manager Installation Dashboard.

  9. Click the Healthwatch Exporter for Tanzu Kubernetes Grid - Integrated tile.

  10. Select Bosh Deployments Exporter Configuration.

  11. For Bosh Client Username and Secret, enter the username and secret for the UAA client you just created.
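
To verify the UAA client that you created, you can read its registration back with the UAA CLI. The following is a minimal sketch; CLIENT-USERNAME is the username you set when you created the client:

    # Display the client registration, including its scope, authorities,
    # and authorized grant types, to confirm they match what you set.
    uaac client get CLIENT-USERNAME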

(Optional) Configure Errands

Errands are scripts that Ops Manager runs automatically when it installs or uninstalls a product, such as a new version of Healthwatch Exporter for TKGI. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled. Healthwatch Exporter for TKGI has no pre-delete errands.

By default, Ops Manager always runs all errands.

In the Errands pane, you can select On to always run an errand or Off to never run it.

For more information about how Ops Manager manages errands, see Managing Errands in Ops Manager in the Ops Manager documentation.

To configure the Errands pane:

  1. Select Errands.

  2. (Optional) Choose whether to always run or never run the Smoke Tests errand. This errand verifies that the metric exporter VMs are running.

  3. Click Save.

(Optional) Configure Syslog

In the Syslog pane, you can configure system logging in Healthwatch Exporter for TKGI to forward log messages from tile component VMs to an external destination for troubleshooting, such as a remote server or external syslog aggregation service.

To configure the Syslog pane:

  1. Select Syslog.

  2. Under Do you want to configure Syslog forwarding?, select one of the following options:

    • No, do not forward Syslog: Disables syslog forwarding.
    • Yes: Enables syslog forwarding and allows you to edit the configuration fields described below.
  3. For Address, enter the IP address or DNS domain name of your external destination. To confirm that the destination is reachable once you finish configuring this pane, see the sketch after this procedure.

  4. For Port, enter a port on which your external destination listens.

  5. For Transport Protocol, select TCP or UDP from the dropdown. This determines which transport protocol Healthwatch Exporter for TKGI uses to forward system logs to your external destination.

  6. (Optional) To transmit logs over TLS:

    1. Select the Enable TLS checkbox. This checkbox is disabled by default.
    2. For Permitted Peer, enter either the name or SHA1 fingerprint of the remote peer.
    3. For SSL Certificate, enter the SSL certificate for your external destination.
  7. (Optional) For Queue Size, specify the number of log messages Healthwatch Exporter for TKGI can hold in a buffer at a time before sending them to your external destination. The default value is 100000.

  8. (Optional) To forward debug logs to your external destination, enable the Forward Debug Logs checkbox. This checkbox is disabled by default.

  9. (Optional) To specify a custom syslog rule, enter it in Custom rsyslog configuration in RainerScript syntax. For more information about custom syslog rules, see Customizing Platform Log Forwarding in the TAS for VMs documentation. For more information about RainerScript syntax, see the rsyslog documentation.

  10. Click Save Syslog Settings.
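
After you configure these settings, you can confirm that your external destination is reachable on the address, port, and transport protocol you entered. The following is a rough sketch using common command-line tools; syslog.example.com, 514, and 6514 are placeholders for your destination's address and ports:

    # Check that the destination accepts TCP connections on the port.
    nc -vz syslog.example.com 514

    # If you enabled TLS, confirm that the destination presents a
    # certificate consistent with your Permitted Peer and SSL Certificate
    # settings.
    openssl s_client -connect syslog.example.com:6514 </dev/null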

(Optional) Configure Resources

In the Resource Config pane, you can scale the VMs in Healthwatch Exporter for TKGI up or down according to the needs of your deployment, as well as associate load balancers with a group of VMs. For example, you can increase the persistent disk size of a metric exporter VM to enable longer data retention.

To configure the Resource Config pane:

  1. Select Resource Config.

  2. (Optional) To scale a job, select an option from the dropdown for the resource you want to modify:

    • Instances: Configures the number of instances each job has.
    • VM Type: Configures the type of VM used in each instance.
    • Persistent Disk Type: Configures the amount of persistent disk space to allocate to the job.
  3. (Optional) To add a load balancer to a job:

    1. Click the icon next to the job name.
    2. For Load Balancers, enter the name of your load balancer.
    3. Ensure that the Internet Connected checkbox is disabled. Enabling this checkbox gives VMs a public IP address that enables outbound Internet access.
  4. Click Save.

Deploy Healthwatch Exporter for TKGI

To complete your installation of the Healthwatch Exporter for TKGI tile:

  1. Return to the Ops Manager Installation Dashboard.

  2. Click Review Pending Changes.

  3. Click Apply Changes.

For more information, see Reviewing Pending Product Changes in the Ops Manager documentation.

Configure a Scrape Job for Healthwatch Exporter for TKGI

After you have successfully deployed Healthwatch Exporter for TKGI, you must configure a scrape job in the Prometheus instance that exists within your metrics monitoring system, unless you installed Healthwatch Exporter for TKGI on the same Ops Manager foundation as the Healthwatch tile. Follow the procedure in one of the following sections, depending on which monitoring system you use:

Configure a Scrape Job for Healthwatch Exporter for TKGI in Healthwatch

To configure a scrape job for Healthwatch Exporter for TKGI in the Healthwatch tile on your Ops Manager foundation, see (Optional) Configure Prometheus in Configuring Healthwatch.

Configure a Scrape Job for Healthwatch Exporter for TKGI in an External Monitoring System

To configure a scrape job for Healthwatch Exporter for TKGI in a service or database that is located outside your Ops Manager foundation:

  1. Open network communication paths from your external service or database to the metric exporter VMs in Healthwatch Exporter for TKGI. The procedure to open these network paths differs depending on your Ops Manager foundation’s IaaS. For a list of TCP ports used by each metric exporter VM, see Required Networking Rules for Healthwatch Exporter for TKGI in Reference Architecture.

  2. In the scrape_configs section of the Prometheus configuration file, create a scrape job for your Ops Manager foundation. Under static_configs, specify the IP addresses and TCP ports of the metric exporter VMs as static targets. For example:

    scrape_configs:
    - job_name: foundation-1
      metrics_path: /metrics
      scheme: https
      static_configs:
      - targets:
        - "1.2.3.4:8443"
        - "1.2.3.4:25555"
        - "1.2.3.4:443"
        - "1.2.3.4:25595"
        - "1.2.3.4:9021"
    

    For more information, see <scrape_config> and <static_config> in Configuration in the Prometheus documentation.
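
After you add the scrape job, you can validate the Prometheus configuration file before reloading Prometheus. The following is a minimal sketch using promtool, the validation tool that ships with Prometheus; prometheus.yml is a placeholder for the path to your configuration file:

    # Check the configuration file, including the new scrape job, for
    # syntax and semantic errors.
    promtool check config prometheus.yml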