Installing and Configuring GCP Stackdriver Nozzle for PCF

This topic describes how to install and configure GCP Stackdriver Nozzle for Pivotal Cloud Foundry (PCF).

Prepare a GCP Project

The GCP Stackdriver Nozzle for PCF requires that you have a GCP project with a specific configuration. Follow the steps below to prepare a GCP project.

Create the Project

To create a GCP project for Stackdriver Nozzle, do the following:

  1. In a browser, navigate to the GCP Console. If you do not have an account, create one.

  2. From the GCP console, click the project drop-down menu between the GCP logo and the search bar and select Create Project.

  3. Enter a project name and click Create.
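
If you prefer working from the command line, the same project can be created with the gcloud CLI. This is an optional sketch rather than part of the official steps; YOUR-PROJECT-ID and the project name are placeholders to replace with your own values:

    $ gcloud projects create YOUR-PROJECT-ID --name "Stackdriver Nozzle"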

Enable APIs

To enable the APIs required by Stackdriver Nozzle, do the following:

  1. Navigate to the Stackdriver Logging API page and click Enable API.

  2. Navigate to the Stackdriver Monitoring API page and click Enable API.
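
Alternatively, both APIs can be enabled from the command line. This assumes the gcloud CLI is installed and authenticated against the project you created above:

    $ gcloud services enable logging.googleapis.com monitoring.googleapis.com \
        --project YOUR-PROJECT-ID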

Create a Service Account

To create a GCP Service Account for Stackdriver Nozzle, do the following:

  1. In the GCP console, open the Products and services menu above the home icon and select IAM & Admin > Service accounts.

  2. Click Create Service Account.

  3. Enter a Service account name.

  4. From the Role drop-down menu, select Logging > Logs Configuration Writer, Logging > Logs Writer, and Project > Editor.

  5. Select the checkbox to Furnish a new Private Key, and click Create.

  6. Save the automatically downloaded key file to a secure location for use later in this topic.
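
If you prefer to script this setup, an equivalent service account, role bindings, and key can be created with the gcloud CLI. The commands below are a sketch rather than part of the official steps; the account name stackdriver-nozzle, the key file name, and YOUR-PROJECT-ID are placeholders:

    $ gcloud iam service-accounts create stackdriver-nozzle \
        --display-name "Stackdriver Nozzle" --project YOUR-PROJECT-ID

    $ gcloud projects add-iam-policy-binding YOUR-PROJECT-ID \
        --member "serviceAccount:stackdriver-nozzle@YOUR-PROJECT-ID.iam.gserviceaccount.com" \
        --role roles/logging.configWriter

Repeat the add-iam-policy-binding command for roles/logging.logWriter and roles/editor, then download a JSON key. The contents of this key file are what you later paste into the Service Account Credentials field of the tile:

    $ gcloud iam service-accounts keys create stackdriver-nozzle-key.json \
        --iam-account stackdriver-nozzle@YOUR-PROJECT-ID.iam.gserviceaccount.com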

(Optional) Create a UAA User Account

If you are using Elastic Runtime v1.9.29+, v1.10.16+, v1.11.2+, or v1.12+, skip this procedure. If you are using an earlier version of Pivotal Elastic Runtime, follow the steps below.

To create a UAA user with access to the Firehose and Cloud Controller, do the following:

  1. Target your system with the UAA CLI (UAAC):

    $ uaac target https://uaa.YOUR-SYSTEM-DOMAIN

  2. Run the following command to authenticate and obtain an access token for the admin client from the UAA server.

    $ uaac token client get admin -s ADMIN-CLIENT-CREDENTIALS-SECRET

  3. Create a Stackdriver Nozzle user with the password of your choosing.

    $ uaac -t user add stackdriver-nozzle --password PASSWORD --emails na

  4. Add the user to the Cloud Controller Admin Read-Only group.

    $ uaac -t member add cloud_controller.admin_read_only stackdriver-nozzle

  5. Add the user to the Doppler Firehose group.

    $ uaac -t member add doppler.firehose stackdriver-nozzle
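
As an optional sanity check, and not part of the official steps, you can confirm that the group memberships took effect by looking up the user while still authenticated as the admin client. The exact output format depends on your UAA version, but the user's groups should include cloud_controller.admin_read_only and doppler.firehose:

    $ uaac -t user get stackdriver-nozzle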

For information about creating a UAA user, see Creating and Managing Users with the UAA CLI.

Install GCP Stackdriver Nozzle for PCF in Ops Manager

Import to Ops Manager

Follow these steps to download the product file and add it to your Ops Manager Installation Dashboard.

  1. Download the product file from Pivotal Network.

  2. Navigate to the Ops Manager Installation Dashboard and click Import a Product to upload the product file.

  3. Click Add next to the uploaded GCP Stackdriver Nozzle for PCF tile in the Ops Manager Available Products view to add it to your staging area.

  4. Click the newly added GCP Stackdriver Nozzle for PCF tile.

  5. Follow the instructions in the next section to complete the tile configuration pane.

Configure

Complete the following fields to configure GCP Stackdriver Nozzle for PCF.

  1. Cloud Foundry API Endpoint: Enter the URL of the API endpoint for your PCF deployment. This value is https://api.YOUR-SYSTEM-DOMAIN. To determine your system domain, see the Domains pane in the Elastic Runtime tile. For an example, see Step 5: Configure Domains in Deploying Elastic Runtime on AWS.

  2. Whitelist for Stackdriver Logging: Enter a comma-separated list, without spaces, of the Loggregator events you want to ingest into Stackdriver Logging.

  3. Whitelist for Stackdriver Monitoring: Enter a comma-separated list, without spaces, of the Loggregator events you want to ingest into Stackdriver Monitoring. Example values for both whitelist fields appear after this list.

  4. UAA Username/UAA Password: If you created a username and password in Create a UAA User Account, enter them here. Otherwise, leave these fields blank to use the default credentials provided by Elastic Runtime.

  5. Skip SSL validation on Cloud Foundry API Endpoint: For a production environment, set this value to false. For a development environment, set it to true.

  6. Service Account Credentials: Paste in the contents of the service account private key file you downloaded in the Create a Service Account step above.

  7. Google Project ID: Enter the Project ID for the GCP project you created in Create the Project above. To view your Project ID, click the project drop-down menu between the GCP logo and the search bar and select your project.
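
For the two whitelist fields above, the Loggregator Firehose emits six event types: HttpStartStop, LogMessage, ValueMetric, CounterEvent, Error, and ContainerMetric. The values below illustrate the expected format only; they are assumptions for the sake of example rather than defaults or recommendations, so choose the event types that match what you want in each Stackdriver product:

    Whitelist for Stackdriver Logging:    LogMessage,Error,HttpStartStop
    Whitelist for Stackdriver Monitoring: ValueMetric,CounterEvent,ContainerMetric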

Optional Tuning Parameters

GCP Stackdriver Nozzle for PCF has a number of tuning parameters that can be used to alter its behavior. For almost all use cases, the defaults are appropriate.

  1. Metrics Buffer Duration: This parameter controls the interval (in seconds) between Nozzle writes to Stackdriver Monitoring. The Nozzle buffers and deduplicates incoming Firehose events over the interval for batching purposes and to avoid exceeding request quota limits. Reducing this interval allows for more fine-grained reporting of Firehose data at the cost of significantly higher request rates to Stackdriver Monitoring, while increasing it has the converse effect. A worked example of how this interval interacts with the batch size follows this list.

  2. Metrics Batch Size: This parameter controls how many timeseries points are sent in each batched write to Stackdriver Monitoring. Stackdriver enforces a maximum of 200 points per request, so this number may only be adjusted downward from the default of 200. Another potentially useful value is 1, which enforces a 1:1 mapping between timeseries points and requests to Stackdriver Monitoring. This will almost certainly exceed quota limits, but may help identify timeseries that are causing Stackdriver errors.

  3. Logging Buffer Duration: This parameter controls the maximum amount of time (in seconds) the Nozzle buffers Firehose events destined for Stackdriver Logging. If the number of buffered log messages reaches the logging batch size before the duration expires, those messages are sent immediately. In most cases, adjust the batch size rather than this duration.

  4. Logging Batch Size: This parameter controls the number of Firehose events that will be buffered and sent to Stackdriver Logging in a single batch. Larger batch sizes will reduce request rates to Stackdriver, and may be necessary for heavily-utilized PCF installations.

  5. Logging Requests In-Flight: This parameter limits the maximum number of concurrent in-flight requests to Stackdriver Logging. This may need to be increased (along with the batch size) if Stackdriver Logging latency is observed to be high for the workload served by the PCF installation.

  6. Metric Path Prefix: This parameter configures the prefix prepended to all metric names that are sent to Stackdriver Monitoring. The default of “firehose” results in events with the origin gorouter and metric name total_requests being written to the custom metric name custom.googleapis.com/firehose/gorouter.total_requests.

  7. Foundation Name: This parameter configures the value of the “foundation” label that is added to all metrics and log messages published to Stackdriver. If you are monitoring multiple PCF foundations within a single Stackdriver project, giving each one a unique foundation name (the GCP region name, e.g. europe-west3, is one possibility) will distinguish which foundation the metrics and log messages were published from.
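
As a worked illustration of the first two parameters above (the numbers here are assumptions, not measurements or defaults): if the Firehose produces roughly 1,000 unique timeseries per interval and Metrics Batch Size is left at 200, each flush requires 1,000 / 200 = 5 write requests to Stackdriver Monitoring. With a Metrics Buffer Duration of 30 seconds, that is about 10 requests per minute. Halving the duration to 15 seconds doubles the rate to about 20 requests per minute for the same set of timeseries, while doubling it to 60 seconds halves the rate but makes each reported point coarser.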

Filtering Firehose Events

GCP Stackdriver Nozzle for PCF allows events from the Firehose to be blacklisted or whitelisted for publishing to Stackdriver. Events that match a blacklist filter will not be published unless they also match a whitelist filter.

Note: The Nozzle tile ships with a restrictive whitelist for Stackdriver Monitoring by default, because Stackdriver enforces a limit of 500 custom metric names per project. Metrics documented by Pivotal as Key Performance Indicators (https://docs.pivotal.io/pivotalcf/2-0/monitoring/kpi.html) or Key Capacity Scaling Indicators (https://docs.pivotal.io/pivotalcf/2-0/monitoring/key-cap-scaling.html) are whitelisted for publishing to Stackdriver Monitoring, while all others are blacklisted. To stay within this limit, it is recommended that Nozzle users explicitly whitelist metrics of interest rather than remove the default blacklist.

A filter rule has three elements:

  • A regexp, which must be a valid regular expression.
  • A type, which may be either “name” or “job”.
    • name matches against a concatenation of event origin and metric name with “.” (e.g. gorouter.total_requests), and is only applicable for CounterEvent and ValueMetric event types.
    • job matches against the event job.
  • A sink, which may be “monitoring”, “logging”, or “all”. The last applies the rule to all Firehose events, while the other two restrict the filter rule to events destined for Stackdriver Monitoring or Stackdriver Logging, respectively.

An optional description field provides a human-readable name for each filter, to aid in identifying filters when the list elements are collapsed; the Nozzle itself ignores this field. An example rule is shown below.
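
For example, a whitelist rule that publishes the Diego cell capacity metrics (origin rep, names such as CapacityRemainingMemory and CapacityRemainingDisk) to Stackdriver Monitoring might look like the following. The regexp and description are illustrative assumptions; adapt them to the metrics you care about:

    description: Diego cell remaining capacity
    regexp:      ^rep\.CapacityRemaining
    type:        name
    sink:        monitoring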
