Loggregator Guide for Cloud Foundry Operators


This topic describes how operators of Cloud Foundry deployments can configure the Loggregator system to avoid data loss under high volumes of logging and metrics data.

Loggregator Message Throughput and Reliability

The sections below describe how to measure the message throughput and reliability rates of your Loggregator system.

Measuring Message Throughput

To measure the message throughput of the Loggregator system, you can monitor the total number of egress messages from all Metrons on your platform using the metron.egress metric.
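
On the Firehose, this metric arrives as CounterEvent envelopes. As a minimal sketch, a nozzle can sum the deltas inside its envelope loop (a full connection example appears under Scaling Nozzles below); this assumes the events package from github.com/cloudfoundry/sonde-go/events and a totalEgress counter variable, and the "metron" origin string is an assumption to verify against your deployment:

    // Inside the nozzle's envelope loop: sum egress message counts
    // reported by Metron. The origin and name strings are assumptions.
    if env.GetEventType() == events.Envelope_CounterEvent &&
        env.GetOrigin() == "metron" &&
        env.GetCounterEvent().GetName() == "egress" {
        totalEgress += env.GetCounterEvent().GetDelta()
    }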

If you do not use a monitoring platform, you can follow the instructions below to measure the overall message throughput of your Loggregator system:

  1. Log in to the Cloud Foundry Command Line Interface (cf CLI) with your admin credentials:
    $ cf login
  2. Install the Cloud Foundry Firehose plugin.
  3. Install Pipe Viewer:
    $ apt-get install pv
  4. Run the following command:
    $ cf nozzle -n | pv -l -i 10 -r > /dev/null
    This command streams Firehose output into Pipe Viewer, which counts lines rather than bytes (-l), displays the current line rate (-r), and refreshes the display every 10 seconds (-i 10). The reported rate approximates the overall message throughput of your Loggregator system.

Measuring Message Reliability

To measure the message reliability rate of your Loggregator system, you can run black-box tests. If you want to use this method, see the open-source cf-logmon app and the configuration instructions provided in the README.md file.
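
The principle behind such a black-box test is simple: emit a known number of log lines over a fixed period, count how many of them reach a consumer, and compute the ratio:

    message reliability (%) = (log lines received / log lines emitted) × 100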

Scaling Loggregator

Most Loggregator configurations use preferred resource defaults. For more information about customizing these defaults and planning the capacity of your Loggregator system, see Key Capacity Scaling Indicators.

Scaling Nozzles

You can scale a nozzle using the subscription ID specified when the nozzle connects to the Firehose. If you use the same subscription ID on each nozzle instance, the Firehose evenly distributes data across all instances of the nozzle.

For example, if you have two nozzle instances with the same subscription ID, the Firehose sends half of the data to one nozzle instance and half to the other. Similarly, if you have three nozzle instances with the same subscription ID, the Firehose sends one-third of the data to each instance.
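
As an illustration, the following is a minimal nozzle sketch using the open-source NOAA consumer library (github.com/cloudfoundry/noaa). The DOPPLER_ADDR and CF_ACCESS_TOKEN environment variables and the example-nozzle subscription ID are assumed placeholders; running several copies of this program causes the Firehose to split the stream across them because they share a subscription ID:

    package main

    import (
        "crypto/tls"
        "fmt"
        "os"

        "github.com/cloudfoundry/noaa/consumer"
    )

    func main() {
        // DOPPLER_ADDR is the Traffic Controller endpoint, for example
        // wss://doppler.example.com:443. CF_ACCESS_TOKEN is an OAuth token.
        c := consumer.New(os.Getenv("DOPPLER_ADDR"), &tls.Config{}, nil)
        defer c.Close()

        // Every instance passes the same subscription ID, so the Firehose
        // distributes envelopes evenly across all running instances.
        msgs, errs := c.Firehose("example-nozzle", os.Getenv("CF_ACCESS_TOKEN"))
        go func() {
            for err := range errs {
                fmt.Fprintln(os.Stderr, "firehose error:", err)
            }
        }()

        for env := range msgs {
            fmt.Println(env) // process each envelope here
        }
    }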

If you want to scale a nozzle, the number of nozzle instances should match the number of Traffic Controller instances:

Number of nozzle instances = Number of Traffic Controller instances

Stateless nozzles should handle scaling gracefully. If a nozzle buffers or caches the data, the nozzle author must test the results of scaling the number of nozzle instances up or down.

Slow Nozzle Alerts

The Traffic Controller alerts nozzles if they consume events too slowly. If a nozzle falls behind, Loggregator alerts the nozzle in two ways:

  • TruncatingBuffer alerts: If the nozzle consumes messages more slowly than they are produced, the Loggregator system may drop messages. In this case, Loggregator sends the log message TB: Output channel too full. Dropped N messages, where N is the number of dropped messages. Loggregator also emits a CounterEvent with the name doppler_proxy.slow_consumer. The nozzle receives both messages from the Firehose, alerting the operator to the performance issue.

  • WebSocket alerts: If the nozzle continues to fall behind, Loggregator may close the WebSocket connection with the error code ClosePolicyViolation (1008). The nozzle can detect this condition by handling the connection close event.
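
As a sketch, a nozzle built on the envelope loop from the Scaling Nozzles example can surface these alerts programmatically. This assumes the standard log and strings packages, and the exact origin and name strings on the CounterEvent vary by Loggregator version, so verify them against your deployment; a ClosePolicyViolation close is delivered on the NOAA consumer's error channel.

    // Inside the nozzle's envelope loop: surface slow-consumer alerts.
    switch env.GetEventType() {
    case events.Envelope_CounterEvent:
        // The name may appear as "doppler_proxy.slow_consumer" or be
        // split into origin "doppler_proxy" and name "slow_consumer".
        if strings.Contains(env.GetCounterEvent().GetName(), "slow_consumer") {
            log.Println("Loggregator reports this nozzle is consuming too slowly")
        }
    case events.Envelope_LogMessage:
        msg := string(env.GetLogMessage().GetMessage())
        if strings.HasPrefix(msg, "TB: Output channel too full") {
            log.Println("TruncatingBuffer alert:", msg)
        }
    }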

Forwarding Logs to an External Service

You can configure Pivotal Application Service to forward log data from apps to an external aggregator service. Using Log Management Services explains how to bind apps to the external service and configure it to receive logs from Pivotal Application Service.
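
For example, to stream app logs to a hypothetical syslog endpoint at logs.example.com, you can create a user-provided service instance with a syslog drain URL, bind it to the app, and restage (the service, app, and endpoint names below are placeholders):

    $ cf create-user-provided-service my-log-drain -l syslog://logs.example.com:514
    $ cf bind-service my-app my-log-drain
    $ cf restage my-app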

Log Message Size Constraints

When a Diego Cell emits app logs to Metron, Diego breaks up log messages greater than approximately 60 KiB into multiple envelopes.
