Using PCF Log Search

This topic describes how to get started with Pivotal Cloud Foundry (PCF) Log Search. This topic focuses on Kibana, which is the front-end component of PCF Log Search. The Kibana web application lets you search and filter system logs, design visualizations of saved searches, and create dashboards.

Log in to Kibana

Note: Log Search supports only one set of access credentials, viewable in Ops Manager by PCF admin users. Creating additional users is not supported.

  1. From the Installation Dashboard in Ops Manager, click the Log Search tile.


  2. Select the Credentials tab.

  3. Click Link to Credentials to view and record the Kibana Credentials.


  4. Navigate to and log in to Kibana using the credentials that you recorded in the previous step.

  5. If prompted to configure an index pattern, enter logstash-* for the Index name or pattern and @timestamp for the Time-field name.

    Kibana Index pattern config screenshot

Get Started with Kibana

The PCF Log Search tile provides tags to standardize the data it receives from multiple tiles. The section below explains how these tags work so that you can use Kibana to search across tiles effectively.

Understand Log Search Tags

PCF Log Search receives data in JSON format from other tiles. PCF Log Search organizes this data into searchable fields based on the JSON keys, and also aggregates fields under custom tags. Log Search attaches these tags to data when it recognizes that different tile logs use different keys to refer to the same type of data. For instance, one tile may specify the timestamp under a Timestamp field, while another specifies this value under a T field. Log Search recognizes both of these values as a timestamp and attaches the @timestamp tag. You can use the common @timestamp tag in Kibana to search for timestamp data across all tiles.

Log Search attaches tags to other kinds of data as well. See the Log Search Tags Dictionary topic for the full list of tags generated by Log Search.
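The key normalization described above can be illustrated with a short sketch. This is a hypothetical Python illustration, not Log Search's actual implementation: the key names Timestamp and T come from the example in the text, and the mapping table is an assumption.

```python
import json

# Hypothetical sketch of Log Search tag normalization: different tiles log
# the same kind of value under different JSON keys, and a common tag is
# attached so the value is searchable under one name. The list of
# recognized keys below is an assumption for illustration only.
TIMESTAMP_KEYS = ("Timestamp", "T")

def attach_timestamp_tag(event: dict) -> dict:
    """Copy the first recognized timestamp key into a common @timestamp tag."""
    for key in TIMESTAMP_KEYS:
        if key in event:
            event["@timestamp"] = event[key]
            break
    return event

# Two tiles logging the same kind of value under different keys:
tile_a = json.loads('{"Timestamp": "2017-03-01T12:00:00Z", "msg": "login"}')
tile_b = json.loads('{"T": "2017-03-01T12:00:01Z", "msg": "logout"}')
```

After normalization, both events expose the timestamp under @timestamp, which is why a single Kibana query on that tag matches data from every tile.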

Filter, Search, and Visualize

The following list describes what you can do with the Kibana component of PCF Log Search:

  • Filter log data by field: You can filter log data based on tags generated by Log Search or any keys within the JSON logs themselves. The Available Fields list on the left side of the Discover page lists the Log Search tags first, followed by the parsed log keys.

  • Change the time scale: By default, the time scale is set to the last 15 minutes.

  • Change the refresh interval: By default, auto-refresh is set to Off.

  • Search log data: You can further refine the results of any filter or time span using the search bar at the top of the Discover page. You can also search against a specific field by entering your query in the format FIELD:VALUE. For example: @source.ip:IP-ADDRESS

  • Design data visualizations: You can create visualizations such as Data Table, Line Chart, and Vertical Bar Chart. You can also customize your visualizations. For example, you can tailor the x and y axis of a Vertical Bar Chart using bucket aggregations and metric aggregations.

  • Create a dashboard: You can create a dashboard to display multiple visualizations and saved searches. You can also apply filters to your dashboard, which affects all the displayed panes. For instance, using the time filter applies your time changes to every displayed visualization and saved search.
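The FIELD:VALUE query format described above can be sketched as follows. This is a deliberately simplified Python stand-in for Kibana's actual Lucene query syntax, which also supports wildcards, ranges, and boolean operators; the dotted-path handling shown for fields like @source.program is an assumption for illustration.

```python
def matches(event: dict, query: str) -> bool:
    """Simplified FIELD:VALUE matching against a JSON log event.

    Resolves dotted field names (e.g. "@source.program") as nested
    dictionary lookups, then compares the value as a string.
    """
    field, _, value = query.partition(":")
    node = event
    for part in field.split("."):  # walk nested keys for dotted paths
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return str(node) == value

event = {"@source": {"program": "uaa"}, "msg": "user authenticated"}
```

In real Kibana queries the same FIELD:VALUE shape applies, but matching is performed by Elasticsearch rather than a simple string comparison.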

For more information, view the Kibana documentation.

Forward Data to Splunk

You can configure PCF Log Search to forward some or all of the data it receives, in JSON format, to an external service such as Splunk.

Step 1: Configure Splunk

Note: Pivotal recommends using UDP to avoid network communication problems with the Splunk network input, which can prevent Log Search from indexing data.

Follow the instructions for configuring a network input in the Get data from TCP and UDP ports topic of the Splunk documentation. When prompted, choose _json as the Source Type.

Step 2: Configure Log Search

  1. From Ops Manager, click the Log Search tile, and then select the Experimental section.

  2. For Custom Logstash Outputs, enter configuration pointing to the Splunk UDP network input that you configured in Step 1. See the following example configuration:

    if [@source][program] == "uaa" {
      udp {
        host => "SPLUNK-IP-OR-DNS"
        port => "SPLUNK-UDP-PORT-NUMBER"
      }
    }

    You can add a conditional statement to filter the data sent to Splunk, such as if [@source][program] == "uaa" { in the example above.
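To make the udp output above concrete, here is a rough Python sketch of the mechanism it uses: serialize an event as JSON and send it as a single UDP datagram to the Splunk network input. The host and port are placeholders for your own values; this illustrates what the forwarding does on the wire and is not part of the Log Search configuration itself.

```python
import json
import socket

def forward_event(event: dict, host: str, port: int) -> None:
    """Send one JSON-encoded log event as a UDP datagram.

    This mirrors what a Logstash udp output does conceptually: one event
    per datagram, encoded as JSON, fire-and-forget (UDP gives no delivery
    confirmation, which is why Step 4 verifies the data in Splunk).
    """
    payload = json.dumps(event).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

Because UDP is connectionless, a misconfigured host or port fails silently, so always complete the verification in Step 4 after changing this configuration.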

Step 3: Configure Firewall Rules

To send data from Log Search to Splunk using UDP on the port that you specified in Step 1, configure your firewall to allow the following:

  • Outgoing traffic from the Log Search Log Parser VMs on the configured port. You can view the IP addresses for the Log Parser VMs in Ops Manager under the Status tab of the Log Search tile.
  • Incoming traffic to the Splunk installation on the configured port.

Step 4: Verify Your Forwarding Configuration

Check that your data appears in both Log Search and Splunk:

  1. Using the example configuration from Step 2, search for @source.program:uaa in Kibana.

    UAA log in Log Search > Kibana screenshot

  2. Using the example configuration from Step 2, search for sourcetype="_json" in Splunk.

    UAA log in Splunk screenshot