Diagnosing Problems in PCF
This guide provides help with diagnosing issues encountered during a Pivotal Cloud Foundry (PCF) installation.
Besides whether products install successfully, an important area to consider when diagnosing issues is communication between the VMs deployed by Pivotal Cloud Foundry. Depending on which products you install, this communication takes the form of messaging, routing, or both, and a failure in either can cause an installation to fail. For example, in a Pivotal Application Service (PAS) installation, the PCF VM pushes a test application to the cloud during post-installation testing. The installation fails if the resulting traffic cannot be routed to the HAProxy load balancer.
The debug endpoint is a web page that provides information useful for diagnostics. If you have superuser privileges and can view the Ops Manager Installation Dashboard, you can access the debug endpoint.
In a browser, open the URL:
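The URL itself is not shown above; as an illustration only, on a typical deployment the debug endpoint sits directly under the Ops Manager hostname. The `OPS-MANAGER-FQDN` placeholder below is an assumption to replace with your own Ops Manager fully qualified domain name:

```
https://OPS-MANAGER-FQDN/debug
```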
The debug endpoint offers three links:
- Files allows you to view the YAML files that Ops Manager uses to configure products that you install. The most important YAML file, `installation.yml`, provides networking settings and describes `microbosh`. In this case, `microbosh` is the VM whose BOSH Director component Ops Manager uses to perform installations and updates of PAS and other products.
- Components describes the components in detail.
- Rails log shows errors thrown by the VM where the Ops Manager web application (a Rails application) is running, as recorded in the `production.log` file. See the next section to learn how to explore other logs.
This section contains general tips for locating where a particular problem is called out in the log files. Refer to the later sections for tips regarding specific logs (such as those for PAS Components).
- Start with the largest and most recently updated files in the job log.
- Identify logs that contain `err` in the name.
- Scan the file contents for a `failed` or `error` string.
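These triage steps can be sketched with standard shell tools. The directory and file names below are illustrative stand-ins, not actual PCF paths:

```shell
# Illustrative stand-in for a job log directory (not a real PCF path).
LOG_DIR="$(mktemp -d)"
printf 'task started\n' > "$LOG_DIR/garden.stdout.log"
printf 'connection failed: timeout\n' > "$LOG_DIR/garden.stderr.log"

# 1. List files, most recently modified first, with sizes.
ls -lt "$LOG_DIR"

# 2. Find logs with "err" in the name.
find "$LOG_DIR" -name '*err*'

# 3. Scan contents for "failed" or "error" strings, case-insensitively.
grep -ril -e 'failed' -e 'error' "$LOG_DIR"
```

On a real deployment you would point `LOG_DIR` at the downloaded job log directory instead of a temporary one.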
To troubleshoot specific PAS components by viewing their log files, browse to the Ops Manager interface and follow the procedure below.
- In Ops Manager, browse to the PAS Status tab. In the Job column, locate the component of interest.
- In the Logs column for the component, click the download icon.
- Browse to the PAS Logs tab.
- When the zip file for the component of interest moves to the Downloaded list, click the linked file path to download it.
- When the download completes, unzip the file.
The contents of the log directory vary depending on which component you view. For example, the Diego cell log directory contains subdirectories for the `garden` processes. To view the standard error stream for `garden`, download the Diego cell logs and open `diego.0.job > garden > garden.stderr.log`.
You can obtain diagnostic information from the Operations Manager by logging in to the VM where it is running. To log in to the Operations Manager VM, you need the following information:
- The IP address of the PCF VM, shown in the Settings tab of the Ops Manager Director tile.
- Your import credentials. Import credentials are the username and password used to import the PCF `.ovf` file into your virtualization system.
Complete the following steps to log in to the Operations Manager VM:
- Open a terminal window.
- Run `ssh IMPORT-USERNAME@PCF-VM-IP-ADDRESS` to connect to the PCF installation VM.
- Enter your import password when prompted.
- Change directories to the home directory of the web application:
You are now in a position to explore whether things are as they should be within the web application.
You can also verify that the `microbosh` component is successfully installed. A successful MicroBOSH installation is required to install PAS and any products like databases and messaging services.
Change directories to the BOSH installation log home:
You may want to begin by running a `tail` command on the most recently updated log file.
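As a sketch of that workflow, `tail` shows the end of a log file, and with `-f` it streams new lines as they are written. The directory and file below are illustrative stand-ins, not actual BOSH paths:

```shell
# Illustrative stand-in for the BOSH installation log home.
LOG_HOME="$(mktemp -d)"
printf 'line1\nline2\nline3\n' > "$LOG_HOME/install.log"

# Show the last two lines; on a live system, `tail -f` would follow
# the log as the installation writes new entries.
tail -n 2 "$LOG_HOME/install.log"
```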
If you are unable to resolve an issue by viewing configurations, exploring logs, or reviewing common problems, you can troubleshoot further by running BOSH diagnostic commands with the BOSH Command Line Interface (CLI).
Note: Do not manually modify the deployment manifest. Operations Manager will overwrite manual changes to this manifest. In addition, manually changing the manifest may cause future deployments to fail.
To view the VMs in your PCF deployment, perform the following steps specific to your IaaS.
- Log in to the AWS Console.
- Navigate to the EC2 Dashboard.
- Click Running Instances.
- Click the gear icon in the upper right.
- Select the following: job, deployment, director, index.
- Click Close.
- Install the novaclient.
- Point novaclient to your OpenStack installation and tenant by exporting the following environment variables:
$ export OS_AUTH_URL=YOUR_KEYSTONE_AUTH_ENDPOINT
$ export OS_TENANT_NAME=TENANT_NAME
$ export OS_USERNAME=USERNAME
$ export OS_PASSWORD=PASSWORD
- List your VMs by running the following command:
$ nova list --fields metadata
- Log into vCenter.
- Select Hosts and Clusters.
- Select the top level object that contains your PCF deployment. For example, select Cluster, Datastore or Resource Pool.
- In the top tab, click Related Objects.
- Select Virtual Machines.
- Right click on the Table heading and select Show/Hide Columns.
- Select the following boxes: job, deployment, director, index.
The Apps Manager provides a graphical user interface to help manage organizations, users, applications, and spaces.
When troubleshooting Apps Manager performance, you might want to view the Apps Manager application logs. To view the Apps Manager application logs, follow these steps:
- Run `cf login -a api.MY-SYSTEM-DOMAIN -u admin` from a command line to log in to PCF using the UAA Administrator credentials. In Pivotal Ops Manager, refer to PAS Credentials for these credentials.
$ cf login -a api.example.com -u admin
API endpoint: api.example.com
Password> ******
Authenticating...
OK
- Run `cf target -o system -s apps-manager` to target the `system` org and the `apps-manager` space.
$ cf target -o system -s apps-manager
- Run `cf logs apps-manager` to tail the Apps Manager logs.
$ cf logs apps-manager
Connected, tailing logs for app apps-manager in org system / space apps-manager as admin...
The Apps Manager recognizes the `LOG_LEVEL` environment variable, which allows you to filter the messages reported in the Apps Manager log files by severity level. The Apps Manager defines severity levels using the Ruby standard library `Logger` class.
By default, the Apps Manager filters out lower-severity messages; the logs show more verbose messaging when you set `LOG_LEVEL` to a lower severity level such as `debug`.
To change the Apps Manager `LOG_LEVEL`, run `cf set-env apps-manager LOG_LEVEL` with the desired severity level.
$ cf set-env apps-manager LOG_LEVEL debug
You can set `LOG_LEVEL` to one of the six severity levels defined by the Ruby `Logger` class:
- Level 5: `unknown` – An unknown message that should always be logged
- Level 4: `fatal` – An unhandleable error that results in a program crash
- Level 3: `error` – A handleable error condition
- Level 2: `warn` – A warning
- Level 1: `info` – General information about system operation
- Level 0: `debug` – Low-level information for developers
Once set, the Apps Manager log files only include messages at the set
severity level and above.
For example, if you set `LOG_LEVEL` to `fatal`, the log includes only `fatal` and `unknown` level messages.
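The filtering behavior can be illustrated with a small shell sketch; `levels_at_or_above` is a hypothetical helper (not part of the cf CLI or Apps Manager) that mirrors how a `LOG_LEVEL` threshold keeps only messages at that severity and above:

```shell
# Severity order used by Ruby's Logger, from lowest (0) to highest (5).
levels="debug info warn error fatal unknown"

# Hypothetical helper: print the levels at or above a given threshold,
# mirroring how LOG_LEVEL filters Apps Manager log messages.
levels_at_or_above() {
  emit=0
  for l in $levels; do
    [ "$l" = "$1" ] && emit=1
    [ "$emit" -eq 1 ] && echo "$l"
  done
}

levels_at_or_above fatal  # prints fatal, then unknown
levels_at_or_above warn   # prints warn, error, fatal, unknown
```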
To obtain disk usage statistics by Diego Cell VMs and containers, see Examining GrootFS Disk Usage.