
Starting and Stopping Pivotal Cloud Foundry Virtual Machines


This topic describes starting and stopping the component virtual machines (VMs) that make up a Pivotal Cloud Foundry (PCF) deployment. This procedure uses the BOSH Command Line Interface (CLI). See Prepare to Use the BOSH CLI for help setting up this tool.

Dependencies between the components in your PCF deployment require that you start and stop the component VMs in a specific order. These orders are specified in the Start Order and Stop Order tables below.

Note: When you deploy Elastic Runtime, the process starts its VMs automatically. You do not need to perform the steps below as part of an installation.

Finding the Names for Your PCF Virtual Machines

You need the full name of a VM to start or stop it using the BOSH CLI. To find the full names of the VMs running each component, run bosh vms:

$ bosh vms
Acting as user 'director' on 'p-bosh-399383d4525762cdc35c'
Deployment `cf-1ef2da789c0ed8f3567f'

Director task 26

Task 26 done

+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+
| VM                                                                                                    | State   | AZ  | VM Type                                                      | IPs          |
+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+
| clock_global-partition-bb35e96d6d3184a2d672/0 (92174545-ea16-448c-bea8-5a3d27ef2078)                  | running | n/a | clock_global-partition-bb35e96d6d3184a2d672                  | 192.0.2.101 |
| cloud_controller-partition-bb35e96d6d3184a2d672/0 (a315eb23-04bf-4228-a346-0ff8cc936a86)              | running | n/a | cloud_controller-partition-bb35e96d6d3184a2d672              | 192.0.2.100 |
| cloud_controller_worker-partition-bb35e96d6d3184a2d672/0 (62ccf688-29dc-42f4-9191-78cf6c363afd)       | running | n/a | cloud_controller_worker-partition-bb35e96d6d3184a2d672       | 192.0.2.102 |
| consul_server-partition-bb35e96d6d3184a2d672/0 (5eb05f9b-90d3-437f-aeeb-4a1b30320c77)                 | running | n/a | consul_server-partition-bb35e96d6d3184a2d672                 | 192.0.2.92  |
| diego_brain-partition-bb35e96d6d3184a2d672/0 (2abe95cd-5094-4f87-bcb9-c9fec68c6033)                   | running | n/a | diego_brain-partition-bb35e96d6d3184a2d672                   | 192.0.2.104 |
| diego_cell-partition-bb35e96d6d3184a2d672/0 (23d12fad-ca7a-4efa-9cd7-dc7d242c89ae)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.105 |
| diego_cell-partition-bb35e96d6d3184a2d672/1 (9f94e756-f648-4c4b-a7e7-08099bd75263)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.106 |
| diego_database-partition-bb35e96d6d3184a2d672/0 (0cca205b-bc68-42d5-95f0-47fc0c072bb6)                | running | n/a | diego_database-partition-bb35e96d6d3184a2d672                | 192.0.2.95  |
| doppler-partition-bb35e96d6d3184a2d672/0 (a5bcd2ed-7b3c-4ebb-901c-614e2064d10c)                       | running | n/a | doppler-partition-bb35e96d6d3184a2d672                       | 192.0.2.108 |
| etcd_server-partition-bb35e96d6d3184a2d672/0 (3e3b53cd-0b68-4f94-81cf-61da48dd20ab)                   | running | n/a | etcd_server-partition-bb35e96d6d3184a2d672                   | 192.0.2.94  |
| ha_proxy-partition-bb35e96d6d3184a2d672/0 (bc7dada2-8e31-4d70-b314-f85ebb51a503)                      | running | n/a | ha_proxy-partition-bb35e96d6d3184a2d672                      | 192.0.2.254 |
| loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0 (57157a16-6309-494f-8133-b831b52bb363) | running | n/a | loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672 | 192.0.2.109 |
| mysql-partition-bb35e96d6d3184a2d672/0 (ff18bb42-3df8-44b3-ba69-0b7e1f2dfc30)                         | running | n/a | mysql-partition-bb35e96d6d3184a2d672                         | 192.0.2.99  |
| mysql_proxy-partition-bb35e96d6d3184a2d672/0 (e4f72312-336d-465d-a8d0-57dc6b7abd66)                   | running | n/a | mysql_proxy-partition-bb35e96d6d3184a2d672                   | 192.0.2.98  |
| nats-partition-bb35e96d6d3184a2d672/0 (654b2470-c6fb-40d1-a7d2-c7839a7c6403)                          | running | n/a | nats-partition-bb35e96d6d3184a2d672                          | 192.0.2.93  |
| nfs_server-partition-bb35e96d6d3184a2d672/0 (df18f438-a0f1-46b2-b3ea-31bf96d47a18)                    | running | n/a | nfs_server-partition-bb35e96d6d3184a2d672                    | 192.0.2.96  |
| router-partition-bb35e96d6d3184a2d672/0 (4fbc6b8c-8bc6-425c-a4fc-04b7127b1f4a)                        | running | n/a | router-partition-bb35e96d6d3184a2d672                        | 192.0.2.97  |
| uaa-partition-bb35e96d6d3184a2d672/0 (82c39142-da6e-42ec-af1c-b0247f74e8bd)                           | running | n/a | uaa-partition-bb35e96d6d3184a2d672                           | 192.0.2.103 |
+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+

VMs total: 18
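If you only need the VMs for a single component, you can filter this output by the component's prefix with standard grep. For example, to list only the Diego Cell VMs from the output above:

$ bosh vms | grep diego_cell
| diego_cell-partition-bb35e96d6d3184a2d672/0 (23d12fad-ca7a-4efa-9cd7-dc7d242c89ae)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.105 |
| diego_cell-partition-bb35e96d6d3184a2d672/1 (9f94e756-f648-4c4b-a7e7-08099bd75263)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.106 |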

You can see the full name of each VM in the VM column of the terminal output. Each full name includes:

  • A prefix indicating the component function of the VM. The table below associates each component VM function with a prefix.

  • The word partition

  • An identifier string specific to your deployment

  • An INDEX suffix. For component processes that run on a single VM instance, INDEX is always 0. For processes running on multiple VMs, INDEX is a sequentially numbered value that uniquely identifies each VM.

  • An ID in parentheses: the Universally Unique Identifier (UUID) for that instance. In CLI commands, you can generally use this ID in place of the index value.

For any component, you can look for its prefix in the bosh vms output to find the full name of the VM or VMs that run it. In the example shown here, the full name of one of the two Diego Cell VMs is diego_cell-partition-bb35e96d6d3184a2d672/1.
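To make this anatomy concrete, here is a minimal bash sketch that splits the Diego Cell name above into its job name, index, and component prefix using parameter expansion:

FULL_NAME="diego_cell-partition-bb35e96d6d3184a2d672/1"

JOB="${FULL_NAME%/*}"           # job name: diego_cell-partition-bb35e96d6d3184a2d672
INDEX="${FULL_NAME##*/}"        # index: 1
PREFIX="${JOB%%-partition-*}-"  # component prefix: diego_cell-
echo "job=$JOB index=$INDEX prefix=$PREFIX"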

Starting PCF Virtual Machines

In the order specified in the Start Order table below, run bosh start VM-NAME INDEX for each component in your PCF deployment. Use the full name of the component VM as listed in your bosh vms terminal output, with the INDEX, space delimited, at the end. In the example here, the first component you would start is the NATS VM:

$ bosh start nats-partition-bb35e96d6d3184a2d672 0

Processing deployment manifest
------------------------------
You are about to start nats-partition-bb35e96d6d3184a2d672/0

Detecting deployment changes
----------------------------
Start nats-partition-bb35e96d6d3184a2d672/0? (type 'yes' to continue): yes

Performing `start nats-partition-bb35e96d6d3184a2d672/0'...

...

Started updating job nats-partition-bb35e96d6d3184a2d672 > nats-partition-bb35e96d6d3184a2d672/0 (canary). Done (00:00:43)

Task 42 done

nats-partition-bb35e96d6d3184a2d672/0 has been started

Note: To start a specific instance of a VM, include the INDEX, space delimited, at the end of its full name. In the example here, you could start only the first Diego Cell instance by running: $ bosh start diego_cell-partition-bb35e96d6d3184a2d672 0
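Conversely, to start every instance of a multi-instance job explicitly, you can loop over the indexes. Here is a minimal sketch for the two Diego Cells in the example output, assuming the BOSH CLI is already targeted and logged in and that its -n (non-interactive) flag is used to skip the confirmation prompt:

# Start each Diego Cell instance from the example output, one index at a time.
for i in 0 1; do
  bosh -n start diego_cell-partition-bb35e96d6d3184a2d672 "$i"
done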

Start Order   Component                        Job/index name prefix (in bosh vms output)
1             NATS                             nats-
2             consul                           consul_server-
3             etcd                             etcd_server-
4             Diego Database                   diego_database-
5             NFS Server                       nfs_server-
6             Router                           router-
7             MySQL Proxy                      mysql_proxy-
8             MySQL Server                     mysql-
9             Cloud Controller Database        ccdb-
10            UAA Database                     uaadb-
11            Apps Manager Database            consoledb-
12            Cloud Controller                 cloud_controller-
13            HAProxy                          ha_proxy-
14            Health Manager                   health_manager-
15            Clock Global                     clock_global-
16            Cloud Controller Worker          cloud_controller_worker-
17            Collector                        collector-
18            UAA                              uaa-
19            Diego Brain                      diego_brain-
20            Diego Cell                       diego_cell-
21            Doppler Server                   doppler-
22            Loggregator Traffic Controller   loggregator_trafficcontroller-
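If you prefer to script the full sequence, the Start Order table translates directly into a loop over job-name prefixes. The following is a minimal sketch, not an official tool: it assumes the BOSH CLI v1 is already targeted and logged in, that its -n (non-interactive) flag is used to skip confirmation prompts, and that DEPLOYMENT_ID is replaced with the identifier string from your own bosh vms output. Remove any jobs your deployment does not run (for example, ccdb- does not appear in the output above).

#!/usr/bin/env bash
set -e

# Replace with the identifier string from your own 'bosh vms' output.
DEPLOYMENT_ID="bb35e96d6d3184a2d672"

# Job names (prefixes from the Start Order table, without the trailing
# hyphen), listed in start order.
START_ORDER=(
  nats consul_server etcd_server diego_database nfs_server router
  mysql_proxy mysql ccdb uaadb consoledb cloud_controller ha_proxy
  health_manager clock_global cloud_controller_worker collector uaa
  diego_brain diego_cell doppler loggregator_trafficcontroller
)

for job in "${START_ORDER[@]}"; do
  # Without an index, 'bosh start' applies to every instance of the job.
  bosh -n start "${job}-partition-${DEPLOYMENT_ID}"
done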

Stopping PCF Virtual Machines

In the order specified in the Stop Order table below, run bosh stop VM-NAME INDEX --hard for each component in your PCF deployment. Use the full name of the component VM as listed in your bosh vms terminal output, with the INDEX, space delimited, at the end. In the example here, the first component you would stop is the Loggregator Traffic Controller VM:

$ bosh stop loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672 0 --hard

Processing deployment manifest
------------------------------
You are about to stop loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0

Detecting deployment changes
----------------------------
Stop loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0? (type 'yes' to continue): yes

Performing `stop loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0'...

...

Started updating job loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672 > loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0 (canary). Done (00:00:37)

loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0 has been stopped

Note: To stop a specific instance of a VM, include the INDEX, space delimited, at the end of its full name. In the example here, you could stop only the second Diego Cell instance by running: $ bosh stop diego_cell-partition-bb35e96d6d3184a2d672 1 --hard

Stop Order    Component                        Job/index name prefix (in bosh vms output)
1             Loggregator Traffic Controller   loggregator_trafficcontroller-
2             Doppler Server                   doppler-
3             Diego Cell                       diego_cell-
4             Diego Brain                      diego_brain-
5             UAA                              uaa-
6             Collector                        collector-
7             Cloud Controller Worker          cloud_controller_worker-
8             Clock Global                     clock_global-
9             Health Manager                   health_manager-
10            HAProxy                          ha_proxy-
11            Cloud Controller                 cloud_controller-
12            Apps Manager Database            consoledb-
13            UAA Database                     uaadb-
14            Cloud Controller Database        ccdb-
15            MySQL Server                     mysql-
16            MySQL Proxy                      mysql_proxy-
17            Router                           router-
18            NFS Server                       nfs_server-
19            Diego Database                   diego_database-
20            etcd                             etcd_server-
21            consul                           consul_server-
22            NATS                             nats-
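The Stop Order table can be scripted the same way. This sketch makes the same assumptions as the start sketch above (BOSH CLI v1, targeted and logged in, -n available, DEPLOYMENT_ID replaced with your own identifier) and passes --hard, as in the command shown above:

#!/usr/bin/env bash
set -e

# Replace with the identifier string from your own 'bosh vms' output.
DEPLOYMENT_ID="bb35e96d6d3184a2d672"

# Job names (prefixes from the Stop Order table, without the trailing
# hyphen), listed in stop order.
STOP_ORDER=(
  loggregator_trafficcontroller doppler diego_cell diego_brain uaa
  collector cloud_controller_worker clock_global health_manager ha_proxy
  cloud_controller consoledb uaadb ccdb mysql mysql_proxy router
  nfs_server diego_database etcd_server consul_server nats
)

for job in "${STOP_ORDER[@]}"; do
  # Without an index, 'bosh stop' applies to every instance of the job.
  bosh -n stop "${job}-partition-${DEPLOYMENT_ID}" --hard
done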