
Starting and Stopping Pivotal Cloud Foundry Virtual Machines


This topic describes starting and stopping the component virtual machines (VMs) that make up a Pivotal Cloud Foundry (PCF) deployment. You may in some cases want to stop or start all of your PCF VMs, for instance to power down your deployment or to recover from a power outage. You can do this with a single command, or you can perform the process manually. If you want to shut down a single VM in your deployment, you can use the manual process described on this page.

This procedure uses the BOSH Command Line Interface (CLI). See Prepare to Use the BOSH CLI for help setting up this tool.
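If you have already installed the CLI, a session typically begins by targeting your BOSH Director and setting the deployment manifest. The sketch below assumes BOSH CLI v1; the Director address, credentials, and manifest path are placeholders, so substitute your own values:

$ bosh target https://192.0.2.10:25555
$ bosh login
$ bosh deployment ~/deployments/cf.yml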

Start and Stop Your PCF VMs

This section describes how to start or stop all the VMs in your deployment with a single command.

Start

Run the following command, where PATH-TO-CF-DEPLOYMENT is the path to your Cloud Foundry deployment manifest, to start all the VMs in your deployment:

$ bosh -d PATH-TO-CF-DEPLOYMENT start
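For example, if your deployment manifest lives at ~/deployments/cf.yml (a hypothetical path; use your own), you would run:

$ bosh -d ~/deployments/cf.yml start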

Stop

Run the following command, where PATH-TO-CF-DEPLOYMENT is the path to your Cloud Foundry deployment manifest, to shut down all the VMs in your deployment. The --hard flag directs BOSH to delete the stopped VMs while preserving their persistent disks:

$ bosh -d PATH-TO-CF-DEPLOYMENT stop --hard

Start and Stop Your PCF VMs Manually

This section describes how to start and stop the VMs in your deployment individually. Dependencies between the components of a PCF deployment require that you start and stop their VMs in a specific order, as given in the Start Order and Stop Order tables below.

Find the Names for Your PCF Virtual Machines

You need the full names for the VMs to start or stop them using the BOSH CLI. To find full names for the VMs running each component, run bosh vms:

$ bosh vms
Acting as user 'director' on 'p-bosh-399383d4525762cdc35c'
Deployment `cf-1ef2da789c0ed8f3567f'

Director task 26

Task 26 done

+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+
| VM                                                                                                    | State   | AZ  | VM Type                                                      | IPs          |
+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+
| clock_global-partition-bb35e96d6d3184a2d672/0 (92174545-ea16-448c-bea8-5a3d27ef2078)                  | running | n/a | clock_global-partition-bb35e96d6d3184a2d672                  | 192.0.2.101 |
| cloud_controller-partition-bb35e96d6d3184a2d672/0 (a315eb23-04bf-4228-a346-0ff8cc936a86)              | running | n/a | cloud_controller-partition-bb35e96d6d3184a2d672              | 192.0.2.100 |
| cloud_controller_worker-partition-bb35e96d6d3184a2d672/0 (62ccf688-29dc-42f4-9191-78cf6c363afd)       | running | n/a | cloud_controller_worker-partition-bb35e96d6d3184a2d672       | 192.0.2.102 |
| consul_server-partition-bb35e96d6d3184a2d672/0 (5eb05f9b-90d3-437f-aeeb-4a1b30320c77)                 | running | n/a | consul_server-partition-bb35e96d6d3184a2d672                 | 192.0.2.92  |
| diego_brain-partition-bb35e96d6d3184a2d672/0 (2abe95cd-5094-4f87-bcb9-c9fec68c6033)                   | running | n/a | diego_brain-partition-bb35e96d6d3184a2d672                   | 192.0.2.104 |
| diego_cell-partition-bb35e96d6d3184a2d672/0 (23d12fad-ca7a-4efa-9cd7-dc7d242c89ae)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.105 |
| diego_cell-partition-bb35e96d6d3184a2d672/1 (9f94e756-f648-4c4b-a7e7-08099bd75263)                    | running | n/a | diego_cell-partition-bb35e96d6d3184a2d672                    | 192.0.2.106 |
| diego_database-partition-bb35e96d6d3184a2d672/0 (0cca205b-bc68-42d5-95f0-47fc0c072bb6)                | running | n/a | diego_database-partition-bb35e96d6d3184a2d672                | 192.0.2.95  |
| doppler-partition-bb35e96d6d3184a2d672/0 (a5bcd2ed-7b3c-4ebb-901c-614e2064d10c)                       | running | n/a | doppler-partition-bb35e96d6d3184a2d672                       | 192.0.2.108 |
| etcd_server-partition-bb35e96d6d3184a2d672/0 (3e3b53cd-0b68-4f94-81cf-61da48dd20ab)                   | running | n/a | etcd_server-partition-bb35e96d6d3184a2d672                   | 192.0.2.94  |
| ha_proxy-partition-bb35e96d6d3184a2d672/0 (bc7dada2-8e31-4d70-b314-f85ebb51a503)                      | running | n/a | ha_proxy-partition-bb35e96d6d3184a2d672                      | 192.0.2.254 |
| loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672/0 (57157a16-6309-494f-8133-b831b52bb363) | running | n/a | loggregator_trafficcontroller-partition-bb35e96d6d3184a2d672 | 192.0.2.109 |
| mysql-partition-bb35e96d6d3184a2d672/0 (ff18bb42-3df8-44b3-ba69-0b7e1f2dfc30)                         | running | n/a | mysql-partition-bb35e96d6d3184a2d672                         | 192.0.2.99  |
| mysql_proxy-partition-bb35e96d6d3184a2d672/0 (e4f72312-336d-465d-a8d0-57dc6b7abd66)                   | running | n/a | mysql_proxy-partition-bb35e96d6d3184a2d672                   | 192.0.2.98  |
| nats-partition-bb35e96d6d3184a2d672/0 (654b2470-c6fb-40d1-a7d2-c7839a7c6403)                          | running | n/a | nats-partition-bb35e96d6d3184a2d672                          | 192.0.2.93  |
| nfs_server-partition-bb35e96d6d3184a2d672/0 (df18f438-a0f1-46b2-b3ea-31bf96d47a18)                    | running | n/a | nfs_server-partition-bb35e96d6d3184a2d672                    | 192.0.2.96  |
| router-partition-bb35e96d6d3184a2d672/0 (4fbc6b8c-8bc6-425c-a4fc-04b7127b1f4a)                        | running | n/a | router-partition-bb35e96d6d3184a2d672                        | 192.0.2.97  |
| uaa-partition-bb35e96d6d3184a2d672/0 (82c39142-da6e-42ec-af1c-b0247f74e8bd)                           | running | n/a | uaa-partition-bb35e96d6d3184a2d672                           | 192.0.2.103 |
+-------------------------------------------------------------------------------------------------------+---------+-----+--------------------------------------------------------------+--------------+

VMs total: 18

You can see the full name of each VM in the VM column of the terminal output. Each full name includes:

  • A prefix indicating the component function of the VM. The table below associates each component VM function with a prefix.

  • The word partition

  • An identifier string specific to your deployment

  • An /INDEX suffix. For component processes that run on a single VM instance, INDEX is always 0. For processes running on multiple VMs, INDEX is a sequentially numbered value that uniquely identifies each VM.

For any component, you can look for its prefix in the bosh vms output to find the full name of the VM or VMs that run it. In the example output above, the full name of one of the two Diego Cell VMs is diego_cell-partition-bb35e96d6d3184a2d672/1.
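Because every full name begins with its component prefix, you can also filter the bosh vms output to list the VMs for a single component. A minimal example using the standard grep utility:

$ bosh vms | grep diego_cell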

Start Your PCF Virtual Machines

In the order specified in the Start Order table below, run bosh start VM-NAME for each component in your PCF deployment. Use the full name of the component VM as listed in your bosh vms terminal output, without the /INDEX at the end. In the example here, the first component you would start is the NATS VM, by running bosh start nats-partition-458f9d7042365ff810e9:

$ bosh start nats-partition-458f9d7042365ff810e9

Processing deployment manifest
------------------------------
You are about to start nats-partition-458f9d7042365ff810e9/0

Detecting deployment changes
----------------------------
Start nats-partition-458f9d7042365ff810e9/0? (type 'yes' to continue): yes

Performing `start nats-partition-458f9d7042365ff810e9/0'...

...

Started updating job nats-partition-458f9d7042365ff810e9 > nats-partition-458f9d7042365ff810e9/0 (canary). Done (00:00:43)

Task 42 done

nats-partition-458f9d7042365ff810e9/0 has been started

Note: To start a specific instance of a VM, include the /INDEX at the end of its full name. In the example here, you could start only the first Diego Cell instance by running: $ bosh start diego_cell-partition-458f9d7042365ff810e9/0

Start Order | Component                      | VM name prefix (in bosh vms output)
------------+--------------------------------+------------------------------------
1           | NATS                           | nats-
2           | consul                         | consul_server-
3           | etcd                           | etcd_server-
4           | Diego Database                 | diego_database-
5           | WebDAV Server                  | nfs_server- (The WebDAV job has this prefix for historical reasons)
6           | Router                         | router-
7           | MySQL Proxy                    | mysql_proxy-
8           | MySQL Server                   | mysql-
9           | Cloud Controller Database      | ccdb-
10          | UAA Database                   | uaadb-
11          | Cloud Controller               | cloud_controller-
12          | HAProxy                        | ha_proxy-
13          | Health Manager                 | health_manager-
14          | Clock Global                   | clock_global-
15          | Cloud Controller Worker        | cloud_controller_worker-
16          | UAA                            | uaa-
17          | Diego Brain                    | diego_brain-
18          | Diego Cell                     | diego_cell-
19          | Doppler Server                 | doppler-
20          | Loggregator Traffic Controller | loggregator_trafficcontroller-
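If you prefer to script the manual start sequence, the loop below is a minimal sketch. It assumes BOSH CLI v1, a Bash shell, and the hypothetical deployment identifier 458f9d7042365ff810e9 from the examples above; substitute the identifier from your own bosh vms output, and remove any prefixes that do not appear there (for example, the sample output above has no ccdb- or uaadb- VMs):

#!/usr/bin/env bash
set -e

# Hypothetical identifier -- take yours from the bosh vms output.
ID=458f9d7042365ff810e9

# Component prefixes in start order; drop any your deployment does not run.
START_ORDER=(nats consul_server etcd_server diego_database nfs_server router
  mysql_proxy mysql ccdb uaadb cloud_controller ha_proxy health_manager
  clock_global cloud_controller_worker uaa diego_brain diego_cell doppler
  loggregator_trafficcontroller)

for job in "${START_ORDER[@]}"; do
  bosh -n start "${job}-partition-${ID}"   # -n answers BOSH prompts non-interactively
done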

Stop Your PCF Virtual Machines

In the order specified in the Stop Order table below, run bosh stop VM-NAME for each component in your PCF deployment. Use the full name of the component VM as listed in your bosh vms terminal output, without the /INDEX at the end. In the example here, the first component you would stop is the Loggregator Traffic Controller VM, by running bosh stop loggregator_trafficcontroller-partition-458f9d7042365ff810e9:

$ bosh stop loggregator_trafficcontroller-partition-458f9d7042365ff810e9

Processing deployment manifest
------------------------------
You are about to stop loggregator_trafficcontroller-partition-458f9d7042365ff810e9/0

Detecting deployment changes
----------------------------
Stop loggregator_trafficcontroller-partition-458f9d7042365ff810e9/0? (type 'yes' to continue): yes

Performing `stop loggregator_trafficcontroller-partition-458f9d7042365ff810e9/0'...

...

Started updating job loggregator_trafficcontroller-partition-458f9d7042365ff810e9 > loggregator_trafficcontroller-partition-458f9d7042365ff810e9/0 (canary). Done (00:00:37)

loggregator_trafficcontroller-partition-458f9d7042365ff810e9/0 has been stopped

Note: To stop a specific instance of a VM, include the /INDEX at the end of its full name. In the example here, you could stop only the third Diego Cell instance by running: $ bosh stop diego_cell-partition-458f9d7042365ff810e9/2

Stop Order | Component                      | VM name prefix (in bosh vms output)
-----------+--------------------------------+------------------------------------
1          | Loggregator Traffic Controller | loggregator_trafficcontroller-
2          | Doppler Server                 | doppler-
3          | Diego Cell                     | diego_cell-
4          | Diego Brain                    | diego_brain-
5          | UAA                            | uaa-
6          | Cloud Controller Worker        | cloud_controller_worker-
7          | Clock Global                   | clock_global-
8          | Health Manager                 | health_manager-
9          | HAProxy                        | ha_proxy-
10         | Cloud Controller               | cloud_controller-
11         | UAA Database                   | uaadb-
12         | Cloud Controller Database      | ccdb-
13         | MySQL Server                   | mysql-
14         | MySQL Proxy                    | mysql_proxy-
15         | Router                         | router-
16         | WebDAV Server                  | nfs_server- (The WebDAV job has this prefix for historical reasons)
17         | Diego Database                 | diego_database-
18         | etcd                           | etcd_server-
19         | consul                         | consul_server-
20         | NATS                           | nats-
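The same sketch works for shutdown: iterate the START_ORDER array from the start script above in reverse and call bosh stop instead:

for (( i=${#START_ORDER[@]}-1; i>=0; i-- )); do
  bosh -n stop "${START_ORDER[$i]}-partition-${ID}"
done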