RabbitMQ for PCF Operations FAQs
Ensure that all nodes in the cluster are healthy via the RabbitMQ Management UI, or via the health metrics exposed through the firehose. You cannot rely solely on the bosh instances output, as that reflects the state of the Erlang VM used by RabbitMQ and not the RabbitMQ application.
Only BOSH commands should be used by the operator to interact with the RabbitMQ application. For example:
bosh stop rabbitmq-server
bosh start rabbitmq-server
There are BOSH job lifecycle hooks which are only fired when rabbitmq-server is stopped through BOSH. You can also stop individual instances by running:
bosh stop JOB [index]
Note: Do not use monit stop rabbitmq-server, as this does not call the drain scripts.
BOSH starts the shutdown sequence from the bootstrap instance.
We start by telling the RabbitMQ application to shut down, and then shut down the Erlang VM within which it runs. If this succeeds, we run the following checks to ensure that both the RabbitMQ application and the Erlang VM have stopped:
- If /var/vcap/sys/run/rabbitmq-server/pid exists, check that the PID inside this file does not point to a running Erlang VM process. Notice that we are tracking the Erlang VM PID and not the RabbitMQ PID.
- Check that rabbitmqctl does not return an Erlang VM PID.
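The pid-file check above can be sketched in shell. The pid file path is the one used by the rabbitmq-server BOSH job; the helper function name is our own:

```shell
# Sketch of the first post-shutdown check described above. The pid file path
# is the rabbitmq-server job's; the function name is ours, not BOSH's.
PID_FILE=/var/vcap/sys/run/rabbitmq-server/pid

erlang_vm_stopped() {
  local pid_file="$1"
  # No pid file means there is no Erlang VM to check.
  [ -f "$pid_file" ] || return 0
  local pid
  pid=$(cat "$pid_file")
  # kill -0 sends no signal; it succeeds only if a process with that PID exists.
  ! kill -0 "$pid" 2>/dev/null
}

if erlang_vm_stopped "$PID_FILE"; then
  echo "Erlang VM stopped"
else
  echo "Erlang VM still running"
fi
```

The second check, running rabbitmqctl on the node, must still be done separately.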
Once this completes on the bootstrap instance, BOSH will continue the same sequence on the next instance. All remaining rabbitmq-server instances will be stopped one by one.
If bosh stop fails, you will likely get an error saying that the drain script failed:
result: 1 of 1 drain scripts failed. Failed Jobs: rabbitmq-server.
The drain script logs to /var/vcap/sys/log/rabbitmq-server/drain.log. If you have a remote syslog configured, the drain log is forwarded there as well.
bosh ssh into the failing rabbitmq-server instance and start the rabbitmq-server job by running monit start rabbitmq-server. You will not be able to start the job via bosh start, as bosh start always runs the drain script first and will fail while the drain script is failing.
Once the rabbitmq-server job is running (confirm this via monit status), run /var/vcap/jobs/rabbitmq-server/bin/drain manually. This will tell you exactly why it is failing.
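To capture the drain script's exit status along with its output in one step, a small wrapper along these lines can help (the helper name is ours; the drain path is the one given above):

```shell
# Hypothetical helper: run a drain script by hand and report its exit status.
# On the instance you would pass /var/vcap/jobs/rabbitmq-server/bin/drain.
run_drain() {
  local drain="$1"
  if "$drain"; then
    echo "drain succeeded"
  else
    # Inside the else branch, $? still holds the drain script's exit status.
    echo "drain failed with exit code $?"
  fi
}
```

After running it, inspect /var/vcap/sys/log/rabbitmq-server/drain.log for the details behind a non-zero exit.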
It is possible to back up the state of a RabbitMQ cluster for both the on-demand and pre-provisioned services using the RabbitMQ Management API. Backups include vhosts, exchanges, queues, and users.
Back up Manually
Log in to the RabbitMQ Management UI as the admin user you created.
Select the export definitions option from the main page; this downloads the definitions as a JSON file.
Back up and Restore with a Script
Use the API to run scripts with code similar to the following:
For the backup:
curl -u "$USERNAME:$PASSWORD" "http://$RABBIT_ADDRESS:15672/api/definitions" -o "$BACKUP_FOLDER/rabbit-backup.json"
For the restore:
curl -u "$USERNAME:$PASSWORD" "http://$RABBIT_ADDRESS:15672/api/definitions" -X POST -H "Content-Type: application/json" -d "@$BACKUP_FOLDER/rabbit-backup.json"
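Because an empty or truncated download would silently overwrite a good backup on the next restore, it can be worth sanity-checking the exported file before keeping it. A minimal sketch, where the helper name is ours and a simple grep stands in for real JSON parsing:

```shell
# Hypothetical sanity check for a definitions export. A real check would parse
# the JSON; grepping for the expected top-level keys is enough to catch an
# empty file or an error-page download.
backup_looks_valid() {
  local file="$1"
  for key in users vhosts queues exchanges; do
    grep -q "\"$key\"" "$file" || return 1
  done
}
```

For example: backup_looks_valid "$BACKUP_FOLDER/rabbit-backup.json" || echo "backup incomplete"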
Before doing any upgrade of RabbitMQ, Pivotal recommends checking the following:
- In Operations Manager, check that the status of all of the instances is healthy.
- Log in to the RabbitMQ Management UI and check that no alarms have been triggered and that all nodes are healthy, that is, they display as green.
- Check that the system is not close to hitting either the memory or disk alarm. Do this by looking at what has been consumed by each node in the RabbitMQ Management UI.
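The alarm check can also be scripted: GET /api/nodes on the Management API reports mem_alarm and disk_free_alarm per node. A sketch assuming the response has already been saved to a file (the helper name is ours):

```shell
# Hypothetical pre-upgrade check: given the JSON from GET /api/nodes
# (e.g. curl -u "$USERNAME:$PASSWORD" "http://$RABBIT_ADDRESS:15672/api/nodes" -o nodes.json),
# flag any node whose memory or disk alarm has fired.
nodes_have_alarms() {
  local file="$1"
  grep -Eq '"(mem_alarm|disk_free_alarm)"[[:space:]]*:[[:space:]]*true' "$file"
}
```

If nodes_have_alarms succeeds, resolve the alarm before starting the upgrade.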