On-Demand Service Architecture
Note: This version of RabbitMQ for Pivotal Platform is no longer supported because it has reached the End of General Support phase. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic describes the architecture for on-demand RabbitMQ® for Pivotal Cloud Foundry (PCF).
For information about architecture of the older, pre-provisioned service, see Deploying the RabbitMQ® Service.
When you deploy Pivotal Cloud Foundry, you must create a statically defined network to host the component virtual machines that constitute the Pivotal Cloud Foundry infrastructure.
Pivotal Cloud Foundry components, like the Cloud Controller and UAA, run on this infrastructure network. On-demand Pivotal Cloud Foundry services may require a network that runs separately from the default Pivotal Cloud Foundry network. You can also deploy tiles on separate service networks to meet your own security requirements.
Pivotal Cloud Foundry v2.1 and later include dynamic networking. Operators can use this dynamic networking with asynchronous service provisioning to define dynamically-provisioned service networks. For more information, see Default Network and Service Network.
In Pivotal Cloud Foundry v2.1 and later, on-demand services are enabled by default on all networks. Operators can create separate networks to host services in BOSH Director, but doing so is optional. Operators select which network hosts on-demand service instances when they configure the tile for that service.
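For illustration, a separate service network is defined in the BOSH Director's cloud config like any other subnet. This is a minimal sketch only; the network name, CIDR range, and availability zones below are hypothetical, not values shipped with the tile:

```yaml
# Excerpt from a BOSH cloud config: a dedicated network for on-demand
# service instances, alongside the default network. Names and ranges
# are examples only.
networks:
- name: services
  type: manual
  subnets:
  - range: 10.0.8.0/24
    gateway: 10.0.8.1
    azs: [z1, z2, z3]
    reserved: [10.0.8.1-10.0.8.9]
    cloud_properties: {}   # IaaS-specific settings go here
```

When the operator configures the tile, they select this network as the one that hosts on-demand service instances.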
On-demand RabbitMQ for PCF services rely on the ability of BOSH 2.0 to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.
On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.
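In practice, a developer triggers this on-demand provisioning with the standard cf CLI. The sketch below assumes a live foundation with the tile installed; the service name, plan, and instance names are examples and may differ in your marketplace:

```shell
# Create a dedicated RabbitMQ service instance; BOSH deploys new VMs
# in the service network for it (provisioning is asynchronous).
cf create-service p.rabbitmq single-node my-rabbit

# Poll until the BOSH deployment finishes.
cf service my-rabbit

# Bind an app so it receives credentials for this dedicated instance.
cf bind-service my-app my-rabbit
```

Because provisioning is asynchronous, `create-service` returns immediately and the instance becomes usable only once `cf service` reports that the create operation succeeded.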
By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem when one app hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.
An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other Pivotal Cloud Foundry components. The worker pool deployed to specific spaces runs on the service network.
The diagram below shows worker VMs in an on-demand service instance running on a separate service network, while other components run on the default network.
Before deploying a service tile that uses the on-demand service broker (ODB), request the needed network connections to allow components of Pivotal Cloud Foundry to communicate with ODB.
The specifics of how to open those connections vary for each IaaS.
See the following table for key components and their responsibilities in an on-demand architecture.
|Key Components|Their Responsibilities|
|---|---|
|BOSH Director|Creates and updates service instances as instructed by ODB.|
|BOSH Agent|Runs on every VM that BOSH deploys. The agent listens for instructions from the BOSH Director and carries out those instructions. The agent receives job specifications from the BOSH Director and uses them to assign a role, or job, to the VM.|
|BOSH UAA|Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.|
|PAS|Contains the apps that consume services.|
|ODB|Instructs BOSH to create and update services, and connects to services to create bindings.|
|Deployed service instance|Runs the given data service. For example, a deployed RabbitMQ for Pivotal Platform service instance runs the RabbitMQ data service.|
Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.
|Source Component|Destination Component|Default TCP Port|Notes|
|---|---|---|---|
|ODB|BOSH Director|25555 (BOSH Director)|The default ports are not configurable.|
|ODB|Deployed service instances|15672 (RabbitMQ Management UI)|This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection.|
|ODB|PAS|8443 (UAA)|The default port is not configurable.|
|Errand VMs|Deployed service instances|15672 (RabbitMQ Management UI)|The default port is not configurable.|
|BOSH Agent|BOSH Director|4222|The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. The communication between these components is two-way.|
|Deployed apps on PAS|Deployed service instances|15672 (RabbitMQ Management UI)|This connection is for general use, app-specific tasks. Avoid opening administrative ports for this connection.|
|PAS|ODB|8080|This port may be different for individual services. This port may also be configurable by the operator if allowed by the tile developer.|
|Deployed apps on PAS|Runtime CredHub|8844 (CredHub)|This port is needed if secure service instance credentials are enabled.|
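When verifying that these network rules are in place, a quick TCP reachability check can confirm each route before installing the tile. The sketch below is not part of the product; the hostnames are placeholders for your own deployment's addresses, and it tests only TCP connectivity, not application-level authentication:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hostnames; substitute the addresses used in your deployment.
REQUIRED_ROUTES = [
    ("bosh-director.example.internal", 25555),  # ODB -> BOSH Director
    ("rabbitmq-si.example.internal", 15672),    # ODB / apps -> service instance (Management UI)
    ("odb.example.internal", 8080),             # PAS -> ODB
    ("credhub.example.internal", 8844),         # apps -> Runtime CredHub
]

if __name__ == "__main__":
    for host, port in REQUIRED_ROUTES:
        status = "open" if tcp_reachable(host, port) else "BLOCKED"
        print(f"{host}:{port} {status}")
```

A route reported as BLOCKED indicates a missing firewall or security-group rule for that source-destination pair in the table above.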