Networking for On-Demand Services
This section describes networking considerations for Pivotal Cloud Cache.
When you deploy Pivotal Cloud Foundry, you must create a statically defined network to host the component VMs that make up the Pivotal Cloud Foundry infrastructure. Pivotal Cloud Foundry components, such as Cloud Controller and UAA, run on this infrastructure network.
On-demand Pivotal Cloud Foundry services might require hosting on a network separate from the Pivotal Cloud Foundry default network. You can also deploy on-demand services on a separate service network to meet your own security requirements.
Pivotal Cloud Foundry supports dynamic networking. Operators can use dynamic networking with asynchronous service provisioning to define dynamically-provisioned service networks. For more information, see Default Network and Service Network below.
On-demand services are enabled by default on all networks. Operators can optionally create separate networks in the BOSH Director to host services, and can select which network hosts on-demand service instances when they configure the tile for that service.
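As an illustration, an operator might define a dedicated service network alongside the default network in the BOSH Director cloud config. This is a minimal sketch only; the network names, availability zone, address ranges, and empty `cloud_properties` below are placeholder assumptions, not values from this document:

```yaml
# Illustrative BOSH cloud-config fragment: a separate service network
# next to the default network. All names and ranges are examples.
networks:
- name: default            # hosts shared PCF components
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    az: z1
    cloud_properties: {}   # IaaS-specific settings go here
- name: services           # selected in the on-demand service tile
  type: manual
  subnets:
  - range: 10.0.8.0/24
    gateway: 10.0.8.1
    az: z1
    reserved: ["10.0.8.1-10.0.8.10"]
    cloud_properties: {}
```

With a layout like this, the tile configuration points on-demand service instances at the `services` network while shared components stay on `default`.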
On-demand Pivotal Cloud Cache services use BOSH to dynamically deploy VMs and create single-tenant service instances in a dedicated network. On-demand services use the dynamically-provisioned service network to host single-tenant worker VMs. These worker VMs run as service instances within development spaces.
This on-demand architecture has the following advantages:
- Developers can provision IaaS resources for their service instances when the instances are created. This removes the need for operators to pre-provision a fixed amount of IaaS resources when they deploy the service broker.
- Service instances run on a dedicated VM and do not share VMs with unrelated processes. This removes the “noisy neighbor” problem, where an app monopolizes resources on a shared cluster.
- Single-tenant services can support regulatory compliance requirements that mandate keeping sensitive data on separate machines.
An on-demand service separates operations between the default network and the service network. Shared service components, such as executive controllers and databases, Cloud Controller, UAA, and other on-demand components, run on the default network. Worker pools deployed to specific spaces run on the service network.
The diagram below shows worker VMs in an on-demand service instance running on a separate services network, while other components run on the default network.
Before deploying a service tile that uses the on-demand service broker (ODB), you must create networking rules to enable Pivotal Cloud Foundry components to communicate with ODB. For instructions on creating networking rules, see the documentation for your IaaS.
The following table lists key components and their responsibilities in the on-demand architecture.
| Key Components | Component Responsibilities |
| --- | --- |
| BOSH Director | Creates and updates service instances as instructed by ODB. |
| BOSH Agent | Adds an agent on every VM that it deploys. The agent listens for instructions from the BOSH Director and executes those instructions. The agent receives job specifications from the BOSH Director and uses them to assign a role or job to the VM. |
| BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users. |
| Pivotal Application Service (PAS) | Contains the apps that consume services. |
| ODB | Instructs BOSH to create and update services. Connects to services to create bindings. |
| Deployed service instance | Runs the given service. For example, a deployed Pivotal Cloud Cache service instance runs the Pivotal Cloud Cache service. |
Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.
| This component… | Must communicate with… | Default TCP Port | Communication direction(s) | Notes |
| --- | --- | --- | --- | --- |
| ODB | BOSH Director, BOSH UAA, and CredHub | | One-way | The BOSH Director and BOSH UAA default ports are not configurable. The CredHub default port is configurable. |
| ODB | Deployed service instances | Specific to the service (such as RabbitMQ for PCF). May be one or more ports. | One-way | This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection. |
| ODB | PAS (or Elastic Runtime) | 8443 | One-way | The default port is not configurable. |
| | | | One-way | The default port is not configurable. |
| BOSH Agent | BOSH Director | 4222 | Two-way | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. |
| Deployed apps on PAS (or Elastic Runtime) | Deployed service instances | Specific to the service. May be one or more ports. | One-way | This connection is for general use, app-specific tasks. Avoid opening administrative ports for this connection. |
| PAS (or Elastic Runtime) | ODB | 8080 | One-way | This port may be different for individual services. This port may also be configurable by the operator if allowed by the tile developer. |
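After setting up these rules, an operator could sanity-check each required route with a short connectivity probe. This is a minimal sketch, not part of the product; the hostnames are hypothetical placeholders, and the ports mirror the defaults in the table above:

```python
import socket

# (route label, destination host, TCP port) -- hostnames are placeholders;
# ports follow the defaults listed in the table above.
REQUIRED_ROUTES = [
    ("ODB -> PAS", "pas.sys.example.internal", 8443),
    ("PAS -> ODB", "odb.service.example.internal", 8080),
    ("BOSH Agent -> BOSH Director", "bosh.example.internal", 4222),
]

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

if __name__ == "__main__":
    for label, host, port in REQUIRED_ROUTES:
        status = "open" if port_open(host, port) else "BLOCKED"
        print(f"{label}: {host}:{port} {status}")
```

Run such a probe from each source component's network so the check exercises the same firewall path the real traffic will take.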
PCC service instances running within distinct PCF foundations can communicate with each other across a WAN. In such a topology, the members within one service instance use their own private address space, as defined in RFC 1918.
A VPN can be used to connect the private address spaces across the WAN. The steps required to enable VPN connectivity depend on the IaaS provider(s).
The private address space for each service instance's network must be configured with non-overlapping CIDR blocks. Configure the network before creating service instances. For directions on creating a network on your IaaS provider, see the section titled Architecture and Installation Overview.
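The non-overlap requirement can be checked mechanically before any networks are created. The sketch below uses Python's standard `ipaddress` module; the CIDR blocks are illustrative examples, not values from this document:

```python
import ipaddress
from itertools import combinations

def find_overlaps(cidrs):
    """Return the pairs of CIDR blocks that overlap each other."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# Example: one candidate service-instance network per PCF foundation.
foundation_cidrs = ["10.0.8.0/24", "10.1.8.0/24", "10.0.8.128/25"]
print(find_overlaps(foundation_cidrs))  # the /25 falls inside the first /24
```

An empty result means the chosen blocks are safe to use across the WAN-connected foundations.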