
On-Demand Networking

This topic describes networking for on-demand services, including MySQL for Pivotal Cloud Foundry (PCF).

Service Network Requirement

When you deploy PCF, you must create a statically defined network to host the component virtual machines that constitute the PCF infrastructure.

PCF components, like the Cloud Controller and UAA, run on this infrastructure network. On-demand PCF services may require that you host them on a network that runs separately from the PCF default network. You can also deploy tiles on separate service networks to meet your own security requirements.

PCF v2.1 and later include dynamic networking. Operators can use dynamic networking with asynchronous service provisioning to define dynamically provisioned service networks. For more information, see Default Network and Service Network.

In PCF v2.1 and later, on-demand services are enabled by default on all networks. Operators can create separate networks to host services in Ops Manager Director, but doing so is optional. Operators select which network hosts on-demand service instances when they configure the tile for that service.

Default Network and Service Network

On-demand PCF services rely on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than requiring the operator to pre-provision a fixed quantity of IaaS resources when deploying the service broker.

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem, in which one app hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance running on a separate services network, while other components run on the default network.

Required Networking Rules for On-Demand Services

Before deploying a service tile that uses the on-demand service broker (ODB), request the needed network connections to allow components of Pivotal Cloud Foundry (PCF) to communicate with ODB.

The specifics of how to open those connections vary for each IaaS.

See the following table for key components and their responsibilities in an on-demand architecture.

Component | Responsibility
--------- | --------------
BOSH Director | Creates and updates service instances as instructed by the ODB.
BOSH Agent | Runs on every VM that the BOSH Director deploys. The agent listens for instructions from the BOSH Director and carries out those instructions. It receives job specifications from the BOSH Director and uses them to assign a role, or job, to the VM.
BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.
PAS | Contains the apps that consume services.
ODB | Instructs BOSH to create and update services, and connects to services to create bindings.
Deployed service instance | Runs the given data service. For example, the deployed Redis for PCF service instance runs the Redis for PCF data service.

Required Networking Rules for MySQL for PCF

Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.

Component | Must communicate with | Default TCP port(s) | Direction | Notes
--------- | --------------------- | ------------------- | --------- | -----
BOSH Agent | BOSH Director | 4222 | Two-way | The BOSH Agent runs on every VM in the system, including the BOSH Director VM, and initiates the connection with the BOSH Director. The default port is not configurable.
Broker and service instances | Doppler on PAS or Elastic Runtime | 8082 | One-way | This port is for metrics.
Deployed apps on PAS or Elastic Runtime | MySQL service instances | 3306 | One-way | This port is for general use, app-specific tasks. In addition to configuring your IaaS, create a security group for the MySQL service instance.
ODB | BOSH Director and BOSH UAA | 25555 (Director), 8443 (UAA) | One-way | The default ports are not configurable.
ODB | MySQL service instances | 8443, 3306 | One-way | This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection.
ODB | PAS or Elastic Runtime | 8443 | One-way | The default port is not configurable.
PAS or Elastic Runtime | ODB | 8080 | One-way | This port allows PAS or Elastic Runtime to communicate with the ODB component.
Deployed apps on PAS | Runtime CredHub | 8844 | One-way | This port is needed if secure service instance credentials are enabled. For information, see Configure Security.
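Once the firewall rules are in place, you can spot-check them from a VM on the source side of each rule. The script below is a minimal sketch, not part of the product: the hostnames are placeholder assumptions, and you must substitute the addresses from your own deployment. It only tests TCP reachability to the destination ports listed above.

```python
import socket

# Destination host/port pairs mirroring the table above.
# All hostnames are placeholders (assumptions), not real addresses.
REQUIRED_RULES = [
    ("bosh-director.example.internal", 4222),   # BOSH Agent -> BOSH Director
    ("doppler.example.internal", 8082),         # broker and instances -> Doppler (metrics)
    ("mysql.example.internal", 3306),           # deployed apps -> MySQL service instances
    ("bosh-director.example.internal", 25555),  # ODB -> BOSH Director
    ("bosh-uaa.example.internal", 8443),        # ODB -> BOSH UAA
    ("odb.example.internal", 8080),             # PAS -> ODB
    ("credhub.example.internal", 8844),         # deployed apps -> runtime CredHub
]

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in REQUIRED_RULES:
        status = "open" if port_open(host, port) else "BLOCKED"
        print(f"{host}:{port} {status}")
```

A check like this confirms only that a TCP handshake completes; it does not validate TLS configuration or application-level credentials on those ports.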

Required Networking Rules for Leader-Follower Plans

In addition to the networking rules required for MySQL for PCF, if you are using a leader-follower service plan, the operator must ensure network rules are set up so that connections are open as described in the table below.

Component | Must communicate with | Default TCP port(s) | Direction | Notes
--------- | --------------------- | ------------------- | --------- | -----
Leader VM | Follower VM | 8443, 8081 | Two-way | These ports are needed if leader-follower is enabled. For information, see Configure a Leader-Follower Service Plan.
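Because the row above is two-way, either VM can initiate the connection, and most IaaS firewalls model this as separate one-way openings. The helper below is an illustrative sketch (the VM names are placeholder assumptions) showing how one two-way row expands into the directed rules an IaaS typically expects:

```python
def expand_two_way(vm_a, vm_b, ports):
    """Expand a two-way firewall row into directed (source, destination, port)
    openings, one per direction per port."""
    return [(src, dst, p)
            for src, dst in ((vm_a, vm_b), (vm_b, vm_a))
            for p in ports]

# Leader-follower row from the table above: ports 8443 and 8081, two-way,
# so four directed openings in total.
LEADER_FOLLOWER_OPENINGS = expand_two_way("leader-vm", "follower-vm", [8443, 8081])
```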

Required Networking Rules for Highly Available (HA) Cluster Plans

In addition to the networking rules required for MySQL for PCF, if you are using a highly available cluster service plan, the operator must ensure network rules are set up so that connections are open as described in the table below.

WARNING: Highly available (HA) cluster service plans are currently in beta. HA clusters are for advanced use cases only.

Component | Must communicate with | Default TCP port(s) | Direction | Notes
--------- | --------------------- | ------------------- | --------- | -----
PAS or Elastic Runtime | MySQL service instances | 8083 | One-way | This port is needed to monitor cluster health with Switchboard. For more information, see Monitoring Node Health.
mysql-diag | MySQL service instances | 8112 | One-way | This port is needed for troubleshooting cluster nodes with mysql-diag.
HA cluster node | HA cluster node | 4567, 4568, 4444 | Two-way | These ports maintain network connectivity between nodes in an HA cluster. For more information, see Firewall Configuration in the Percona XtraDB Cluster documentation.
Galera healthcheck | Galera healthcheck | 9200 | Two-way | This port is for monitoring the health of nodes in an HA cluster.
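The node-to-node rows above are two-way and apply between every pair of cluster nodes, so the number of directed firewall openings grows with cluster size. The sketch below (node names are placeholder assumptions) enumerates them for a three-node cluster, using the Galera ports documented by Percona (4567 for replication, 4568 for incremental state transfer, 4444 for state snapshot transfer) plus the healthcheck port:

```python
from itertools import permutations

# Node-to-node ports from the table above.
CLUSTER_PORTS = [4567, 4568, 4444, 9200]

def cluster_openings(nodes, ports=CLUSTER_PORTS):
    """Directed (source, destination, port) openings for full two-way
    connectivity between every ordered pair of distinct cluster nodes."""
    return [(src, dst, p) for src, dst in permutations(nodes, 2) for p in ports]

openings = cluster_openings(["node-0", "node-1", "node-2"])
# 3 nodes -> 6 directed pairs x 4 ports = 24 openings
```

Enumerating the openings as data like this makes it straightforward to diff the expected rules against what your IaaS actually has configured.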