On-Demand Networking

This topic describes networking for on-demand services, including MySQL for Pivotal Platform.

Service Network Requirement

When you deploy Pivotal Cloud Foundry, you must create a statically defined network to host the component virtual machines that constitute the Pivotal Cloud Foundry infrastructure.

Pivotal Cloud Foundry components, like the Cloud Controller and UAA, run on this infrastructure network. On-demand Pivotal Cloud Foundry services may require that you host them on a network that runs separately from the Pivotal Cloud Foundry default network. You can also deploy tiles on separate service networks to meet your own security requirements.

Pivotal Cloud Foundry v2.1 and later include dynamic networking. Operators can use this dynamic networking with asynchronous service provisioning to define dynamically-provisioned service networks. For more information, see Default Network and Service Network.

In Pivotal Cloud Foundry v2.1 and later, on-demand services are enabled by default on all networks. Operators can create separate networks to host services in BOSH Director, but doing so is optional. Operators select which network hosts on-demand service instances when they configure the tile for that service.

Default Network and Service Network

On-demand MySQL for Pivotal Platform services rely on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.
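
To make this concrete, the sketch below shows roughly what a dedicated service network can look like in raw BOSH cloud config terms. It is a minimal, hypothetical example: the network name, IP range, availability zones, and file name are placeholders, and in a Pivotal Platform deployment the operator normally defines this network in the BOSH Director tile and then selects it when configuring the on-demand service tile.

    # Hypothetical sketch only: names, ranges, and AZs below are placeholders.
    # In practice this network is defined in the BOSH Director tile and selected
    # in the on-demand service tile; it is shown here in raw cloud-config form.
    cat > service-network.yml <<'EOF'
    networks:
    - name: services              # dedicated network for on-demand service instances
      type: manual
      subnets:
      - range: 10.0.8.0/24        # separate range from the default network
        gateway: 10.0.8.1
        azs: [z1, z2]
        reserved: [10.0.8.1-10.0.8.10]
        cloud_properties: {}      # IaaS-specific subnet settings go here
    EOF

The on-demand service broker itself stays on the default network and asks the BOSH Director to deploy service instance VMs into a network like this one.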

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem, in which one app hogs resources on a shared cluster. Single-tenant services can also help meet regulatory requirements that call for sensitive data to be compartmentalized on separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other Pivotal Platform components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance running on a separate services network, while other components run on the default network.

Required Networking Rules for On-Demand Services

Before deploying a service tile that uses the on-demand service broker (ODB), request the needed network connections to allow components of Pivotal Platform to communicate with ODB.

The specifics of how to open those connections vary for each IaaS.
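
For example, on AWS these rules typically become security group ingress rules. The command below is a hypothetical sketch: both security group IDs are placeholders, and it opens only one of the connections listed in the tables that follow (ODB to the BOSH Director on port 25555). Other IaaSes, such as vSphere, GCP, or Azure, use their own firewall or security group tooling.

    # Hypothetical AWS sketch: allow the ODB (in the source security group) to reach
    # the BOSH Director (in the destination security group) on TCP 25555.
    # Both group IDs are placeholders.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 25555 \
      --source-group sg-0fedcba9876543210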

See the following table for key components and their responsibilities in an on-demand architecture.

Key Component | Responsibility
BOSH Director | Creates and updates service instances as instructed by ODB.
BOSH Agent | Runs on every VM that the BOSH Director deploys. The agent listens for instructions from the BOSH Director and carries out those instructions. The agent receives job specifications from the BOSH Director and uses them to assign a role, or job, to the VM.
BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.
Pivotal Application Service | Contains the apps that are consuming services.
ODB | Instructs BOSH to create and update services, and connects to services to create bindings.
Deployed service instance | Runs the given data service. For example, the deployed Redis for Pivotal Platform service instance runs the Redis for Pivotal Platform data service.
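
After you open the connections described in the sections below, a quick way to confirm that a source component can actually reach its destination on the expected port is a plain TCP check run from the source VM. The address below is a placeholder for the BOSH Director's IP on the default network, and 25555 is the BOSH Director port listed in the next section.

    # Hypothetical connectivity check, run from the source VM (for example, the ODB VM).
    # Replace the address with the BOSH Director's IP on your default network.
    nc -zv 10.0.0.6 25555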

Required Networking Rules for MySQL for Pivotal Platform

Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.

Source Component | Destination Component | Default TCP Port | Notes
BOSH Agent | BOSH Director | 4222 | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. The communication between these components is two-way.
Broker and service instances | Doppler on Pivotal Application Service | 8082 | This port is for metrics.
Deployed apps on Pivotal Application Service | MySQL service instances | 3306 | This port is for general use, app-specific tasks. In addition to configuring your IaaS, create a security group for the MySQL service instance (see the example after this table).
ODB | BOSH Director and BOSH UAA | 25555 (BOSH Director), 8443 (UAA), 8844 (CredHub) | The default ports are not configurable.
ODB | MySQL service instances | 8443, 3306 | This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection.
ODB | Pivotal Application Service | 8443 | The default port is not configurable.
Pivotal Application Service | ODB | 8080 | This port allows Pivotal Application Service to communicate with the ODB component.
Deployed apps on Pivotal Application Service | Runtime CredHub | 8844 | This port is needed if secure service instance credentials are enabled. For more information, see Configure Security.
Pivotal Application Service | MySQL service instances | 8853 | This port is for DNS to run health checks against service instances.
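
The security group mentioned in the 3306 row above can be expressed as a Cloud Foundry application security group (ASG). The example below is a sketch only: the destination range is a placeholder for your service network CIDR, the group, org, and space names are hypothetical, and the commands use cf CLI v6 syntax.

    # Hypothetical ASG sketch: let apps reach MySQL service instances on the
    # service network (placeholder range 10.0.8.0/24) over TCP 3306.
    cat > mysql-asg.json <<'EOF'
    [
      {
        "protocol": "tcp",
        "destination": "10.0.8.0/24",
        "ports": "3306"
      }
    ]
    EOF
    cf create-security-group mysql-service-access mysql-asg.json
    cf bind-security-group mysql-service-access my-org my-space
    # Running apps pick up the new rules only after they are restarted.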

Required Networking Rules for Leader-Follower Plans

In addition to the networking rules required for MySQL for Pivotal Platform, operators of leader-follower service plans must ensure that network rules are set up so that connections are open as described in the table below.

Source VM | Destination VM | Default TCP Port | Notes
Leader VM | Follower VM | 8443, 8081 | These ports are needed if leader-follower is enabled. For more information, see Configure a Leader-Follower Service Plan. The communication between these VMs is two-way.

Required Networking Rules for Highly Available (HA) Cluster Plans

In addition to the networking rules required for MySQL for Pivotal Platform, operators of highly available (HA) cluster service plans must ensure that network rules are set up so that connections are open as described in the table below.

Warning: Highly available plans are in general availability (GA). However, they are for advanced use cases only.

Source VM | Destination VM | Default TCP Port | Notes
Pivotal Application Service | MySQL service instances | 8083 | This port is needed to monitor cluster health with Switchboard. For more information, see Monitoring Node Health (HA Cluster).
Jumpbox VM | Pivotal Application Service UAA | 8443 | This port is needed so that the replication canary can create a UAA client for sending email notifications. For more information, see About the Replication Canary.
HA cluster node | HA cluster node | 4567, 4568, 4444 | These ports are needed to maintain network connectivity between nodes in an HA cluster (see the sketch after this table). For more information, see Firewall Configuration in the Percona XtraDB Cluster documentation. The communication between these VMs is two-way.
Galera healthcheck | Galera healthcheck | 9200 | This port is for monitoring the health of nodes in an HA cluster. The communication between these VMs is two-way.
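
If you manage host-level firewalls directly, the node-to-node Galera ports in the table above can be expressed as a single rule. The iptables command below is a sketch with a placeholder source range; most deployments express the same rule through IaaS security groups instead. The port roles (4567 for Galera replication traffic, 4568 for incremental state transfers, 4444 for state snapshot transfers) are described in the Percona XtraDB Cluster documentation.

    # Hypothetical host-level rule: allow HA cluster nodes (placeholder range
    # 10.0.8.0/24) to reach each other on the Galera ports. Most deployments use
    # IaaS security groups for this instead of iptables.
    iptables -A INPUT -p tcp -m multiport --dports 4567,4568,4444 -s 10.0.8.0/24 -j ACCEPT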