
On-Demand Service Architecture

This topic describes the architecture for on-demand MySQL for Pivotal Cloud Foundry (PCF).

For information about the architecture of the older, pre-provisioned service, see the Architecture topic for MySQL for PCF v1.9.

BOSH 2.0 and the Service Network

When you deploy PCF, you must create a statically defined network to host the component virtual machines that constitute the PCF infrastructure.

PCF components, like the Cloud Controller and UAA, run on this infrastructure network. In PCF v2.0 and earlier, on-demand PCF services require that you host them on a network that runs separately from this network.

With static networking, cloud operators pre-provision service instances from Ops Manager. For each service, Ops Manager allocates and recovers static IP addresses from a pre-defined block of addresses.

To enable on-demand services in PCF v2.0 and earlier, operators must create a service network in Ops Manager Director and select the Service Network checkbox. Operators can then select the service network to host on-demand service instances when they configure the tile for that service.
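If you have BOSH CLI access to the Ops Manager Director, one way to confirm that the service network exists is to view the current cloud config, where it appears under networks. This is a minimal sketch; the environment alias is a placeholder:

    # Show the cloud config generated by Ops Manager, including the
    # infrastructure and service networks ("my-env" is a placeholder).
    $ bosh -e my-env cloud-config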

Default Network and Service Network

On-demand PCF services rely on the BOSH 2.0 ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.

On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.
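For example, a developer can provision an instance from the cf CLI; the service and plan names below are illustrative and vary by tile version:

    # Create a dedicated MySQL service instance in the current space.
    # The broker responds by asking BOSH to deploy a new VM in the
    # service network ("p.mysql" and "db-small" are example names).
    $ cf create-service p.mysql db-small my-db

    # Provisioning is asynchronous; check progress until the status
    # shows "create succeeded".
    $ cf service my-db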

By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem, in which one application hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.

An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.

The diagram below shows worker VMs in an on-demand service instance running on a separate service network, while other components run on the default network.

[Diagram: ODB architecture]

Required Networking Rules for On-Demand Services

Before deploying any service tile that uses the on-demand broker (ODB), the operator must request the network connections needed to allow various components of Pivotal Cloud Foundry (PCF) to communicate with the ODB. The specifics of how to open those connections vary for each IaaS.

The following list shows the responsibilities of the key components in an on-demand architecture.

BOSH Director: Creates and updates service instances as instructed by the ODB.

BOSH Agent: BOSH includes an Agent on every VM that it deploys. The Agent listens for instructions from the Director and carries out those instructions. The Agent receives job specifications from the Director and uses them to assign a role, or Job, to the VM.

BOSH UAA: As an OAuth2 provider, BOSH UAA issues tokens for clients to use when they act on behalf of BOSH users.

ERT: Elastic Runtime; contains the apps that consume services.

ODB: Instructs BOSH to create and update service instances, and connects to service instances to create bindings.

Deployed service instance: Runs the given data service. For example, a deployed Redis for PCF service instance runs the Redis for PCF data service.

Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described below.

Component: ODB
Communicates with: BOSH Director (TCP 25555), BOSH UAA (TCP 8443)
Direction: One-way
Notes: The default ports are not configurable.

Component: ODB
Communicates with: MySQL service instances (TCP 3306)
Direction: One-way
Notes: This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection.

Component: ODB
Communicates with: ERT (TCP 8443)
Direction: One-way
Notes: The default port is not configurable.

Component: Errand VMs
Communicates with: ERT (TCP 8443), ODB (TCP 8080), and deployed service instances (ports specific to the service; may be one or more)
Direction: One-way
Notes: The default ports are not configurable.

Component: BOSH Agent
Communicates with: BOSH Director (TCP 4222)
Direction: Two-way
Notes: The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable.

Component: Deployed apps on ERT
Communicates with: Deployed service instances (ports specific to the service; may be one or more)
Direction: One-way
Notes: This connection is for general-use, app-specific tasks. Avoid opening administrative ports for this connection.

Component: ERT
Communicates with: ODB (TCP 8080)
Direction: One-way
Notes: This port allows Elastic Runtime to communicate with the ODB component.
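How you open these connections depends on the IaaS. As a sketch only, on AWS each rule might translate into a security group ingress rule like the following, where both security group IDs are hypothetical:

    # Allow the ODB's security group to reach MySQL service instances
    # on the administrative port 3306 (one-way, initiated by the ODB).
    # The group IDs below are placeholders for your own groups.
    $ aws ec2 authorize-security-group-ingress \
        --group-id sg-SERVICE-INSTANCES \
        --protocol tcp \
        --port 3306 \
        --source-group sg-ODB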

MySQL Server Defaults

This section describes the defaults that the MySQL for PCF tile applies to its Percona Server components. Other components can also be customized.

Max Connections

All service instances accept up to 750 connections. System processes count towards this limit.
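To compare the configured limit with current usage, you can query the server from any host that can reach the instance. The connection parameters below are placeholders:

    # Show the connection limit and the number of open connections.
    $ mysql -h MYSQL_HOST -u USER -p -e \
        "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"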

Max Allowed Packet: 256 MB

MySQL for PCF allows blobs up to 256 MB in size. You can change this size with the max_allowed_packet system variable if necessary.

Table Definition Cache: 8192

For more information about updating this variable, see the MySQL documentation.

Reverse Name Resolution: OFF

Disabling reverse DNS lookups improves performance. MySQL for PCF uses user credentials, not hostnames, to authenticate access, so most deployments do not need reverse DNS lookups. To restrict access by hostname instead, enable reverse name resolution by clearing this option; the MySQL servers then perform a reverse DNS lookup on each new connection.

Symbolic Links: Disabled

MySQL for PCF is configured to prevent the use of symlinks to tables. This recommended security setting prevents users from manipulating files on the server’s file system.

MyISAM Recover Options: BACKUP, FORCE

This setting enables MySQL for PCF to recover from most MyISAM problems without human intervention. For more information, see the MySQL documentation.

Log Bin Trust Function Creators: ON

This setting relaxes certain constraints on how MySQL writes stored procedures to the binary log. For more information, see the MySQL documentation.
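For example, with binary logging enabled and this setting OFF, a user without SUPER privileges cannot create a stored function that MySQL cannot prove is deterministic (the statement fails with error 1418). With the setting ON, a statement like this hypothetical one succeeds; the function name and connection parameters are placeholders:

    # Create a non-deterministic stored function (UUID() varies per call).
    $ mysql -h MYSQL_HOST -u USER -p MY_DB -e \
        "CREATE FUNCTION order_code(id INT) RETURNS CHAR(32)
         RETURN CONCAT('ORD-', id, '-', LEFT(UUID(), 8));"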

Event Scheduler: ON

MySQL for PCF enables the event scheduler so users can create and utilize events in their dedicated service instances.
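For example, a user could schedule a recurring cleanup job. The event name, table, and connection parameters below are hypothetical:

    # Delete expired rows once an hour using the event scheduler.
    $ mysql -h MYSQL_HOST -u USER -p MY_DB -e \
        "CREATE EVENT purge_expired_sessions
         ON SCHEDULE EVERY 1 HOUR
         DO DELETE FROM sessions WHERE expires_at < NOW();"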

Lower Case Table Names: ON

By default, all table names are stored in lowercase and compared case-insensitively. You can change this option on the MySQL Configuration page. For more information about the use of lowercase table names, see the MySQL documentation.

Audit Log: OFF

The MySQL audit log is off by default. When enabled on the MySQL Monitoring page, logs are written in CSV format to /var/vcap/sys/log/mysql/mysql_audit_log and, if enabled, to a remote syslog drain.
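Operators with BOSH access can inspect the log on a service instance VM. This is a sketch; the environment alias and deployment name are placeholders (on-demand deployments are typically named service-instance_GUID):

    # Tail the audit log on the MySQL VM of one service instance.
    $ bosh -e my-env -d service-instance_GUID ssh mysql/0 \
        -c 'tail /var/vcap/sys/log/mysql/mysql_audit_log'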

InnoDB Buffer Pool Size

Dynamically configured to be 50% of the available memory on each service instance.

InnoDB Log File Size: 256 MB

MySQL for PCF clusters default to a log-file size of 256 MB.

InnoDB Log Buffer Size: 32 MB

MySQL for PCF defaults to 32 MB to avoid excessive disk I/O when issuing large transactions.

InnoDB Auto Increment Lock Mode: 2

The auto-increment lock mode is set to 2, or “interleaved” mode, which allows multiple insert statements to execute concurrently rather than waiting for table-level auto-increment locks. As a trade-off, values in auto-incrementing columns may have gaps.

Collation Server: UTF8 General CI

MySQL for PCF defaults the collation server to utf8_general_ci. You can override this during a session.

Character Set: UTF8

MySQL for PCF defaults all character sets to utf8. You can override this during a session.
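For example, to override the collation and character set for a single session and verify the result (connection parameters are placeholders):

    # Override the session character set and collation, then verify.
    $ mysql -h MYSQL_HOST -u USER -p -e \
        "SET NAMES utf8 COLLATE utf8_unicode_ci;
         SELECT @@session.character_set_client, @@session.collation_connection;"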
