On-Demand Service Architecture
This topic describes the architecture for on-demand MySQL for Pivotal Cloud Foundry (PCF).
For information about architecture of the older, pre-provisioned service, see the Architecture topic for MySQL for PCF v1.9.
Before BOSH 2.0, cloud operators pre-provisioned service instances from Ops Manager. In the Ops Manager Director Networking pane, they allocated a block of IP addresses for the service instance pool, and under Resource Config they provisioned pool VM resources, specifying the CPU, hard disk, and RAM they would use. All instances had to be provisioned at the same level. With each `create-service` request from a developer, Ops Manager handed out a static IP address from this block, and with each `delete-service` it cleaned up the VM and returned it to the available pool.
With BOSH 2.0 dynamic networking and Cloud Foundry asynchronous service provisioning, operators can now define a dynamically-provisioned service network that hosts instances more flexibly. The service network runs separate from the PCF default network. While the default network hosts VMs launched by Ops Manager, the VMs running in the service network are created and provisioned on-demand by BOSH, and BOSH lets the IaaS assign IP addresses to the service instance VMs. Each dynamic network attached to a job instance is typically represented as its own Network Interface Controller in the IaaS layer.
Operators enable on-demand services when they deploy PCF, by creating one or more service networks in the Ops Manager Director Create Networks pane and selecting the Service Network checkbox. Designating a network as a service network prevents Ops Manager from creating VMs in the network, leaving instance creation to the underlying BOSH.
When they deploy an on-demand service, operators select the service network when configuring the tile for that on-demand service.
Like other on-demand PCF services, on-demand MySQL for PCF relies on BOSH 2.0’s ability to dynamically deploy VMs in a dedicated network. The on-demand service broker uses this capability to create single-tenant service instances in a dedicated service network.
On-demand services use the dynamically-provisioned service network to host the single-tenant worker VMs that run as service instances within development spaces. This architecture lets developers provision IaaS resources for their service instances at creation time, rather than the operator pre-provisioning a fixed quantity of IaaS resources when they deploy the service broker.
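From the developer's perspective, this is the standard asynchronous service workflow. A minimal sketch of the CLI flow, assuming hypothetical service and plan names (`p.mysql`, `db-small`) and app name (`my-app`); check `cf marketplace` for the names in your installation:

```shell
# Provision a dedicated service instance; BOSH creates the VM asynchronously.
cf create-service p.mysql db-small my-db

# The instance shows "create in progress" until BOSH finishes deploying it.
cf services

# ODB connects to the instance to create the binding credentials.
cf bind-service my-app my-db
```

Because provisioning is asynchronous, `cf create-service` returns immediately; the instance is usable only once its status reports that the create succeeded.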
By making services single-tenant, where each instance runs on a dedicated VM rather than sharing VMs with unrelated processes, on-demand services eliminate the “noisy neighbor” problem when one application hogs resources on a shared cluster. Single-tenant services can also support regulatory compliance where sensitive data must be compartmentalized across separate machines.
An on-demand service splits its operations between the default network and the service network. Shared components of the service, such as executive controllers and databases, run centrally on the default network along with the Cloud Controller, UAA, and other PCF components. The worker pool deployed to specific spaces runs on the service network.
Before deploying the MySQL for PCF service tile that uses the on-demand broker (ODB), the operator must request the network connections needed to allow various components of Pivotal Cloud Foundry (PCF) to communicate with ODB. The specifics of how to open those connections vary by IaaS.
The following table shows the responsibilities of the key components in an on-demand architecture.
| Key Components | Their Responsibility |
| --- | --- |
| BOSH Director | Creates and updates service instances as instructed by ODB |
| BOSH Agent | BOSH includes an Agent on every VM that it deploys. The Agent listens for instructions from the Director and carries out those instructions. The Agent receives job specifications from the Director and uses them to assign a role, or Job, to the VM. |
| BOSH UAA | As an OAuth2 provider, BOSH UAA issues tokens for clients to use when they act on behalf of BOSH users. |
| PAS or Elastic Runtime | Contains the apps that are consuming services |
| ODB | Instructs BOSH to create and update services, and connects to services to create bindings |
| Deployed service instance | Runs the given data service (for example, the deployed Redis for PCF service instance runs the Redis for PCF data service) |
Regardless of the specific network layout, the operator must ensure network rules are set up so that connections are open as described in the table below.
| This component… | Must communicate with… | Default TCP Port | Communication direction(s) | Notes |
| --- | --- | --- | --- | --- |
| BOSH Agent | BOSH Director | 4222 | Two-way | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. |
| Broker and service instances | Doppler on PAS or Elastic Runtime | 8082 | One-way | This port is for metrics. |
| Deployed apps on PAS or Elastic Runtime | MySQL service instances | 3306 | One-way | This port is for general use, app-specific tasks. In addition to configuring your IaaS, create a security group for the MySQL service instance. |
| Leader VM | Follower VM | 8443 | Two-way | This port is needed if leader-follower is enabled. For information, see Configure a Leader-Follower Service Plan. The default ports are not configurable. |
| ODB | MySQL service instances | 3306 | One-way | This connection is for administrative tasks. Avoid opening general use, app-specific ports for this connection. |
| ODB | PAS or Elastic Runtime | 8443 | One-way | The default port is not configurable. |
| PAS or Elastic Runtime | ODB | 8080 | One-way | This port allows PAS or Elastic Runtime to communicate with the ODB component. |
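The security group mentioned in the table (allowing deployed apps to reach MySQL instances on port 3306) can be created with the cf CLI. A sketch, assuming a hypothetical service network range of `10.0.8.0/24`; substitute your own CIDR:

```shell
# rules.json: allow app containers to reach MySQL instances on 3306.
# The destination CIDR below is a placeholder for your service network.
cat > rules.json <<'EOF'
[{ "protocol": "tcp", "destination": "10.0.8.0/24", "ports": "3306" }]
EOF

cf create-security-group mysql-service rules.json
cf bind-running-security-group mysql-service   # applies to all running apps
```

Binding the group at the running stage applies it foundation-wide; you can instead bind it to individual spaces if you want tighter scoping.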
This section describes the defaults that the MySQL for PCF tile applies to its Percona Server components. Operators can also customize other components.
Max Connections: 750
All service instances accept up to 750 connections. System processes count towards this limit.
Max Allowed Packet: 256 MB
MySQL for PCF allows blobs up to 256 MB in size. If necessary, an administrator can raise this limit by setting the global variable; note that in MySQL the session value of this variable is read-only.
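A client can verify the effective limit before attempting a large insert. A sketch (the value is reported in bytes):

```sql
-- 268435456 bytes = 256 MB
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raising it server-wide requires admin privileges; the session
-- value is read-only in MySQL, so SET SESSION would fail here.
SET GLOBAL max_allowed_packet = 536870912;  -- 512 MB
```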
Table Definition Cache: 8192
For more information about updating this variable, see the MySQL documentation.
Reverse Name Resolution Off
MySQL for PCF disables reverse DNS lookups to improve performance. When reverse name resolution is enabled, the MySQL servers perform a reverse DNS lookup on each new connection. Typically, MySQL restricts access by hostname, but MySQL for PCF uses user credentials, not hostnames, to authenticate access. Because of this, most deployments do not need reverse DNS lookups. To enable reverse name resolution, clear this option.
Skip Symbolic Links
MySQL for PCF is configured to prevent the use of symlinks to tables. This recommended security setting prevents users from manipulating files on the server’s file system.
MyISAM Recover Options: BACKUP, FORCE
This setting enables MySQL for PCF to recover from most MyISAM problems without human intervention. For more information, see the MySQL documentation.
Log Bin Trust Function Creators: ON
This setting relaxes certain constraints on how MySQL writes stored procedures to the binary log. For more information, see the MySQL documentation.
Event Scheduler: ON
MySQL for PCF enables the event scheduler so users can create and utilize events in their dedicated service instances.
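As an illustration (the table name and schedule are hypothetical), a developer could define a recurring cleanup event in their instance:

```sql
-- Hypothetical example: purge rows older than 7 days, once per day.
-- Requires the EVENT privilege on the schema.
CREATE EVENT purge_old_sessions
  ON SCHEDULE EVERY 1 DAY
  DO DELETE FROM sessions WHERE created_at < NOW() - INTERVAL 7 DAY;
```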
Lower Case Table Names: ON
Audit Log: OFF
The MySQL audit log is off by default. When enabled on the MySQL Monitoring page, logs are written as CSV files to /var/vcap/sys/log/mysql/mysql_audit_log, as well as to a remote syslog drain if one is enabled.
InnoDB Buffer Pool Size
Dynamically configured to be 50% of the available memory on each service instance.
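The sizing rule is simple: on a VM with M MB of RAM, the buffer pool gets roughly M/2. A small shell sketch of that arithmetic (illustrative only; the tile computes this itself at deploy time):

```shell
# Read total memory from /proc/meminfo (Linux) and take 50%,
# mirroring the tile's buffer-pool sizing rule.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb / 2 / 1024 ))
echo "innodb_buffer_pool_size would be about ${pool_mb} MB"
```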
InnoDB Log File Size: 256 MB
MySQL for PCF clusters default to a log-file size of 256 MB.
InnoDB Log Buffer Size: 32 MB
MySQL for PCF defaults to 32 MB to avoid excessive disk I/O when issuing large transactions.
InnoDB Auto Increment Lock Mode: 2
Auto Increment uses “interleaved” mode. This enables multiple statements to execute at the same time. There may be gaps in auto-incrementing columns.
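In interleaved mode, concurrent inserts are not serialized per statement, so auto-increment values are monotonic but not necessarily consecutive. A sketch of what to expect (the table is hypothetical); applications should treat these values as opaque identifiers:

```sql
-- Confirm the lock mode (2 = interleaved).
SELECT @@innodb_autoinc_lock_mode;

-- With concurrent bulk inserts into a table like this, id values
-- may skip numbers; do not rely on them being gap-free.
CREATE TABLE events (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  payload TEXT
);
```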
Collation Server: UTF8 General CI
MySQL for PCF defaults the collation server to utf8_general_ci. You can override this during a session.
Character Set: UTF8
MySQL for PCF defaults all character sets to utf8. You can override this during a session.
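The per-session override looks like this (a sketch; the alternative character set and collation shown are examples):

```sql
-- Switch this session to a different character set and collation;
-- the server defaults (utf8 / utf8_general_ci) are unchanged.
SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci;
```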