
MySQL for Pivotal Cloud Foundry

This is documentation for MySQL for Pivotal Cloud Foundry (PCF).

Product Snapshot

The following table provides version and version-support information about MySQL for PCF:

Element                                 Details
Version                                 v1.8.10
Release date                            June 22, 2017
Software component versions             MariaDB v10.1.18, Galera v25.3.17
Compatible Ops Manager version(s)       v1.8.x, v1.9.x, v1.10.x
Compatible Elastic Runtime version(s)   v1.8.x, v1.9.x, v1.10.x
IaaS support                            AWS, Azure, OpenStack, and vSphere
IPsec support                           Yes

Upgrading to the Latest Version

Consider the following compatibility information before upgrading MySQL.

For more information, refer to the full Product Compatibility Matrix.

For Ops Manager v1.8.x, v1.9.x, and v1.10.x, the following upgrades from an imported MySQL installation are supported:

  • From v1.6.1 – v1.6.25: upgrade to the next v1.6.x release (up to v1.6.26), to v1.7.0 – v1.7.30, or to v1.8.9
  • From v1.7.0 – v1.7.29: upgrade to the next v1.7.x release (up to v1.7.30), or to v1.8.9
  • From v1.8.0 – v1.8.8: upgrade to v1.8.9

Release Notes

Consult the Release Notes for information about changes between versions of this product.

Overview

The MySQL for PCF product delivers a fully managed “Database as a Service” to Cloud Foundry users. When installed, the tile deploys and maintains a single-node or three-node cluster running a recent release of MariaDB, SQL Proxies for super-fast failover, and Service Brokers for Cloud Foundry integration. We work hard to ship the service configured with sane defaults, following the principle of least surprise for a general-use relational database service.

Once installed, developers can attach a database to their applications with as few as two commands: cf create-service and cf bind-service. Connection credentials are automatically provided in the standard manner. Developers can select from a menu of service plan options, which are configured by the platform operator.

Two configurations are supported:

Feature                    Single   Highly Available
MySQL                      1 node   3-node cluster
SQL Proxy                  1 node   2 nodes
Service Broker             1 node   2 nodes
High Availability          -        Yes
Multi-AZ Support           -        Yes *
Rolling Upgrades           -        Yes
Automated Backups          Yes      Yes
Customizable Plans         Yes      Yes
Customizable VM Instances  Yes      Yes
Plan Migrations            Yes      Yes
Encrypted Communication    Yes ✝    Yes ✝
Encrypted Data at-rest     -        -
Long-lived Canaries        -        Replication Canary

(*) vSphere and AWS (v1.8.0-edge.15 and later)
(✝) Requires the IPsec BOSH plug-in

Limitations

When deployed in the HA configuration, MySQL for PCF uses multiple master nodes. This topology introduces some differences, described below, that make accessing the service different from a traditional single MySQL database server.

  • Single and three-node clusters are the only supported topologies. Ops Manager allows the operator to set the number of instances to other values, but only one and three are advised. See the note in the Cluster Behavior document.
  • Although two Proxy instances are deployed by default, there is no automation to direct clients from one to the other. See the note in the Proxy section, as well as the entry in Known Issues.
  • Only the InnoDB storage engine is supported; it is the default storage engine for new tables. Attempted use of other storage engines (including MyISAM) may result in data loss.
  • All databases are managed by shared, multi-tenant server processes. Although data is securely isolated between tenants using unique credentials, application performance may be impacted by noisy neighbors.
  • Round-trip latency between database nodes must be less than five seconds; if the latency is higher than this, nodes will become partitioned. If more than half of cluster nodes are partitioned, the cluster will lose quorum and become unusable until manually bootstrapped.
  • See also the list of Known Limitations in MariaDB cluster.

Known Issues

Consult the Known Issues topic for information about issues in current releases of MySQL for PCF.

Planning your Deployment

Network Layout

MySQL for PCF supports deployment to multiple availability zones (AZs) on vSphere only. For other infrastructures, make sure to specify only one AZ.

For best uptime, deploy a load balancer in front of the SQL Proxy nodes; see the note in the proxy section below. When configuring this load balancer, increase the minimum idle timeout if possible, as many load balancers are tuned for short-lived connections that are unsuitable for long-running database queries. See the known issue for more details.

The table below is for deployments in which the MySQL service runs on a different network than Elastic Runtime. Refer to it when configuring firewall rules to allow the inbound and outbound traffic required by the MySQL service.

Type          Listening service    TCP Port
Inbound/TCP   Service broker       8081
Inbound/TCP   SQL proxy            3306
Inbound/TCP   Proxy health check   1936
Inbound/TCP   Proxy API            8080
Outbound/TCP  NATS                 4222
Internal/TCP  MySQL server         3306
Internal/TCP  Galera               4567
Internal/TCP  Galera health check  9200
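
For example, on an IaaS where you manage firewall rules directly, rules matching the table above might look like the following. This is a minimal iptables sketch; the Elastic Runtime CIDR (10.0.0.0/24) and NATS address (10.0.1.10) are hypothetical and must be replaced with values from your own network layout.

# Inbound from the Elastic Runtime network to the MySQL service network
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 8081 -j ACCEPT  # service broker
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 3306 -j ACCEPT  # SQL proxy
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 1936 -j ACCEPT  # proxy health check
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 8080 -j ACCEPT  # proxy API

# Outbound from the MySQL service network to NATS
iptables -A OUTPUT -p tcp -d 10.0.1.10 --dport 4222 -j ACCEPT

The internal ports (MySQL 3306, Galera 4567, Galera health check 9200) only need to be open between the MySQL service VMs themselves.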

Application Security Groups

You must create an Application Security Group (ASG) for MySQL for PCF in order for applications to access the service. See Creating Application Security Groups for MySQL for instructions.

Note: The service will not be usable until an ASG is in place.
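
For example, an ASG that lets application containers reach the SQL proxy on port 3306 could be created as follows. This is a sketch: the destination CIDR 10.0.16.0/24 is hypothetical and should be the network range of your MySQL service deployment.

$ cat > p-mysql-rules.json <<EOF
[
  { "protocol": "tcp", "destination": "10.0.16.0/24", "ports": "3306" }
]
EOF
$ cf create-security-group p-mysql-asg p-mysql-rules.json
$ cf bind-running-security-group p-mysql-asg

Note that a newly bound running security group takes effect only after applications are restarted.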

Instance Capacity

An operator can configure how many database instances can be provisioned (instance capacity) by configuring the amount of persistent disk allocated to the MySQL server nodes. The broker will provision a requested database if there is sufficient unreserved persistent disk. This can be managed using the Persistent Disk field for the MySQL Server job in the Resource Config setting page in Operations Manager. Not all persistent disk will be available for instance capacity; about 2-3 GB is reserved for service operation.

In determining how much persistent disk to make available for databases, operators should also consider that MariaDB servers require sufficient CPU, RAM, and IOPS to promptly respond to client requests for all databases. The MariaDB cluster nodes are configured by default with 100 GB of persistent disk. The deployment will fail if this is less than 3 GB; we recommend allocating at least 10 GB.
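
As a rough, hypothetical illustration of instance capacity: with the default 100 GB of persistent disk and roughly 2 GB reserved for service operation, about 98 GB remains as instance capacity. Against a plan with a 100 MB storage quota, the broker could reserve space for roughly 980 database instances; against a 1 GB (1024 MB) plan, roughly 98.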

Installation

  1. Download the product file from Pivotal Network.

  2. Navigate to the Ops Manager Installation Dashboard.

  3. Click Import a Product to upload the product file to your Ops Manager installation.

  4. Click Add next to the uploaded product description in the Available Products view to add this product to your staging area.

  5. Click the newly added tile to review configurable Settings.

  6. Click Apply Changes to deploy the service.

Settings

Service Plans

Starting with v1.8.0 of MySQL for PCF, operators can configure multiple service plans. To add or delete plans, follow the instructions below.

Note: If you are upgrading from an earlier version of MySQL for PCF, the upgrade process cannot retain the original name of the plan. Upgrading from a version of MySQL for PCF that offered only a single plan causes the plan to be renamed: regardless of the previous plan’s name (for example, “100mb-dev”), the plan will now be named pre-existing-plan. To retain the original plan name, edit the name before clicking Apply Changes to upgrade to MySQL for PCF v1.8.0.

Update Existing Service Instances

You can update service instances using the cf CLI as follows:

cf update-service SERVICE_INSTANCE -p NEW_PLAN

The following rules apply when updating a service instance plan:

  • Updating a service instance to a plan with a larger max_storage_mb is always supported.
  • Updating a service instance to a plan with a smaller max_storage_mb is supported only if the current usage is less than the new value.
  • The update command fails if the current usage is greater than the new value.
  • Updating a service instance does not disrupt running application connections so long as the application uses fewer than the maximum number of connections allowed by the new plan.
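
For example, to move an instance to a larger plan (the instance and plan names here are hypothetical):

$ cf update-service mydb -p 1024mb-dev

If the instance’s current storage usage were greater than the new plan’s max_storage_mb, this command would fail and the instance would remain on its original plan.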

Add a Plan

  1. Navigate to the Ops Manager Installation Dashboard and click MySQL for Pivotal Cloud Foundry.
  2. Click Service Plans.
  3. Click Add to add a new service plan. Click the small triangles to expand or collapse a plan’s details.
  4. Complete the following fields:
    • Service Plan name: Plan names may include only lowercase letters, numbers, hyphens, and underscores. Developers use the name of the service plan to create service instances in Apps Manager and the cf CLI.

      Note: PCF enforces plan name constraints only at deploy time. The form does not yet allow for real-time validation. If a plan name does not conform to the correct format, the tile fails to deploy after you click Apply Changes. The error appears in the Recent Install Logs dropdown in the following format: Error 100: Error filling in template 'settings.yml.erb' for 'cf-mysql-broker-partition-20d9770a220f749796c2/0' (line 40: Plan name 'ONE HUNDRED MEGA BYTES!!!' must only contain lowercase letters, numbers, hyphen(-), or underscore(_).)

    • Description: The descriptive text that accompanies the plan name. Use this to provide context beyond what can be expressed by the plan name. For example, “general use, small footprint.”
    • Storage Quota: The maximum amount of storage, in megabytes, allowed for each instance of the Service Plan.
    • Concurrent Connections Quota: The maximum number of simultaneous database connections allowed to each instance of the Service Plan.
    • Private: By default, all plans are published to all organizations. Selecting Not available by default means that the operator must publish plans manually using cf enable-service-access.

      Note: If you have previously marked a plan as public, and later decide you would like this plan to be private, you need to run cf disable-service-access manually for each of your Organizations.

Note: You cannot deploy MySQL for PCF with zero service plans. One plan, at the minimum, is required. If you want to deploy MySQL for PCF without offering any plans, mark the plan as private and do not enable access to any organizations.
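
For reference, a private plan is published to (or withdrawn from) an organization with the cf CLI as follows; the plan and org names are placeholders:

$ cf enable-service-access p-mysql -p PLAN-NAME -o ORG-NAME
$ cf disable-service-access p-mysql -p PLAN-NAME -o ORG-NAME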

Deleting a Plan

  1. Navigate to the Ops Manager Installation Dashboard and click the MySQL for Pivotal Cloud Foundry tile.
  2. Click Service Plans.
  3. Click the corresponding trash can icon to delete a plan.

    Note: If you accidentally click the trash can, do not click Save. Instead, return to the Installation Dashboard and any accidental changes will be discarded. If you do click Save, do not click Apply Changes on the Installation Dashboard. Instead, click Revert to discard any accidental changes.

  4. Click Save.
  5. Click Apply Changes from the Installation Dashboard.

If no service instances of the deleted plan exist, the plan will disappear from the Services Marketplace.

If service instances of the deleted plan exist, they will continue to be maintained, but developers cannot create new instances. The service plan will continue to display in the Services Marketplace, marked as “inactive.” Once all service plan instances are deleted, the operator can remove the plan from the Services Marketplace by following these steps:

  1. Run bosh deployments to find the full name of the MySQL for PCF deployment. For example, p-mysql-180290d67d5441ebf3c5.
  2. Run bosh deployment P-MYSQL-DEPLOYMENT-NAME. For example, bosh deployment p-mysql-180290d67d5441ebf3c5.
  3. Run bosh run errand broker-registrar.

Service Plans before MySQL for PCF v1.8.0

In v1.7 and earlier, the product can offer only one service plan at a time.

A single service plan enforces quotas of 100 megabytes of storage per database and 40 concurrent connections per user by default. Users of Ops Manager can configure these plan quotas. Changes to quotas will apply to all existing database instances as well as new instances. In calculating storage utilization, indexes are included along with raw tabular data.

The name of the plan is 100mb-dev by default and is automatically updated if the storage quota is modified. Thus, if the storage quota is changed to 1024 megabytes, the new default plan name will be 1024mb-dev.

Provisioning a service instance from this plan creates a MySQL database on a multi-tenant server, suitable for development workloads. Binding applications to the instance creates unique credentials for each application to access the database.

Note: After changing a plan’s definition, all instances of the plan must be updated. For each plan, either the operator or the user must run cf update-service SERVICE_INSTANCE -p NEW_PLAN_NAME on the command line.

Note: Changing a plan’s definition does not work properly in versions of MySQL for PCF v1.6.3 and earlier. See the entry in Known Issues for the recommended workaround.

Options and Features

In the Advanced Options pane, you can change the configuration of the following features:

  • Disable Reverse DNS lookups

    This option is enabled by default and improves performance. Un-checking it causes the MySQL servers to perform a reverse DNS lookup on each new connection. Reverse DNS lookups are only necessary when restricting access by hostname, which is not required in typical MySQL for PCF installations.

  • Read-Only User Password

    Activates roadmin, a special read-only administrator user. Supply a password to be used only by administrators who require the ability to view all of the data maintained by the MySQL for PCF installation. Leaving the field blank deactivates the read-only user.

  • The Replication Canary; see the monitoring documentation.

  • The Interruptor; see the monitoring documentation.

  • Quota Enforcer Frequency

    By default, the Quota Enforcer polls for quota violators and reformers every 30 seconds. This setting, in seconds, changes how long the Quota Enforcer pauses between checks. If you wish to reduce the small amount of load caused by the Quota Enforcer’s polling, you may increase this time period. Be aware, however, that increasing the duration also increases the window in which applications can write more data than their plan allows.

  • MySQL Start Timeout

    The maximum amount of time allowed for the MySQL process to start, in seconds. When the MySQL server processes are restarted, there are conditions under which a process takes longer than expected to appear as running. This can cause parts of the system automation to assume that the process has failed to start properly, and the job will appear as failing in Ops Manager and BOSH output. Depending on the data stored by the database and the time represented in logs, it may be necessary to increase this beyond the default of 60 seconds.

  • New Cluster Probe Timeout

    There is special logic to detect whether a starting node is the first node of a new installation or part of an existing cluster. Part of this logic is to probe whether the other nodes of the cluster have already been deployed. The probe timeout is set to 30 seconds by default; in high-latency environments, the probes may take long enough that the MySQL Start Timeout fails the deployment. To account for this, either increase the MySQL Start Timeout or lower the New Cluster Probe Timeout so that a new node does not spend too long performing this test.

  • Replication Debug logging

    By default, the MySQL service will log events related to replication errors. Only turn off this option if error logging is causing undue strain on your logging systems.

  • Server Activity Logging

    The MySQL service includes the MariaDB Audit plugin. You can disable this plugin, or configure which events are recorded. The log can be found at /var/vcap/store/mysql_audit_logs/mysql_server_audit.log on each VM. When enabled, the file is rotated every 100 megabytes, and 30 of the most recent files are retained.

    Note: Due to the sensitive nature of these logs, they are not transmitted to the syslog server.
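
    To inspect the audit log on a server VM, you can use the BOSH CLI; a sketch, assuming a MySQL server job addressed as mysql/0 (the exact job name in your deployment may differ):

    $ bosh ssh mysql/0
    $ sudo tail -f /var/vcap/store/mysql_audit_logs/mysql_server_audit.log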

Proxy

The proxy tier is responsible for routing connections from applications to healthy MariaDB cluster nodes, even in the event of node failure.

Applications are provided with a hostname or IP address to reach a database managed by the service. For more information, see Application Binding. By default, the MySQL service will provide bound applications with the IP of the first instance in the proxy tier. Even if additional proxy instances are deployed, client connections will not be routed through them. This means the first proxy instance is a single point of failure.

Note: To eliminate the first proxy instance as a single point of failure, operators must configure a load balancer to route client connections to all proxy IPs, and configure the MySQL service to give bound applications a hostname or IP address that resolves to the load balancer.

Configuring a Load Balancer

In older versions of the product, applications were given the IP of the single MySQL server in bind credentials. When upgrading to v1.5.0, existing applications will continue to function, but, to take advantage of high availability features, they must be rebound to receive either the IP of the first proxy instance or the IP/hostname of a load balancer.

To let you configure a load balancer with the IPs of the proxy tier before v1.5.0 is deployed, and to prevent applications from obtaining the IP of the first proxy instance, the product enables an operator to configure the IPs that will be assigned to proxy instances. The following instructions apply to the Proxy settings page for the MySQL product in Ops Manager.

  • In the Proxy IPs field, enter a list of IP addresses that should be assigned to the proxy instances. These IP addresses must be in the CIDR range configured in the Director tile and not be currently allocated to another VM. Look at the Status pages of other tiles to see what IP addresses are in use.

  • In the Binding Credentials Hostname field, enter the hostname or IP address that should be given to bound applications for connecting to databases managed by the service. This hostname or IP address should resolve to your load balancer and be considered long-lived. When this field is modified, applications must be rebound to receive updated credentials.

Configure your load balancer to route connections for a hostname or IP to the proxy IPs. Because proxy instances are not synchronized, we recommend configuring your load balancer to send all traffic to one proxy instance at a time until it fails, then fail over to another proxy instance. For details, see the known issue on proxy behavior. Additionally, increase the idle timeout of your load balancer to accommodate long-running queries. For more information, see the known issue on load balancer timeouts.

Important: To configure your load balancer with a healthcheck or monitor, use TCP against port 1936. Unauthenticated healthchecks against port 3306 will cause the service to become unavailable, and will require manual intervention to fix.
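
You can check the health check endpoint by hand before wiring it into the load balancer; a sketch, assuming a proxy VM at the hypothetical address 10.0.16.31. A healthy proxy is expected to respond on port 1936:

$ curl -i http://10.0.16.31:1936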

Adding a Load Balancer after an Initial Deploy

If v1.5.0 is initially deployed without a load balancer and without proxy IPs configured, a load balancer can be set up later to remove the proxy as a single point of failure. However, there are several implications to consider:

  • Applications will have to be rebound to receive the hostname or IP that resolves to the load balancer. To rebind: unbind your application from the service instance, bind it again, then restage your application. For more information, see Managing Service Instances with the CLI. In order to avoid unnecessary rebinding, we recommend configuring a load balancer before deploying v1.5.0.
  • Instead of configuring the proxy IPs in Operations Manager, use the IPs that were dynamically assigned, which you can find on the Status page. Configuring proxy IPs after the product has been deployed with dynamically assigned IPs is not well supported; see Known Issues.

Lifecycle Errands

Two lifecycle errands are run by default: the broker registrar and the smoke test. The broker registrar errand registers the broker with the Cloud Controller and makes the service plan public. The smoke test errand runs basic tests to validate that service instances can be created and deleted, and that applications pushed to Elastic Runtime can be bound and write to MySQL service instances. Both errands can be turned on or off on the Lifecycle Errands page under the Settings tab.

Note: You might also notice a broker-deregistrar errand. Do not run this errand unless instructed to do so by Support. Broker-deregistrar is a part of the automation used by Ops Manager while deleting a tile. Running this errand under any other circumstance will delete user data.

Provisioning and Binding via Cloud Foundry

As part of installation, the product is automatically registered with Pivotal Cloud Foundry Elastic Runtime (see Lifecycle Errands). On successful installation, the MySQL service is available to application developers in the Services Marketplace, via the web-based Developer Console or cf marketplace. Developers can then provision instances of the service and bind them to their applications:

$ cf create-service p-mysql 100mb-dev mydb
$ cf bind-service myapp mydb
$ cf restart myapp

For more information about the use of services, see the Services Overview.

Example Application

To help application developers get started with MySQL for PCF, we have provided an example application, which can be downloaded here. Instructions can be found in the included README.

Service Instance Dashboard

Developers can check their current storage usage and service plan quota in the service instance dashboard. You can access this dashboard by either navigating to it from Apps Manager or obtaining its URL from the Cloud Foundry Command-Line Interface (cf CLI):

  • From Apps Manager
    1. Select the space that the service instance runs in.
    2. Select the Services tab.
    3. Under Services click the service instance to check.
    4. Click Manage at top right to open the service instance dashboard.
  • From the cf CLI
    1. Log into the space that the service instance runs in.
    2. Run cf service INSTANCE-NAME:
      $ cf service acceptDB
      Service instance: acceptDB
      Service: p-mysql
      Plan: 100mb-dev
      Description: MySQL service for application development and testing
      Documentation url:
      Dashboard: https://p-mysql.sys.acceptance.cf-app.example.com/manage/instances/ddfa6842-b308-4983-a544-50b3d1fb62f0
      
    3. Navigate to the URL listed in the output as Dashboard. In the example above, the instance dashboard URL is https://p-mysql.sys.acceptance.cf-app.example.com/manage/instances/ddfa6842-b308-4983-a544-50b3d1fb62f0.

The MySQL for PCF service instance dashboard shows how much storage the instance currently uses and the maximum usage allowed by its service plan. It does not show or manage database contents. You can think of it as a gas gauge, while the Pivotal MySQLWeb database management app, described below, provides an interface through which you can drive.

Note: The service instance dashboard is distinct from the proxy dashboard that PCF operators can access to check the proxy instances handling queries for all MySQL for PCF service instances in a PCF deployment.

Pivotal MySQLWeb Database Management App

The Pivotal MySQLWeb app provides a web-based UI for managing MySQL for PCF databases. The free app lets you view and operate on tables, indexes, constraints, and other database structures, and directly execute SQL commands.

Pivotal MySQLWeb

You can run the Pivotal MySQLWeb app in two ways:

  • Standalone on your own machine
  • Deployed to PCF

If you deploy Pivotal MySQLWeb to PCF, you can configure it in the deployment manifest to automatically bind to a specific service instance.
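
For example, an application manifest that binds the app to a hypothetical service instance named mydb might look like this sketch (the application name and memory setting are illustrative):

applications:
- name: pivotal-mysqlweb
  memory: 1G
  services:
  - mydb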

For how to install and run Pivotal MySQLWeb, see the Pivotal MySQLWeb code repo and demo video.

Command-Line Interface MySQL Plugin

To connect to your MySQL for PCF databases from a command line, use the Cloud Foundry Command-Line Interface (cf CLI) MySQL plugin. The plugin lets you:

  • Inspect databases for debugging.
  • Manually adjust database schema or contents in development environments.
  • Dump and restore databases.

To install the cf CLI MySQL plugin, run the following:

$ cf install-plugin -r "CF-Community" mysql-plugin
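
Once the plugin is installed, it adds cf mysql and cf mysqldump commands. For example, to open an interactive session against a service instance and to dump its contents (the instance name mydb is hypothetical):

$ cf mysql mydb
$ cf mysqldump mydb > mydb-backup.sql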

For more information, see the cf-mysql-plugin repository.

Proxy Dashboard

The service provides a dashboard where administrators can observe health and metrics for each instance in the proxy tier. Metrics include the number of client connections routed to each backend database cluster node.

The dashboard for each proxy instance can be found at http://proxy-JOB-INDEX-p-mysql.SYSTEM-DOMAIN.

The job index starts at 0, so if you have two proxy instances deployed and your system domain is example.com, you would access the dashboards at http://proxy-0-p-mysql.example.com and http://proxy-1-p-mysql.example.com.

To access the proxy dashboard, you need basic auth credentials which you can find in the Credentials tab of the MySQL for PCF tile in Ops Manager.
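
For example, with those credentials you can fetch a proxy dashboard directly over HTTP (the credentials and domain here are placeholders):

$ curl -u USERNAME:PASSWORD http://proxy-0-p-mysql.example.com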

For more information about Switchboard, see the proxy documentation.
