Using Rolling Schema Upgrade (HA Cluster)
This topic describes how to run a Rolling Schema Upgrade (RSU).
RSU is a method for processing schema upgrades. During a RSU, nodes are desynchronized from your highly available (HA) cluster so that schema upgrades do not block the rest of the nodes. Operators can then manually apply data definition language (DDL) statements to the desynchronized nodes individually. If you want to run a RSU, contact your operator and ask them to do the procedures in this topic.
For more information about RSU, see wsrep_OSU_method in the Percona XtraDB Cluster documentation.
If you are modifying a large number of rows, you must use a RSU. Large transactions fail under Total Order Isolation (TOI). For more information about TOI, see wsrep_OSU_method in the Percona XtraDB Cluster documentation.
Pivotal recommends that developers run migrations in a test environment to gauge how long the production DDL takes to apply. If this is longer than the HA cluster can be unavailable, contact your operator to do the following procedure.
Note: DDL statements are applied to nodes one at a time. Because of this, the statements must be backwards compatible with the previous schema.
Before you run a RSU, you must have the following:
- A generated SQL file containing the data imports and MySQL commands you want to run.
- An ephemeral disk with enough space for your largest IBD data file. See Check Ephemeral Disk Size.
Running a RSU on a large table can fill up your ephemeral disk. MySQL for PCF cannot function properly if there is not sufficient free space on file systems. Before running a RSU, confirm that your ephemeral disk is large enough to run the upgrade.
To check the size of your ephemeral disk, tell your operator to do the following:
To SSH into a node, follow the procedure in BOSH SSH.
To view your largest data files, change to your MySQL data directory.
To retrieve the size of your largest IBD file, run the following command from that directory:
ls -lahS
Record the size of the largest IBD file.
To retrieve your ephemeral disk usage, do the procedure in Run mysql-diag Using the BOSH Command Line Interface (CLI) and record the size of the ephemeral disk being used.
If there is not enough free space to add the largest IBD file to the ephemeral disk, update your service plan to one with a larger ephemeral disk.
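The check above can also be scripted. The following is a minimal sketch, not part of the product: the `check_rsu_headroom` function name is hypothetical, and the data directory and ephemeral mount point passed to it are assumptions based on a typical MySQL for PCF layout.

```shell
# Hypothetical helper: compare the largest ibd file against free ephemeral space.
# $1 = MySQL data directory, $2 = ephemeral disk mount point (both assumptions).
check_rsu_headroom() {
  largest_kb=$(find "$1" -name '*.ibd' -exec du -k {} + 2>/dev/null |
    sort -rn | head -1 | awk '{print $1}')
  free_kb=$(df -Pk "$2" | awk 'NR==2 {print $4}')
  if [ "${largest_kb:-0}" -lt "${free_kb:-0}" ]; then
    echo "OK: ephemeral disk has room for the largest ibd file"
  else
    echo "WARNING: ephemeral disk may fill up during the RSU"
  fi
}

# Example (run on a MySQL node):
# check_rsu_headroom /var/vcap/store/pxc-mysql /var/vcap/data
```

This only compares the single largest IBD file against free space; leave extra headroom if your upgrade touches several large tables.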
To run a RSU, tell your operator to do the following:
- Retrieve the ID for each node in your HA cluster. See Retrieve Node ID.
- Run the RSU on each node in your deployment. See Apply a New Schema.
You need the ID for each of your nodes to run a RSU.
To retrieve the IDs for your nodes, do the following:
To retrieve your service instance GUID, run the following command:
cf service MY-INSTANCE --guid
MY-INSTANCE is the name of your MySQL for PCF service instance. Record the GUID returned in the output.
To identify the nodes in your HA cluster, run the following command:
bosh -d service-instance_GUID vms
GUID is the GUID you recorded in the previous step.
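Because the BOSH deployment name for a service instance follows the service-instance_GUID pattern shown above, the two steps can be scripted together. A minimal sketch, where `deployment_name` is a hypothetical helper:

```shell
# Hypothetical helper: build the BOSH deployment name from a service instance GUID.
deployment_name() {
  echo "service-instance_$1"
}

# Example (GUID comes from: cf service MY-INSTANCE --guid):
# bosh -d "$(deployment_name "$GUID")" vms
```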
To run a RSU to apply a new schema on each node in your deployment, do the following:
To BOSH SSH into your node, do the procedure in BOSH SSH in the Pivotal Cloud Foundry documentation.
To enable global read_only as an admin user, run the following command:
mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "set global read_only=on;"
To verify that the node is unhealthy, do the procedure in Monitoring Node Health.
Add the following statements to the SQL file you generated:
- set wsrep_osu_method=rsu; at the top of the import file.
- set global wsrep_desync=on; at the top of the import file.
- set global wsrep_desync=off; at the bottom of the import file.
For example, your import file looks like the following:
set wsrep_osu_method=rsu;
set global wsrep_desync=on;
...
set global wsrep_desync=off;
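The wrapping described above can also be scripted against an existing DDL file. The following sketch uses a hypothetical `wrap_for_rsu` helper that prepends and appends the wsrep statements:

```shell
# Hypothetical helper: wrap a DDL file with the RSU wsrep statements.
# $1 = input DDL file, $2 = output import file.
wrap_for_rsu() {
  {
    echo "set wsrep_osu_method=rsu;"
    echo "set global wsrep_desync=on;"
    cat "$1"
    echo "set global wsrep_desync=off;"
  } > "$2"
}
```

Run the resulting import file with the mysql command in the next step.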
To execute the SQL file, run the following command:
mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -D DATABASE-NAME < SQL-FILE-PATH
DATABASE-NAME is the name of the database to use.
SQL-FILE-PATH is the path to your import file.
To disable global read_only, run the following command:
mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf -e "set global read_only=off;"
To verify that the node is healthy, do the procedure in Monitoring Node Health.
Repeat the above procedure for each remaining node in your deployment.
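The per-node sequence above, enabling read_only, running the import file, and disabling read_only, can be sketched as a single function. This is a sketch only: `apply_rsu_on_node` is a hypothetical name, and the MYSQL variable defaults to the admin login file used elsewhere in this topic.

```shell
# Hypothetical helper: apply the RSU import file on the current node.
# $1 = database name, $2 = path to the generated import file.
apply_rsu_on_node() {
  # MYSQL can be overridden for testing; default is the admin login file.
  cmd="${MYSQL:-mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf}"
  $cmd -e "set global read_only=on;"
  # Verify node health here before continuing (see Monitoring Node Health).
  $cmd -D "$1" < "$2"
  $cmd -e "set global read_only=off;"
}
```

Run this on one node at a time, confirming node health between nodes as described above.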
At this point your HA cluster should be stable and running with the new schema.