tisdag 19 maj 2020

Automate MySQL Partitioning using events and procedures


The purpose of this post is to show how you can automate the creation and pruning of partitioned tables. If you want to read more about partitioning I recommend reading our manual.

In short, partitioning makes it possible to distribute portions of individual tables across the file system according to the partitioning rules you specify.

Reasons for partitioning your tables might be:
- Insert performance, smaller index trees give stable insert throughput.
- Select performance, only read data from selected partitions (aka partition pruning).
- Delete performance, dropping a partition is much quicker than doing range deletes of old data.

Partitioning definitions are part of the CREATE/ALTER TABLE statements; in the samples below we will use RANGE partitioning on a DATE column in our table. This design is appropriate for log/transaction tables where you want to remove old data.

Let's first create a simple table called logs:

       
CREATE TABLE logs (
    id INT AUTO_INCREMENT NOT NULL,
    message VARCHAR(500) NOT NULL,
    code TINYINT UNSIGNED NOT NULL,
    created DATETIME DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id,created)
); 


Next we want to partition this table using the column created; we will use RANGE partitioning and partition the data per day like this:

       
ALTER TABLE logs PARTITION BY RANGE COLUMNS(created) (
    PARTITION p20200217 VALUES LESS THAN ('2020-02-17'),
    PARTITION p20200218 VALUES LESS THAN ('2020-02-18'),
    PARTITION p20200219 VALUES LESS THAN ('2020-02-19'),
    PARTITION p20200220 VALUES LESS THAN ('2020-02-20'),
    PARTITION p20200221 VALUES LESS THAN ('2020-02-21')
);
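
To check that the partition layout is in place, and that the optimizer actually prunes partitions for date-bounded queries, you can look at INFORMATION_SCHEMA.PARTITIONS and at the partitions column of EXPLAIN. A quick check, assuming the table lives in the schema ted that is used later in this post:

SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
  FROM INFORMATION_SCHEMA.PARTITIONS
  WHERE TABLE_SCHEMA = 'ted' AND TABLE_NAME = 'logs';

EXPLAIN SELECT * FROM logs
  WHERE created >= '2020-02-18' AND created < '2020-02-19';

The partitions column in the EXPLAIN output should show that only a subset of the partitions is read for the query above.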


After the initial setup of our partitioned table it's time to think about how to make sure we always have new partitions to insert data into, and also how to remove old partitions. You can of course do this manually using ALTER TABLE statements, but that becomes a tedious and error-prone task over time.

To solve this problem we can use events and stored procedures in MySQL. We will also have a helper table to store some metadata about the tables we want to automate partitioning for.

Let's first create a table that will contain some information about the tables we want to automate partition handling for.

       
CREATE TABLE IF NOT EXISTS `MyPartTables` (
  `schema_name` VARCHAR(100) NOT NULL,
  `table_name` VARCHAR(100) NOT NULL,
  `no_of_future_partitions` INT NOT NULL,
  `max_partitions` INT NOT NULL,
  PRIMARY KEY (`schema_name`,`table_name`)
);


By looking at the column names it's quite clear what they mean; you simply insert one row per table you want to automate partition handling for. For our example table above this would look something like:

       
INSERT INTO MyPartTables VALUES ('ted','logs',2,20);


So, for our logs table in schema ted this means: create 2 partitions into the future and keep a maximum of 20 partitions before starting to remove old ones.
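
If you want a quick overview of how the configured limits compare to the current state of each table, a query like the one below (just a helper sketch, not part of the original setup) joins MyPartTables with INFORMATION_SCHEMA.PARTITIONS:

SELECT t.schema_name, t.table_name, t.max_partitions,
       COUNT(p.PARTITION_NAME) AS current_partitions
  FROM MyPartTables t
  JOIN INFORMATION_SCHEMA.PARTITIONS p
    ON p.TABLE_SCHEMA = t.schema_name
   AND p.TABLE_NAME   = t.table_name
 GROUP BY t.schema_name, t.table_name, t.max_partitions;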

Next we need some stored procedures to handle the creation of new partitions and the removal of old ones. Both of these stored procedures use INFORMATION_SCHEMA.PARTITIONS to look at the current state and then add/remove partitions as specified by the information in MyPartTables for the specific table.

First let's have a look at AddPartitions. This procedure fetches all rows from MyPartTables and then adds any needed partitions. It is currently hard-coded to work only with RANGE partitioning per day, for tables that look like our test table logs above.

       
CREATE PROCEDURE `AddPartitions`()
BEGIN
  DECLARE done   BOOL DEFAULT FALSE;
  DECLARE maxp   INT DEFAULT 0;
  DECLARE pdate  INT DEFAULT 0;
  DECLARE pname  VARCHAR(20);    
  DECLARE v_table_name  VARCHAR(100);
  DECLARE v_schema_name  VARCHAR(100);
  DECLARE v_no_future_part INT;

  DECLARE list CURSOR FOR SELECT `table_name`,`schema_name`,`no_of_future_partitions` FROM MyPartTables;

  DECLARE CONTINUE HANDLER FOR
  SQLSTATE '02000'
  SET done = TRUE;

  OPEN list;
  tloop: LOOP
    IF done THEN
      CLOSE list;
      LEAVE tloop;
    END IF;

    FETCH list INTO v_table_name,v_schema_name,v_no_future_part;
    -- DEBUG SELECT v_table_name,v_schema_name,v_no_future_part;

    SET pdate=TO_DAYS(DATE(NOW() + INTERVAL (v_no_future_part) DAY));
    SELECT TO_DAYS(TRIM(BOTH "'" FROM MAX(PARTITION_DESCRIPTION))) INTO maxp
           FROM INFORMATION_SCHEMA.PARTITIONS
           WHERE TABLE_NAME=v_table_name
           AND TABLE_SCHEMA=v_schema_name
           AND PARTITION_DESCRIPTION!="MAXVALUE";
    -- DEBUG SELECT pdate, maxp;

    WHILE pdate > maxp DO
       SET maxp = maxp + 1;
       SET pname = CONCAT('p', DATE_FORMAT(FROM_DAYS(maxp), '%Y%m%d'));
       SET @qry = CONCAT('ALTER TABLE ',v_schema_name,'.',v_table_name, ' ADD PARTITION (PARTITION ',pname,' VALUES LESS THAN ("',DATE_FORMAT(FROM_DAYS(maxp),'%Y-%m-%d'),'"))');
       -- DEBUG SELECT @qry;
       PREPARE stmt FROM @qry;
       EXECUTE stmt;
       DEALLOCATE PREPARE stmt;
    END WHILE;
  END LOOP;
END ;;
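
Before scheduling anything you can run the procedure by hand and verify that new partitions show up for the logs table (two days ahead, given the row we inserted into MyPartTables):

CALL AddPartitions();

SELECT PARTITION_NAME, PARTITION_DESCRIPTION
  FROM INFORMATION_SCHEMA.PARTITIONS
  WHERE TABLE_SCHEMA = 'ted' AND TABLE_NAME = 'logs'
  ORDER BY PARTITION_DESCRIPTION;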

Next let's look at the procedure for removing old partitions. It is structured in the same way as the AddPartitions procedure above, but instead of adding partitions it removes any partitions above the limit given in MyPartTables.

       
CREATE PROCEDURE RemovePartitions ()
BEGIN
  DECLARE done BOOL DEFAULT FALSE;
  DECLARE v_table_name  VARCHAR(100);
  DECLARE v_schema_name  VARCHAR(100);
  DECLARE v_max_partitions INT;
  DECLARE v_no_of_partitions_to_remove INT;

  DECLARE cur CURSOR FOR SELECT `table_name`,`schema_name`,`max_partitions` FROM MyPartTables;

  DECLARE CONTINUE HANDLER FOR
  SQLSTATE '02000'
  SET done = TRUE;

  OPEN cur;

  tloop: LOOP
    FETCH cur INTO v_table_name,v_schema_name,v_max_partitions;
    IF done THEN
      CLOSE cur;
      LEAVE tloop;
    END IF;
    -- DEBUG SELECT v_table_name,v_schema_name,v_max_partitions;
    SET @qry = CONCAT('SELECT COUNT(DISTINCT(PARTITION_DESCRIPTION)) INTO @v_no_of_partitions ',
                     'FROM INFORMATION_SCHEMA.PARTITIONS ',
                     'WHERE TABLE_NAME="',v_table_name, '" ',
                     'AND TABLE_SCHEMA = "',v_schema_name, '" ');
    -- DEBUG SELECT @qry;
    PREPARE stmt FROM @qry;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    -- DEBUG SELECT @v_no_of_partitions, v_max_partitions;

    IF @v_no_of_partitions > v_max_partitions THEN
      SET v_no_of_partitions_to_remove = @v_no_of_partitions - v_max_partitions;
      SET @qry = CONCAT('SELECT GROUP_CONCAT(DISTINCT(PARTITION_NAME)) INTO @v_list_of_partitions_to_remove FROM ( ',
                            'SELECT PARTITION_NAME ',
                            'FROM INFORMATION_SCHEMA.PARTITIONS ',
                            'WHERE TABLE_NAME = "',v_table_name, '" ',
                            'AND TABLE_SCHEMA = "',v_schema_name, '" ',
                            'AND PARTITION_DESCRIPTION!="MAXVALUE" ',
                            'ORDER BY PARTITION_DESCRIPTION LIMIT ',v_no_of_partitions_to_remove,
                            ' ) as slabb');
      -- DEBUG SELECT @qry;
      -- DEBUG SELECT @v_list_of_partitions_to_remove;
      PREPARE stmt FROM @qry;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;

      SET @qry = CONCAT('ALTER TABLE ',v_schema_name,'.',v_table_name, ' DROP PARTITION ', @v_list_of_partitions_to_remove);
      -- DEBUG SELECT @qry;
      PREPARE stmt FROM @qry;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;
    END IF;
  END LOOP;
END ;;
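
You can test RemovePartitions by hand as well. With max_partitions set to 20 nothing is dropped until the logs table has more than 20 partitions, so for a quick test you can temporarily lower the limit (just a sketch, remember to set max_partitions back to its real value afterwards):

UPDATE MyPartTables SET max_partitions = 5
  WHERE schema_name = 'ted' AND table_name = 'logs';

CALL RemovePartitions();

SELECT COUNT(*) AS remaining_partitions
  FROM INFORMATION_SCHEMA.PARTITIONS
  WHERE TABLE_SCHEMA = 'ted' AND TABLE_NAME = 'logs';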


Now we only need to create two events that run our add and remove partition procedures above. As we are dealing with daily partitions it's enough to run the procedures once per day:

       
CREATE EVENT AddPartitions ON SCHEDULE EVERY 1 DAY
DO
  BEGIN
  CALL AddPartitions();
END ;;

CREATE EVENT RemovePartitions ON SCHEDULE EVERY 1 DAY
DO
  BEGIN
  CALL RemovePartitions();
END ;;
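
The events only run if the event scheduler is enabled (it is ON by default in MySQL 8.0, but OFF by default in older versions), so it is worth checking. SHOW EVENTS lists the events in the current schema:

SET GLOBAL event_scheduler = ON;
SHOW VARIABLES LIKE 'event_scheduler';
SHOW EVENTS;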


If you want to try this out there is a complete running example here.

This is just one example of how you can automate the handling of your partitioned tables in MySQL. The code in the stored procedures is hard-coded to only work with RANGE, daily-partitioned tables, but you can adapt it to work with other kinds of partitioning setups.

Having a metadata table like MyPartTables makes it easy to handle all of our partitioned tables.


söndag 10 maj 2020

Install MySQL Cluster Manager "MCM" using Systemd

For more information on how to create a service look at man systemd.service

Install

The directory for the MCM installation is /opt. Download the latest tar package of MCM from edelivery.oracle.com or support.oracle.com (today this is the package: MySQL Cluster Manager 1.4.8 TAR for Generic Linux x86 (64bit)).
Run commands below once you have downloaded the MCM package:
sudo su -
cd /opt
tar xzf /tmp/mcm-1.4.8-linux-glibc2.12-x86-64bit.tar.gz
mv mcm-1.4.8-linux-glibc2.12-x86-64bit mcm
cp /opt/mcm/mcm1.4.8/etc/mcmd.ini /opt/mcm/
mkdir /opt/mcm/mcm_data
Set manager-directory to the folder /opt/mcm/mcm_data in the configuration file:
sed -i 's/^##manager-directory.*/manager-directory = \/opt\/mcm\/mcm_data/' /opt/mcm/mcmd.ini
Structure should be:
 /opt/mcm
         mcmd.ini
         mcm1.4.8
         mcm_data

 Folder mcm_data is where all data on disk will be stored by default.
 Folder mcmN.N.N contains the installed version of MCM.

Systemd configuration

Create user to run the MCM service:
sudo useradd --no-create-home -s /bin/false mcm
sudo chown -R mcm:mcm /opt/mcm
Create systemd file:
Make sure to update the path to the correct version of the mcm binaries in the service file below.
Create the file /etc/systemd/system/mcm.service as shown below.
[Unit]
Description=MySQL Cluster Manager
Documentation=https://dev.mysql.com/doc/mysql-cluster-manager/en/
After=network-online.target

[Service]
User=mcm
Group=mcm
Restart=always
Type=simple

ExecStart=/opt/mcm/mcm1.4.8/bin/mcmd --defaults-file=/opt/mcm/mcmd.ini

[Install]
WantedBy=multi-user.target
Start and enable service:
sudo systemctl daemon-reload
sudo systemctl start mcm
sudo systemctl enable mcm
sudo systemctl status mcm
If the service does not start correctly, look in the messages file: sudo tail -150f /var/log/messages

Remember to set option StopOnError to 0 when creating your cluster so mcmd can restart failed data nodes (set StopOnError:ndbmtd=0 mycluster;).
You can find this and many other MySQL guides here: https://github.com/wwwted

Happy Clustering!

tisdag 5 maj 2020

Database change tracking using MySQL EE Audit

Use the MySQL Enterprise Audit plugin to track data changes on specific tables.
Scripts and documentation on github: https://github.com/wwwted/Database-change-tracking
Tracked data (audit logs) can then be inserted into a reporting database using the parser scripts.
Limitations:
  • Changing tracking rules stops all tracking for existing connections until they reconnect. This is a limitation in the MySQL EE Audit filter handling.
  • No filtering for different users, easy to implement if needed in change_tracking.sql SP (START/STOP).
  • Only tracking INSERT/UPDATE/DELETE statements. Easy to modify if needed.

1 Enable tracking on target server

1.1 Enable audit plugin

Add below to my.cnf for JSON format: loose_audit_log_format = JSON
Load audit plugin:
mysql -uroot -proot < ./mysqlsrc/share/audit_log_filter_linux_install.sql
mysql -uroot -proot -se"show plugins" | grep audit
Set the audit log to rotate on size (for testing, set it low so you get some files to read):
mysql> set persist audit_log_rotate_on_size=4096;

1.2 Install tracking script

mysql -uroot -proot < change_tracking.sql

2 Setup reporting server

The reporting server will host all audit logs from all sources and be the "reporting" database.

2.1 Add reporting schema and tables

mysql -umsandbox -pmsandbox -h127.0.0.1 -P8017 < report_schemas.sql

2.2 Import data into reporting database

Update the script with your database connection variables and the path to the audit files. Run the import: ./audit-parser-json.pl
Imported files are recorded in the table audit_information.audit_jobs.
Sample output:
mysql> select * from audit_information.audit_jobs;
+----+---------------------------+---------------------+-------------+
| ID | AUDIT_LOG_NAME            | PARSED_DATE_TIME    | LOG_ENTRIES |
+----+---------------------------+---------------------+-------------+
|  1 | audit.20200429T095031.log | 2020-04-29 11:51:21 |          27 |
|  2 | audit.20200429T102434.log | 2020-04-29 12:24:40 |          12 |
+----+---------------------------+---------------------+-------------+
Data is stored in the table audit_information.audit_data. The scripts do not insert all data from the audit logs, just some sample columns.
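
To see what the parser has loaded you can simply query the reporting tables directly; the exact columns in audit_data depend on what the parser script inserts, so start by inspecting the table:

mysql> DESCRIBE audit_information.audit_data;
mysql> SELECT * FROM audit_information.audit_data LIMIT 10;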

3 USAGE

  • Start tracking: CALL tracking.START_TRACKING("db","table");
  • List tracked tables: CALL tracking.LIST_TRACKING();
  • Stop tracking: CALL tracking.STOP_TRACKING("db","table");
Tracking uses the database tracking and one internal table, CHANGE_TRACKING.

4 Test

CREATE DATABASE test;
use test;
CREATE TABLE slafs (i INT);
CALL tracking.START_TRACKING("test","slafs");
--  (you need to log out and in again to have filter activated)
--  (filters are stored in select * from mysql.audit_log_filter;)
CALL tracking.LIST_TRACKING();
INSERT INTO test.slafs VALUES (6);
--  (run tail -f on audit-log)
CALL tracking.STOP_TRACKING("test","slafs");

Misc

Audit filters:
  • SELECT * FROM mysql.audit_log_filter;
  • SELECT * FROM mysql.audit_log_user;

måndag 17 februari 2020

Getting started with MySQL Cluster using MySQL Cluster Manager

Creating a MySQL NDB Cluster is really easy when using MySQL Cluster Manager aka "MCM".
In this blog post we will look at how to install and configure MCM and then create a new cluster using MCM commands.

Steps to install and set up MCM as a service are explained here.

Once you have completed the setup of MCM, the next step is to install MySQL NDB Cluster.
The first step is to download the latest Cluster 7.6 binaries from: https://dev.mysql.com/downloads/cluster/
(7.6.13 as of today)

cd /tmp
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.6/mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64.tar.gz

cd /opt
tar xzf /tmp/mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64.tar.gz
mv mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64 mysql-cluster-7.6.13
Now it's time to start the MCM client and create our cluster.
Start mcm: PATH=$PATH:/opt/mysql-cluster-7.6.13/bin /opt/mcm/mcm1.4.8/bin/mcm
(For future use I recommend that you set the $PATH variable:
export PATH=$PATH:/opt/mysql-cluster-7.6.13/bin:/opt/mcm/mcm1.4.8/bin)

Run commands below in mcm client (change basedir to folder with cluster binaries):
(for details on commands below look here)
create site --hosts=127.0.0.1 mysite;
add package --basedir=/opt/mysql-cluster-7.6.13 cluster7613;
create cluster --package=cluster7613 --processhosts=ndb_mgmd@127.0.0.1,ndbmtd@127.0.0.1,ndbmtd@127.0.0.1 mycluster;
add process --processhosts=mysqld@127.0.0.1,mysqld@127.0.0.1 mycluster;
add process --processhosts=ndbapi@127.0.0.1,ndbapi@127.0.0.1 mycluster;
add process --processhosts=ndbapi@127.0.0.1,ndbapi@127.0.0.1 mycluster;
set StopOnError:ndbmtd=0 mycluster;
set port:mysqld:50=3310 mycluster;
set port:mysqld:51=3311 mycluster;
start  cluster mycluster;
show status -r mycluster;
Output from last command "show status" should look like:
+--------+----------+------------+---------+-----------+-------------+
| NodeId | Process  | Host       | Status  | Nodegroup | Package     |
+--------+----------+------------+---------+-----------+-------------+
| 49     | ndb_mgmd | 127.0.0.1  | running |           | cluster7613 |
| 1      | ndbmtd   | 127.0.0.1  | running | 0         | cluster7613 |
| 2      | ndbmtd   | 127.0.0.1  | running | 0         | cluster7613 |
| 50     | mysqld   | 127.0.0.1  | running |           | cluster7613 |
| 51     | mysqld   | 127.0.0.1  | running |           | cluster7613 |
+--------+----------+------------+---------+-----------+-------------+

Congratulations, you have installed and created your first MySQL NDB Cluster!!
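
To verify that the SQL nodes really store data in the data nodes, you can connect to one of the mysqld processes (port 3310 or 3311 as configured above) and create a small NDB table. A minimal smoke test, where clustertest and t1 are just example names:

CREATE DATABASE IF NOT EXISTS clustertest;
CREATE TABLE clustertest.t1 (
  id INT NOT NULL PRIMARY KEY,
  data VARCHAR(100)
) ENGINE=NDBCLUSTER;
INSERT INTO clustertest.t1 VALUES (1, 'hello cluster');
SELECT * FROM clustertest.t1;

The same row should be visible when you connect to the other mysqld (port 3311), since both SQL nodes read from the same data nodes.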

If you want to learn more:
https://github.com/wwwted/ndb-cluster-workshop
https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/
https://www.slideshare.net/ocklin/mysql-ndb-cluster-101
https://www.slideshare.net/frazerClement/mysql-cluster-80-tutorial
https://www.slideshare.net/ocklin/mysql-ndb-cluster-80-sql-faster-than-nosql
https://www.slideshare.net/datacharmer/mysql-ndb-80-clusters-in-your-laptop-with-dbdeployer
http://mikaelronstrom.blogspot.com/

måndag 27 januari 2020

How to clone a MySQL test or development instance from InnoDB Cluster?


Introduction to InnoDB Cluster

If you have not heard about MySQL InnoDB Cluster, it is a built-in high-availability solution for MySQL. The key benefit over older high-availability solutions is that InnoDB Cluster is built into MySQL and supported on all platforms where MySQL is supported.

The key components of MySQL InnoDB Cluster:
- MySQL Group Replication
- MySQL Shell
- MySQL Router

MySQL Group Replication is a plugin that makes sure data is distributed to all nodes, conflicts are handled, and recovery is performed if needed.
MySQL Shell makes it easy to configure and administer your InnoDB Cluster.
MySQL Router is the last part of InnoDB Cluster; it's a lightweight middleware that provides transparent routing between the application and the back-end MySQL servers that are part of Group Replication.

If you want to get started with InnoDB Cluster:
- https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-userguide.html
- https://github.com/wwwted/MySQL-InnoDB-Cluster-3VM-Setup
- https://lefred.be/content/category/mysql/innodb-cluster/

Clone a test/development environment from InnoDB Cluster

Most users need some way to periodically refresh test and development environments from production. In the past many users solved this with traditional master-slave replication: the solution was to set up a MySQL slave and use LVM snapshots to create new test/development environments.

For InnoDB Cluster we can use MySQL Clone (available from MySQL 8.0.17) to make creating a test or development environment much easier.

Steps needed on InnoDB Cluster (create a dedicated clone user for the donor):
CREATE USER clone_user@'%' IDENTIFIED BY 'clone_password';
GRANT BACKUP_ADMIN ON *.* to clone_user@'%';
GRANT SELECT ON performance_schema.* TO clone_user@'%';
GRANT EXECUTE ON *.* to clone_user@'%';
You might want to limit the clone user so it can only connect from a specific subnet instead of from all hosts (%), as sketched below. The InnoDB Cluster must run MySQL 8.0.17 or later.
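
For example, if your test and development hosts all live in 10.0.0.0/24, the same user could instead be created like this (just a sketch of the idea, adjust the network to your environment):

CREATE USER clone_user@'10.0.0.%' IDENTIFIED BY 'clone_password';
GRANT BACKUP_ADMIN ON *.* TO clone_user@'10.0.0.%';
GRANT SELECT ON performance_schema.* TO clone_user@'10.0.0.%';
GRANT EXECUTE ON *.* TO clone_user@'10.0.0.%';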

Next we need to create the test or development server.
1) Start a new MySQL instance (using MySQL 8.0.17 or later).
2) Provision data using clone:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = '10.0.0.17:3306';
SET global log_error_verbosity=3;
CLONE INSTANCE FROM clone_user@10.0.0.17:3306 identified by 'clone_password';
UNINSTALL PLUGIN clone;
UNINSTALL PLUGIN group_replication;
Some notes on the above: you need to set clone_valid_donor_list to the IP and port of the donor instance you have selected in the cluster. The log level is set to 3 so you can monitor the progress of the clone in the error log.
There are some checks in the MySQL Server that force us to load the group replication plugin prior to executing the clone command. If you do not load the group replication plugin you will get an error like: ERROR 3870 (HY000): Clone Donor plugin group replication is not active in Recipient
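
While the clone is running (or after it has finished) you can also follow its progress from the recipient side in performance_schema; the clone_status and clone_progress tables are available from MySQL 8.0.17 once the clone plugin is installed:

SELECT STATE, ERROR_NO, ERROR_MESSAGE FROM performance_schema.clone_status;
SELECT STAGE, STATE, ESTIMATE, DATA FROM performance_schema.clone_progress;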

Of course, the final step will be to clean/wash the data before handing over the MySQL instance to test or development. This step is not covered by this blog, but our Data Masking and De-Identification functionality can be used to solve it ;)

I have tested the above procedure using MySQL 8.0.18.

Happy Clustering!

onsdag 11 september 2019

Setting up multi-source replication (MySQL 8.0) with GTID based replication

Last week I posted a blog on how to set up multi-source replication for MySQL 5.7 using GTID-based replication; in this blog we will do the same using MySQL 8.0.

Most of the procedure is the same; there are only two things that we need to change:
1) Procedure for creating the replication user.
2) Fetching the GTID value from the MySQL dumpfile.

1) Create replication user for MySQL 8
MySQL 8 uses a new default authentication plugin, so creating the replication user differs from MySQL 5.7. The procedure for 5.7 was:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
With MySQL 8 you need to create the user like:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED WITH sha256_password BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
Another option is to configure MySQL to use the old authentication method; then you do not have to modify the CREATE statement above. This can be done by setting the configuration variable default_authentication_plugin to "mysql_native_password".
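
Whichever option you choose, you can check which authentication plugin the replication user ended up with directly on the master:

mysql> SELECT user, host, plugin FROM mysql.user WHERE user = 'repl_user';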

2) Changed format of GTID_PURGED information in the dumpfile
The GTID_PURGED information in the dumpfile now includes a comment like:
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ 'dfa44283-d466-11e9-80ec-ec21e522bf21:1-6';

To fetch the GTID you now have to use the command below, which also removes the new comment:
cat dumpM1.sql | grep GTID_PURGED | perl -p0 -e 's#/\*.*?\*/##sg' | cut -f2 -d'=' | cut -f2 -d$'\''

The github page and the scripts to create a test environment have also been updated!

fredag 6 september 2019

Setting up multi-source replication (5.7) with GTID based replication

The purpose of this guide is to show how to set up a new multi-source slave from two (or more) existing masters. We will also set up a replication filter to only apply data from some of the databases.

So the scenario is that we already have 2 masters up and running and we want to filter one database from each. Let's call them master1 and master2, with databases master1 and master2, to make it easy.

The first step is to make sure that both master databases have GTID-based replication enabled. The configuration (my.cnf) needed on the master servers is shown below (a restart of the databases will be needed):
gtid-mode=ON
log-slave-updates=ON
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
log-bin=mysql-bin
binlog-format=ROW

The next step is to create a user for replication on both master servers:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';

Provisioning of data to new multi-source slave

1) Now it's time to provision some data to our slave; step one is to run mysqldump on the master databases:
mysqldump -u<user> -p<pass> --single-transaction --triggers --routines --set-gtid-purged=ON --databases master1 > dumpM1.sql
mysqldump -u<user> -p<pass> --single-transaction --triggers --routines --set-gtid-purged=ON --databases master2 > dumpM2.sql
(we only dump database master1 from master1 and master2 from master2)

2) Get GTID_PURGED information from dump files and remember this (you need this later):
cat dumpM1.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\''
cat dumpM2.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\''
(should look something like: aeaeb1f9-cfab-11e9-bf5d-ec21e522bf21:1-5)

3) Now we need to remove GTID_PURGED information from dump files before import:
sed '/GTID_PURGED/d' dumpM1.sql > dumpM1_nopurge.sql
sed '/GTID_PURGED/d' dumpM2.sql > dumpM2_nopurge.sql

4) Import data into new multi-source slave:
mysql -u<user> -p<pass> < dumpM1_nopurge.sql
mysql -u<user> -p<pass> < dumpM2_nopurge.sql

5) Clear GTID state of slave server and set GTID_PURGED to values collected earlier in step 2):
mysql> RESET MASTER;
mysql> SET GLOBAL GTID_PURGED="<Master1 GTID_PURGED>,<Master2 GTID_PURGED>";
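
Before starting replication you can verify the GTID state on the slave; after the RESET MASTER and SET GLOBAL GTID_PURGED above, gtid_executed should contain the combined sets from both masters:

mysql> SELECT @@GLOBAL.gtid_purged;
mysql> SELECT @@GLOBAL.gtid_executed;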

Configure and start multi-source replication

1) Now it's time to configure the replication channels and set the filter rule
    (filters on slave will be for all channels):
mysql> CHANGE MASTER TO MASTER_HOST=<master1-host>, MASTER_USER="repl", MASTER_PASSWORD="repl", MASTER_AUTO_POSITION=1 FOR CHANNEL "master1";
mysql> CHANGE MASTER TO MASTER_HOST=<master2-host>, MASTER_USER="repl", MASTER_PASSWORD="repl", MASTER_AUTO_POSITION=1 FOR CHANNEL "master2";
mysql> CHANGE REPLICATION FILTER REPLICATE_WILD_DO_TABLE=('master1.%','master2.%');

2) After this we start both channels:
mysql> START SLAVE FOR CHANNEL "master1";
mysql> START SLAVE FOR CHANNEL "master2";

3) You can now look at the status with:
mysql> SHOW SLAVE STATUS FOR CHANNEL "master1"\G
mysql> SHOW SLAVE STATUS FOR CHANNEL "master2"\G

Setup your own multi-source sandbox environment

If you want to play around with multi-source replication and want to jump-start a sandbox environment, you might want to look at my workshop on GitHub.