Monday, February 17, 2020

Getting started with MySQL Cluster using MySQL Cluster Manager

Creating a MySQL NDB Cluster is really easy when using MySQL Cluster Manager aka "MCM".
In this blog post we will look at how to install and configure MCM and then create a new cluster using MCM commands.

Steps to install and set up MCM as a service are explained here.

Once you have completed the setup of MCM, the next step is to install MySQL NDB Cluster.
First, download the latest Cluster 7.6 binaries from: https://dev.mysql.com/downloads/cluster/
(7.6.13 as of today)

cd /tmp
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.6/mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64.tar.gz

cd /opt
tar xzf /tmp/mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64.tar.gz
mv mysql-cluster-gpl-7.6.13-linux-glibc2.12-x86_64 mysql-cluster-7.6.13
Now it's time to start the MCM client and create our cluster.
Start mcm: PATH=$PATH:/opt/mysql-cluster-7.6.13/bin /opt/mcm/mcm1.4.8/bin/mcm
(For future use I recommend that you set the $PATH variable:
export PATH=$PATH:/opt/mysql-cluster-7.6.13/bin:/opt/mcm/mcm1.4.8/bin)

Run the commands below in the mcm client (change basedir to the folder containing the cluster binaries):
(for details on the commands below, look here)
create site --hosts= mysite;
add package --basedir=/opt/mysql-cluster-7.6.13 cluster7613;
create cluster --package=cluster7613 --processhosts=ndb_mgmd@,ndbmtd@,ndbmtd@ mycluster;
add process --processhosts=mysqld@,mysqld@ mycluster;
add process --processhosts=ndbapi@,ndbapi@ mycluster;
add process --processhosts=ndbapi@,ndbapi@ mycluster;
set port:mysqld:50=3310 mycluster;
set port:mysqld:51=3311 mycluster;
start cluster mycluster;
show status -r mycluster;
Output from the last command, "show status", should look like:
| NodeId | Process  | Host       | Status  | Nodegroup | Package     |
| 49     | ndb_mgmd |  | running |           | cluster7613 |
| 1      | ndbmtd   |  | running | 0         | cluster7613 |
| 2      | ndbmtd   |  | running | 0         | cluster7613 |
| 50     | mysqld   |  | running |           | cluster7613 |
| 51     | mysqld   |  | running |           | cluster7613 |

Congratulations, you have installed and created your first MySQL NDB Cluster!

If you want to learn more:

Monday, January 27, 2020

How to clone a MySQL test or development instance from InnoDB Cluster?

Introduction to InnoDB Cluster

If you have not heard about MySQL InnoDB Cluster: InnoDB Cluster is a built-in high-availability solution for MySQL. The key benefit over older high-availability solutions is that InnoDB Cluster is built into MySQL and supported on all platforms where MySQL is supported.

The key components of MySQL InnoDB Cluster:
- MySQL Group Replication
- MySQL Shell
- MySQL Router

MySQL Group Replication is a plugin that makes sure data is distributed to all nodes, handles conflicts and also performs recovery if needed.
MySQL Shell makes it easy to configure and administer your InnoDB Cluster.
MySQL Router is the last part of InnoDB Cluster: a lightweight middleware that provides transparent routing between the application and the back-end MySQL Servers that are part of group replication.

If you want to get started with InnoDB Cluster:
- https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-userguide.html
- https://github.com/wwwted/MySQL-InnoDB-Cluster-3VM-Setup
- https://lefred.be/content/category/mysql/innodb-cluster/

Clone a test/development environment from InnoDB Cluster

Most users need some way to periodically refresh test and development environments from production. In the past many users solved this with traditional master-slave replication: the solution was to set up a MySQL slave and use LVM snapshots to create new test/development environments.

For InnoDB Cluster we can use MySQL Clone (available from MySQL 8.0.17) to create a test or development environment much more easily.

Steps needed on InnoDB Cluster (create a dedicated clone user for the donor):
CREATE USER clone_user@'%' IDENTIFIED BY 'clone_password';
GRANT BACKUP_ADMIN ON *.* to clone_user@'%';
GRANT SELECT ON performance_schema.* TO clone_user@'%';
GRANT EXECUTE ON *.* to clone_user@'%';
You might want to limit the clone user to only be able to connect from a specific sub-net instead of from all hosts (%). The InnoDB Cluster must run MySQL 8.0.17 or later.

Next we need to create the test or development server.
1) Start a new MySQL instance (using MySQL 8.0.17 or later).
2) Provision data using clone:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = '';
SET global log_error_verbosity=3;
CLONE INSTANCE FROM clone_user@ identified by 'clone_password';
UNINSTALL PLUGIN group_replication;
Some notes on above, you need to configure the clone_valid_donor_list to the IP of the donor instance in the cluster you have selected. The log level is set to 3 so you can monitor progress of clone in the log file.
There are some controls in the MySQL Server that forces us to load the group replication plugin prior to executing the clone command. If you do not load the group replication plugin you will get an error like: ERROR 3870 (HY000): Clone Donor plugin group replication is not active in Recipient

Of course, the final step will be to clean the data before handing over the MySQL instance to test or development. That step is not covered by this blog, but our MySQL Enterprise Masking and De-Identification functionality can be used to solve it ;)

I have tested above procedures using MySQL 8.0.18.

Happy Clustering!

Wednesday, September 11, 2019

Setting up multi-source replication (MySQL 8.0) with GTID based replication

Last week I posted a blog on how to set up multi-source replication for MySQL 5.7 using GTID based replication; in this blog we will do the same using MySQL 8.0.

Most of the procedure is the same; the only two things we need to change are:
1) Procedure for creating the replication user.
2) Fetching the GTID value from the MySQL dumpfile.

1) Create replication user for MySQL 8
MySQL 8 uses a new default authentication plugin, so creating the replication user differs from MySQL 5.7. The procedure for 5.7 was:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
With MySQL 8 you need to create the user like:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED WITH sha256_password BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
Another option is to configure MySQL to use the old authentication method; then you do not have to modify the CREATE statement above. This can be done by setting the configuration variable default_authentication_plugin to "mysql_native_password".
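As a sketch, assuming you set the variable in my.cnf, that would look like:

```ini
[mysqld]
default_authentication_plugin=mysql_native_password
```

A restart of the server is required for the new default to take effect.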

2) Changed format of GTID_PURGED information in the dumpfile
The GTID_PURGED information in the dumpfile now includes a version comment like:
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ 'dfa44283-d466-11e9-80ec-ec21e522bf21:1-6';

To fetch the GTID you now have to use the command below, which also removes the new comment:
cat dumpM1.sql | grep GTID_PURGED | perl -p0 -e 's#/\*.*?\*/##sg' | cut -f2 -d'=' | cut -f2 -d$'\''

The GitHub page and the scripts for creating a test environment have also been updated!

Friday, September 6, 2019

Setting up multi-source replication (5.7) with GTID based replication

The purpose of this guide is to show how to set up a new multi-source slave from two (or more) existing masters. We will also set up a replication filter to only apply data from some of the databases.

So the scenario is that we already have 2 masters up and running and we want to filter one database from each. Let's call them master1 and master2, with databases master1 and master2, to make it easy.

First step will be to make sure that both master databases have GTID replication enabled. The configuration (my.cnf) needed on the master servers is shown below (a restart of the databases will be needed):
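A minimal sketch of the GTID settings needed on each master (server-id must be unique per server):

```ini
[mysqld]
server-id                = 1         # must be unique on each master
log-bin                  = mysql-bin
gtid-mode                = ON
enforce-gtid-consistency = ON
```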

Next step is to create a user for replication on both master servers:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';

Provisioning of data to new multi-source slave

1) Now it's time to provision some data to our slave. Step one is to run mysqldump on the master databases:
mysqldump -u<user> -p<pass> --single-transaction --triggers --routines --set-gtid-purged=ON --databases master1 > dumpM1.sql
mysqldump -u<user> -p<pass> --single-transaction --triggers --routines --set-gtid-purged=ON --databases master2 > dumpM2.sql
(we only dump database master1 from master1 and master2 from master2)

2) Get the GTID_PURGED information from the dump files and note it down (you will need it later):
cat dumpM1.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\''
cat dumpM2.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\''
(should look something like: aeaeb1f9-cfab-11e9-bf5d-ec21e522bf21:1-5)

3) Now we need to remove GTID_PURGED information from dump files before import:
sed '/GTID_PURGED/d' dumpM1.sql > dumpM1_nopurge.sql
sed '/GTID_PURGED/d' dumpM2.sql > dumpM2_nopurge.sql

4) Import data into new multi-source slave:
mysql -u<user> -p<pass> < dumpM1_nopurge.sql
mysql -u<user> -p<pass> < dumpM2_nopurge.sql

5) Clear the GTID state of the slave server and set GTID_PURGED to the values collected earlier in step 2):
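As a sketch, using placeholders for the GTID sets you saved in step 2:

```sql
-- Clear the slave's own GTID execution history before seeding it
RESET MASTER;
-- Set GTID_PURGED to the union of the values from both dump files
SET GLOBAL GTID_PURGED='<gtid-set-from-dumpM1>,<gtid-set-from-dumpM2>';
```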

Configure and start multi-source replication

1) Now it's time to configure the replication channels and set the filter rule
    (filters on the slave apply to all channels):
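As a sketch, assuming the masters are reachable as hosts master1 and master2 and using the repl_user created earlier:

```sql
CHANGE MASTER TO MASTER_HOST='master1', MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_pass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master1';
CHANGE MASTER TO MASTER_HOST='master2', MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_pass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master2';
-- Only apply data for the two databases (the filter applies to all channels)
CHANGE REPLICATION FILTER REPLICATE_DO_DB=(master1, master2);
```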

2) After this we start both channels:
mysql> START SLAVE FOR CHANNEL "master1";
mysql> START SLAVE FOR CHANNEL "master2";

3) You can now look at the status with:
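For example, per channel:

```sql
SHOW SLAVE STATUS FOR CHANNEL "master1"\G
SHOW SLAVE STATUS FOR CHANNEL "master2"\G
```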

Setup your own multi-source sandbox environment

If you want to play around with Multi-Source replication and want to jumpstart a sandbox environment then you might want to look at my workshop at GitHub.

Monday, April 1, 2019

MySQL Error: Too many connections!

When your application gets the error "too many connections", the underlying problem might be caused by multiple things. For an in-depth understanding of how MySQL handles connections, read this blog.

In this blog we will investigate why these problems occur and how to solve them when running MySQL installed on Debian/Ubuntu using systemd (default MySQL packages).

OS: Debian 9
MySQL Server version: 5.7.25 MySQL Community Server (GPL)
(will most likely be the same for MySQL 8.0 versions)

The goal is to have 10.000 working connections to MySQL!

The default value for max_connections is 151 connections, so the first step is to increase the max_connections variable to 10.000.
This is documented in the manual here:
- https://dev.mysql.com/doc/refman/5.7/en/too-many-connections.html
- https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections

max_connections is a dynamic setting, so let's increase the value to 10.000 and see what happens.
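For example, from the mysql client as root:

```sql
SET GLOBAL max_connections = 10000;
```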

root@debian:~# mysql -uroot -proot -se "select @@max_connections"

Hmmm, it looks like we only got 214 connections. Let's look at the error log:

2019-04-01T06:29:48.301871Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 50000)
2019-04-01T06:29:48.302000Z 0 [Warning] Changed limits: max_connections: 214 (requested 10000)
2019-04-01T06:29:48.302004Z 0 [Warning] Changed limits: table_open_cache: 400 (requested 2000)

Looks like we hit some resource limit.
Let's look in the MySQL manual for running MySQL under systemd; that can be found here.
It looks like we need to increase the number of allowed open files for MySQL. Locate the systemd configuration folder for MySQL and create the file /etc/systemd/system/mysql.service.d/override.conf (the file can be called anything ending with .conf). You can also create/modify the override file using the native systemctl command: systemctl edit mysql

Add LimitNOFILE=10000 to the file override.conf like:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
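The override file would contain something like:

```ini
[Service]
LimitNOFILE=10000
```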

After this we need to reload the systemd daemon and restart the MySQL service:

root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql

MySQL is now saying we have 9190 connections:

root@debian:~# mysql -uroot -proot -se "select @@max_connections"

So, MySQL uses some file descriptors for additional work, and we need to set this a bit higher to get 10.000 connections. Let's set it to 11.000, reload the systemd daemon and restart the MySQL service.

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
mysql: [Warning] Using a password on the command line interface can be insecure.

Good, now we have 10.000 connections available according to MySQL.

Let's run our application and verify we can get 10.000 connections. I use this small perl script to open 10.000 connections.

Just below 5.000 connections I get a new error in the application: "Can't create a new thread (errno 11)".
Let's have a look at the MySQL error log:

root@debian:~# tail -1 /var/log/mysql/error.log
2019-04-01T06:50:35.657397Z 0 [ERROR] Can't create thread to handle new connection(errno= 11)

I found this new limit by running the command below while the perl script was running:
watch -n.5 "mysql -uroot -proot -se'show status like \"%threads%\"'"

Strange, where is this new limitation just below 5.000 connections coming from?

Looking at the resource limits for my MySQL daemon, I should have close to 8000 processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes
Max open files            11000                11000                files

Let's look at the status report for my MySQL service:

root@debian:~# systemctl status mysql | grep Tasks
    Tasks: 127 (limit: 4915)

There seems to be an additional limit on Tasks that limits me to 4915 connections.
Let's expand our override.conf configuration to cover 11.000 tasks as well.

root@debian:~# cat /etc/systemd/system/mysql.service.d/

(remember to reload systemd and restart MySQL service after each change in override.conf)
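At this point override.conf would look something like:

```ini
[Service]
LimitNOFILE=11000
TasksMax=11000
```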

Now we get just under 8.000 connections and the same error "Can't create a new thread (errno 11)", but this time it's because of the limit on max processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes

Let's increase this limit to 11.000 in our override.conf:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
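The final override.conf would then look something like:

```ini
[Service]
LimitNOFILE=11000
TasksMax=11000
LimitNPROC=11000
```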

After reloading systemd configuration and restarting MySQL service I can now get 10.000 connections and the perl script runs without any errors!

There are different limits you hit when raising max_connections in MySQL:
- The default max_connections is 151.
- At 214 connections you are limited by max open files.
- At 4915 connections you are limited by Max Tasks (systemd).
- Just below 8000 connections you are limited by the max number of processes.

By adding a override file (as showed above) you can overcome all these limits.

Remember to:
- Look at the error messages in the application and the MySQL error logs.
- Look at the output from: cat /proc/$( pgrep -o mysql )/limits
- Look at the output from: systemctl status mysql
- Test your application (or use my perl script) and monitor that it works!
- Monitor how many connections you have: mysql -uroot -p -e'show status like "Threads_connected"'

Wednesday, March 9, 2016

MySQL footprint less than 10MB

The MySQL binary packages have been growing a lot over the last GA versions. I wrote a blog post on creating a minimal installation a bit over a year ago, and since then we have seen some interesting posts by Mablomy and Giuseppe on creating smaller installations of MySQL for different purposes.

I will continue where Mablomy left off and use UPX to try to shrink the MySQL binary even more. UPX (Ultimate Packer for eXecutables) is a free and open source executable packer supporting a number of file formats from different operating systems. If you want to read more, I recommend the link above or the manual page once you have installed upx.

Using the default installation of MySQL 5.7 and then running strip(1) to remove symbol information, the mysqld binary goes down in size from 240MB to 26MB.

Next step is to compress the MySQL binary using UPX.
I'm using Ubuntu 14.04 and installed UPX via Ubuntu repositories like: sudo apt-get install upx
After this, run upx on the mysqld binary to shrink it further.

bin$ upx mysqld
                       Ultimate Packer for eXecutables
                          Copyright (C) 1996 - 2013
UPX 3.91        Markus Oberhumer, Laszlo Molnar & John Reiser   Sep 30th 2013
        File size         Ratio      Format      Name
   --------------------   ------   -----------   -----------
  26649816 ->   9145508   34.32%  linux/ElfAMD   mysqld        
-rwxr-xr-x 1 ted ted   9145508 mar  9 17:05 mysqld

Size of mysqld binary is now 8.8MB!
By adding option --best to upx I was able to get binary down to 8.2MB.

After shrinking the binary, everything works as before; I started MySQL using mysqld_safe and it worked as expected. You can use upx to shrink all kinds of binaries.

EDIT: Great news, with MySQL 8.0.16 we now have minimal packages via our normal downloads, see more here: https://mysqlrelease.com/2019/05/the-amazing-shrinking-tarball/

Thursday, November 26, 2015

Lossless replication using MySQL semi-synchronous replication and MySQL Fabric!

In my last posts playing with MySQL Fabric, here and here, I used MySQL Fabric version 1.4.1 and MySQL 5.6. I must say lots of things have happened over the last 20 months! MySQL has released several new versions of MySQL Fabric and also a new version of the server with MySQL 5.7.

Current GA version of MySQL Fabric is 1.5.6 and there is also a MySQL Fabric 1.6.2 beta, both versions can be downloaded from here.
As I said, lots of new things have happened; it looks like the focus has been on making it easier to work with MySQL Fabric, but also on making the output from fabric commands much easier to understand. With the latest version 1.6.2 (not yet GA), fabric also has support for running multiple fabric instances; more news can be found here.

Semi-synchronous replication
Semi-synchronous replication is an enhancement to normal replication that makes it possible to fail over to a slave server with minimal loss of data (this will depend on the durability settings of MySQL). In MySQL 5.7 we have introduced some changes making this feature even better; more details can be found here. So, how can we use semi-synchronous replication together with MySQL Fabric? Well, it's not that hard to configure, just a few commands as explained in our manual, but hey, let's try to upgrade the old MySQL Fabric environment and make it use semi-synchronous replication!

Upgrading environment to MySQL 5.7
Some things in the installation/configuration had to be modified moving from MySQL 5.6 to 5.7:
  • the installation process has changed a lot (more secure by default); if you want the old behavior, use --initialize-insecure to get some of it back.
  • server-id is now mandatory if you enable binary logging, more information here and here.
For more information on what's new in MySQL 5.7, and on removed or deprecated functionality, read more here and here.

MySQL Fabric
Next step is to install MySQL Fabric. I will not cover the steps, as this was covered in my old posts and the procedure for the latest version of MySQL Fabric is explained in detail in our manual.

The scenario is: we have MySQL Fabric up and running and 3 stand-alone MySQL 5.7 servers waiting to be used by MySQL Fabric.

First step is to create our group and populate it with our 3 MySQL servers. We also run the promote command to elect a primary server; the activate command enables automatic failover if the primary server dies.

mysqlfabric group create group1
mysqlfabric group add group1
mysqlfabric group add group1
mysqlfabric group add group1
mysqlfabric group promote group1
mysqlfabric group activate group1

Let's look at current status of our group of servers:

ted@ted-PORTEGE-Z30-A:/opt/MySQL-fabricII$ mysqlfabric group lookup_servers group1
Fabric UUID:  5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
                         server_uuid         address    status       mode weight
------------------------------------ --------------- --------- ---------- ------
8066541b-9423-11e5-ab31-7c7a910307fa   PRIMARY READ_WRITE    1.0
839f04ac-9423-11e5-ab4d-7c7a910307fa SECONDARY  READ_ONLY    1.0
8681ea65-9423-11e5-ad0f-7c7a910307fa SECONDARY  READ_ONLY    1.0

All looks fine, our servers are up and running and the group is ready!

Enabling Semi-synchronous replication
This whole procedure can be done without restarting our secondary server. For safety we set the secondary server in SPARE mode, so that it will not be elected as primary in case of a failover while we are working with the server.

On Primary Server load semi-sync plugin and enable semi-sync replication:

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;

On one or both of the secondary servers: load the semi-sync plugin, enable semi-sync replication, set the server in SPARE mode and restart the replication channel.

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysqlfabric server set_status <server-uuid> SPARE
mysqlfabric server set_status <server-uuid> SECONDARY

Verify semi-synchronous replication is up and running on slaves:

mysql> show status like  'Rpl_semi_sync_slave_status';
| Variable_name              | Value |
| Rpl_semi_sync_slave_status | ON    |

We are done, you now have semi-sync replication up and running in your group!

My MySQL Fabric wish list for the future:
  • Feature request: semi-synchronous replication support in MySQL Fabric.
  • A Fabric GUI section in Workbench for managing my groups of servers. I would also like to extend this functionality to handle basic monitoring and help to architect/deploy groups or failed servers.
  • Support for MySQL Group Replication in Fabric, so we have three different ways of replicating data between servers in our HA-groups.