Thursday, April 10, 2014

Oracle MySQL Tech Tour event in Helsinki 7th of May

Marko Mäkelä from the InnoDB engineering team will deliver a session on "InnoDB on-line schema changes and improved durability in MySQL 5.7"

Are you about to develop a new modern web-based application, wondering what the best database solution is for you? Would you like to learn how to use NoSQL access to MySQL, as well as the new MySQL Database and MySQL Cluster features to build next generation applications?

Join us for this “Performance, Scalability and High Availability with MySQL” Tech Tour! You will hear directly from the source about the latest MySQL innovations from Oracle, including how the MySQL 5.7 Development Milestone Releases yet again significantly increase the performance of MySQL, and how the new MySQL Fabric provides you with a framework for both High-Availability and sharding of MySQL Databases. Our technical experts will show you how to take advantage of the new features and enhancements in the best MySQL product releases ever, and share tips and tricks to confer high performance, scalability, availability and flexibility to your applications.

Register for the event here!

See you in Helsinki on the 7th of May!

Wednesday, February 26, 2014

Easy HA and automatic failover using MySQL Fabric - Part II

In Part I of this blog series we installed and configured MySQL Fabric; now we are ready to start testing our Python application against MySQL Fabric.

Most of this is covered in the first blog post, but let's do a short recap. We now have an HA group consisting of 3 MySQL servers running on ports 63307 - 63309. We have also configured one MySQL instance on port 63306 to act as backing store for MySQL Fabric; it holds all the Fabric metadata for us.

Now it's time to focus on the application side of things. As we see in the picture, a key component is the Fabric-aware connector; this is where all the magic happens. The connector also caches the state stored in the backing store, so we do not do an unnecessary lookup to the Fabric node for every statement executed.

HA and failover

The Python program we will be using is a slightly modified version of our example program in the "quick start" from here. I have altered the group name, changed the connection parameters and error handling, and also added a forever loop at the end to make things a bit more interesting. The program will first add 10 rows into table test.employees and then, each half second, add one more row and select the last 5 added rows. If you want to stop the program, just hit Ctrl-C. Also, as you might see, I'm not a natural Python coder, so any comments on the code will be silently ignored ;)

import mysql.connector
from mysql.connector import fabric

import time

def add_employee(conn, emp_no, first_name, last_name):
    conn.set_property(group="mygroup", mode=fabric.MODE_READWRITE)
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO employees VALUES (%s, %s, %s)",
            (emp_no, first_name, last_name)
        )
    except mysql.connector.Error:
        print "Database connection error, trying to reconnect ..."

def find_employee(conn, emp_no):
    conn.set_property(group="mygroup", mode=fabric.MODE_READONLY)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT first_name, last_name FROM employees "
            "WHERE emp_no = %s", (emp_no, )
        )
        for row in cur:
            print row
    except mysql.connector.Error:
        print "Database connection error, trying to reconnect ..."

# Address of the Fabric node, not the MySQL server we are going to connect to.
def connect():
    try:
        conn = mysql.connector.connect(
            fabric={"host" : "localhost", "port" : 8080},
            user="root", database="test", password="root",
            autocommit=True
        )
    except mysql.connector.Error:
        print "Error trying to get a new database connection"
    return conn

conn = connect()
conn.set_property(group="mygroup", mode=fabric.MODE_READWRITE)
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS employees")
cur.execute(
    "CREATE TABLE employees ("
    "   emp_no INT, "
    "   first_name CHAR(40), "
    "   last_name CHAR(40)"
    ")")

high = 10
for x in range(0, high):
    add_employee(conn, x, "John"+":"+str(x), "Doe")

while True:
    time.sleep(0.5)
    high += 1
    add_employee(conn, high, "John"+":"+str(high), "Doe")
    for x in range(high-5, high):
        find_employee(conn, x)

So, let's start this application up and see what happens.

# Start demo program
bash$ python

(u'John:0', u'Doe')
(u'John:1', u'Doe')
(u'John:2', u'Doe')
(u'John:3', u'Doe')
(u'John:4', u'Doe')
(u'John:5', u'Doe')

It's working, let's see how the load is spread.

ted@ted-Latitude-E6410:/opt/MySQL-fabric/src$ mysqladmin -uroot -proot -h127.0.0.1 -P63307 status
Uptime: 9176  Threads: 5  Questions: 200842  Slow queries: 0  Opens: 74  Flush tables: 1  Open tables: 64  Queries per second avg: 21.887
ted@ted-Latitude-E6410:/opt/MySQL-fabric/src$ mysqladmin -uroot -proot -h127.0.0.1 -P63308 status
Uptime: 9178  Threads: 6  Questions: 211802  Slow queries: 0  Opens: 76  Flush tables: 1  Open tables: 65  Queries per second avg: 23.077
ted@ted-Latitude-E6410:/opt/MySQL-fabric/src$ mysqladmin -uroot -proot -h127.0.0.1 -P63309 status
Uptime: 9181  Threads: 5  Questions: 212836  Slow queries: 0  Opens: 74  Flush tables: 1  Open tables: 64  Queries per second avg: 23.182

All MySQL servers in the group are sharing the work load, that's great!

Switchover for regular maintenance

This is for when you need to take your primary offline for regular maintenance; this could be anything from an OS update to adding more disk to the machine. All we need to do in this case is tell MySQL Fabric to promote a new primary. Since we like control, we can also tell Fabric which server we want to elect as primary.

bash$ mysqlfabric group lookup_servers mygroup
Command :
{ success     = True
  return      = [['fae19070-9ea5-11e3-8d61-5c260a4a26ad', '', False, 'SECONDARY'], ['fae19b5f-9ea5-11e3-8d61-5c260a4a26ad', '', True, 'PRIMARY'], ['fae19b62-9ea5-11e3-8d61-5c260a4a26ad', '', False, 'SECONDARY']]
  activities  =

bash$ mysqlfabric group promote mygroup fae19070-9ea5-11e3-8d61-5c260a4a26ad
Procedure :
{ uuid        = 37e80ebe-9a1f-4d26-a647-d62ce6c49fd1,
  finished    = True,
  success     = True,
  return      = True,
  activities  =

bash$ mysqlfabric group lookup_servers mygroup
Command :
{ success     = True
  return      = [['fae19070-9ea5-11e3-8d61-5c260a4a26ad', '', True, 'PRIMARY'], ['fae19b5f-9ea5-11e3-8d61-5c260a4a26ad', '', False, 'SECONDARY'], ['fae19b62-9ea5-11e3-8d61-5c260a4a26ad', '', False, 'SECONDARY']]
  activities  =

First we ran lookup_servers to see the current state of Fabric, then we promoted the MySQL instance on port 63307 to be the new primary, and finally we verified that our changes went through. Most of the time this operation does not affect the application, but sometimes we see that the "Database connection error, trying to reconnect ..." message is triggered and the application reconnects.
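When a switchover (or failover) happens mid-statement, the connector raises an error and the application is expected to retry. The demo program just prints a message; a slightly more robust pattern is a small retry helper like the sketch below (plain Python; the helper and its names are my own for illustration, not part of the Fabric API):

```python
import time

def with_retry(operation, attempts=3, delay=0.5):
    """Run operation(); on error, wait and retry a limited number of times.

    The delay gives Fabric time to promote a new primary before we retry.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, propagate the error
            time.sleep(delay)
```

In the demo program, the add_employee and find_employee calls could then be wrapped as, for example, with_retry(lambda: add_employee(conn, x, name, "Doe")).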

Failover in case of Primary failure

Now the interesting part begins: let's see what happens when we kill the primary server in the group. First we need to find out which server is the primary.

bash$ mysqlfabric group lookup_servers mygroup
Command :
{ success     = True
  return      = [['ef4593df-9e2b-11e3-8a45-5c260a4a26ad', '', True, 'PRIMARY'], ['ef49ce23-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'SECONDARY'], ['ef49d0b6-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'SECONDARY']]
  activities  =

So, the MySQL instance running on port 63307 is elected as primary server for my group. Let's get the pid for this instance and kill it!

bash$ pgrep -fl mysql | grep 63307
24762 .......

bash$ kill -9 24762

In the fabric log you will now see:
[INFO] 1393349679.578709 - FailureDetector(mygroup) - Server (ef4593df-9e2b-11e3-8a45-5c260a4a26ad) in group (mygroup) has been lost.
[INFO] 1393349679.578908 - FailureDetector(mygroup) - Master (ef4593df-9e2b-11e3-8a45-5c260a4a26ad) in group (mygroup) has been lost.

The application is still working, no hiccups and nothing stopped. Let's see how MySQL Fabric has reorganized itself.

bash$ mysqlfabric group lookup_servers mygroup
Command :
{ success     = True
  return      = [['ef4593df-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'FAULTY'], ['ef49ce23-9e2b-11e3-8a45-5c260a4a26ad', '', True, 'PRIMARY'], ['ef49d0b6-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'SECONDARY']]
  activities  =

The MySQL instance on port 63307 is now considered FAULTY by Fabric, and the MySQL instance running on port 63309 took over as primary. Everything worked as expected. The first time around I forgot to run the command to set the group as active, "mysqlfabric group activate mygroup"; this meant no automatic failure detection, so no new primary was elected and my Python program died ...
If you don't want automatic failure detection, or need to take your HA group offline, you can always deactivate the group. If you do, you need to run the promote command manually to get a new primary if your current one dies.


I'm very positive about this first encounter with MySQL Fabric. I like the fact that it uses the connectors to handle failover; there is no need for a VIP address or a DNS manipulation solution in front of MySQL to achieve failover. Of course you can always invest in a load balancer, but that is normally very costly and adds one more product to take care of.

As a future wish I hope we will have support for Fabric in all our connectors so all of you can leverage the usefulness of MySQL Fabric!

Tuesday, February 25, 2014

Easy HA and automatic failover using MySQL Fabric - Part I

The purpose of this blog post and the next one (Part II) is to see how MySQL Fabric can be used as an HA solution. Many users today need HA and automatic failover, but not many need the sharding functionality, so I wanted to focus on creating one HA group and seeing how failover is handled by MySQL Fabric.

Talking to customers and users of MySQL, they all want HA, and the usual demands are:
  • Do not want to alter application code
  • Easy to install and manage
There are a lot of solutions for achieving HA in the MySQL ecosystem, but I think one key benefit of MySQL Fabric is that it is built into the connectors. This makes the technology easy to adopt for developers and easy to manage for DBAs, since there is a minimum of extra parts to take care of.

Getting started with MySQL Fabric, I consumed a number of blogs and docs.
If you want to see all resources around MySQL Fabric, I highly recommend this post in the MySQL forum for Fabric.

Install MySQL Fabric

My setup will have 4 MySQL instances: one for the MySQL Fabric backing store and 3 servers for my HA group.

The setup will be as shown in the picture on the right, with our 4 MySQL instances running on ports 63306 - 63309. The instance on port 63306 will be used by MySQL Fabric to store the internal metadata used by Fabric. MySQL instances 63307 - 63309 will be part of an HA group called mygroup.

High-availability groups are a very important part of MySQL Fabric; they are collections of servers that work together to deliver a database service to the application that connects through MySQL Fabric.

In the next post we will look at a Python application and see how this works with MySQL Fabric.

That was a bit of background, but now we need to get started. The first thing we need is 4 MySQL instances; I installed everything underneath /opt/fabric, see the configuration options for my MySQL instances below.

mkdir /opt/fabric/mysql1;
mkdir /opt/fabric/mysql2;
mkdir /opt/fabric/mysql3;
mkdir /opt/fabric/mysql4;

mysql_install_db --datadir=/opt/fabric/mysql1
mysql_install_db --datadir=/opt/fabric/mysql2
mysql_install_db --datadir=/opt/fabric/mysql3
mysql_install_db --datadir=/opt/fabric/mysql4

mysqld --socket=/opt/fabric/mysql1/my.sock --port=63306 --datadir=/opt/fabric/mysql1 --server-id=1 \
       --log-error=/opt/fabric/mysql1/mysql.err --innodb_buffer_pool_size=64M --gtid_mode=ON \
       --log-bin --log-slave-updates --enforce-gtid-consistency \
       --basedir=/path/to/mysql-src --innodb-flush-log-at-trx-commit=2 \
       --performance-schema-instrument='%=ON' > /opt/fabric/mysql1/mysql.out 2>&1 &

mysqld --socket=/opt/fabric/mysql2/my.sock --port=63307 --datadir=/opt/fabric/mysql2 --server-id=2 \
       --log-error=/opt/fabric/mysql2/mysql.err --innodb_buffer_pool_size=64M --gtid_mode=ON \
       --log-bin --log-slave-updates --enforce-gtid-consistency \
       --basedir=/path/to/mysql-src --innodb-flush-log-at-trx-commit=2 \
       --performance-schema-instrument='%=ON' > /opt/fabric/mysql2/mysql.out 2>&1 &

And for instances 3 and 4 I used ports 63308 and 63309, and server-id 3 and 4.

The next step is to fetch the MySQL Fabric code from here; MySQL Fabric is part of the latest release of MySQL Utilities, called 1.4.1 alpha.
I opted for the tar file, so I used the installation manual here. Then I downloaded and installed the Fabric-aware connector for Python, again following the manual here.

Now we are finally ready to start playing with MySQL Fabric.
First we need to make some minor changes to the MySQL Fabric configuration file (for me it was located at /etc/mysql/fabric.cfg). Under the Storage section, change the address to point to your MySQL backing store (for me this is localhost:63306), and check that the user, password and database settings in the same section are appropriate; you will need these in the next step.
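For reference, the part of fabric.cfg we touched ends up looking roughly like this (section and key names as in the Fabric 1.4 configuration file; the values are illustrative and mirror the setup in this post):

```
[storage]
address = localhost:63306
user = fabric
password = fabric
database = fabric
```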

Let's configure our backing store for MySQL Fabric; I will use the MySQL instance on port 63306. All we need to do manually is create the database with the same name as in the configuration above and grant privileges to the user from the configuration.

mysql -uroot -proot -S /opt/fabric/mysql1/my.sock
mysql> create database fabric;
mysql> CREATE USER fabric@localhost IDENTIFIED BY 'fabric';
mysql> GRANT ALL ON fabric.* TO fabric@localhost;

Once this step is done we can initialize our backing store, and then finally start our Fabric instance.

# Configure Fabric data node (63306)
bash$ mysqlfabric manage setup

# Check newly created tables
mysqlshow -ufabric -pfabric -h127.0.0.1 -P63306 fabric

# Start fabric
bash$ mysqlfabric manage start

Next step is to create our HA group and add our 3 MySQL instances.

# Create group and add my MySQL instances
bash$ mysqlfabric group create mygroup
bash$ mysqlfabric group add mygroup localhost:63307 root root
bash$ mysqlfabric group add mygroup localhost:63308 root root
bash$ mysqlfabric group add mygroup localhost:63309 root root

# Check the state of Fabric:
bash$ mysqlfabric group lookup_servers mygroup
bash$ mysqlfabric group check_group_availability mygroup

# Promote one of your servers as master, <UUID> is optional.
bash$ mysqlfabric group promote mygroup <UUID for one of the servers above>

# Activate automatic failover
bash$ mysqlfabric group activate mygroup

After this step you should see that Fabric is running, one of your MySQL instances has been elected as primary, and the other ones are slaves.

bash$ mysqlfabric group lookup_servers mygroup
Command :
{ success     = True
  return      = [['ef4593df-9e2b-11e3-8a45-5c260a4a26ad', '', True, 'PRIMARY'], ['ef49ce23-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'SECONDARY'], ['ef49d0b6-9e2b-11e3-8a45-5c260a4a26ad', '', False, 'SECONDARY']]
  activities  =

We now have a running MySQL fabric and all our MySQL instances are ready to start working!

In the next blog post (Part II) we will create an example application and look at some failover scenarios.

See you soon!

Saturday, February 22, 2014

Boosting performance with covering indexes!

Covering indexes can speed up your queries a lot; how much depends on whether you are CPU or disk bound. For those who are not familiar with covering indexes: a covering index is a special type of index that itself contains all the columns required by the statement. This is especially important for InnoDB, which has a clustered primary key, and all secondary indexes contain the primary key values in their leaf nodes. Using a covering index means one less lookup when fetching data, which normally leads to less random I/O.

So, let's see how much performance improvement we can get by using covering indexes. First we need to fire up a MySQL instance; I opted for an installation of MySQL 5.6.12 with default settings and imported a copy of the world database.

The next step is to create a test table `big` from the table City in the world database and boost it up with a few more rows, see below.

mysql> source /home/ted/src/world_innodb.sql
mysql> create table big like City;
mysql> insert into big select * from City;
mysql> insert into big select NULL,Name,CountryCode,District,Population from big;
... run the statement above 5-6 times until the table contains over 100,000 rows (each run doubles the row count) ...
mysql> analyze table big;

Now we have a table big with a few hundred thousand rows; my table contains just over 260,000 rows.

mysql> SELECT COUNT(*) FROM big;
+----------+
| COUNT(*) |
+----------+
|   261056 |
+----------+

The structure of table big is shown below. We have a secondary index on column CountryCode, and this index will be used by our test query.

Create Table: CREATE TABLE `big` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`)
) ENGINE=InnoDB

Our test query for this purpose will be:

SELECT CountryCode, SUM(Population) FROM big GROUP BY CountryCode;

Running this query 10 times (on my laptop) gave an average time of 0.28 seconds. All data fits in memory; there was no disk activity during execution of the statement.

Next, let's drop our old index and add a new covering index on table big:

mysql> ALTER TABLE big DROP INDEX CountryCode;
mysql> ALTER TABLE big ADD INDEX cc_pop (CountryCode, Population);

Re-running the test query, the average query time is now down to 0.09 seconds: a 3x improvement compared to not using a covering index.
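You can verify that the query really is served from the index alone by checking the execution plan; in MySQL, EXPLAIN shows "Using index" in the Extra column when a covering index is used. The same principle can be demonstrated in miniature with SQLite via Python's sqlite3 module (a sketch for illustration only; the table and index names mirror the ones above, and the sample rows are made up):

```python
import sqlite3

# In-memory toy copy of the `big` table with the covering index cc_pop.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (ID INTEGER PRIMARY KEY, Name TEXT, "
            "CountryCode TEXT, District TEXT, Population INT)")
con.execute("CREATE INDEX cc_pop ON big (CountryCode, Population)")
con.executemany("INSERT INTO big (Name, CountryCode, District, Population) "
                "VALUES (?, ?, ?, ?)",
                [("Kabul", "AFG", "Kabol", 1780000),
                 ("Amsterdam", "NLD", "Noord-Holland", 731200)])

# Ask the planner how it would run the test query.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT CountryCode, SUM(Population) FROM big GROUP BY CountryCode"
).fetchall()
for row in plan:
    print(row[-1])  # look for 'USING COVERING INDEX cc_pop' in the detail
```

The plan detail mentions a covering index, meaning the query never touches the table rows at all, which is exactly where the speedup above comes from.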

Conclusion: Using covering indexes can boost performance a lot for your most common queries. I recommend that you analyze your workload and see if you can add covering indexes for some of your more frequent queries. Be aware that they consume more space and that your inserts will slow down somewhat, but it should be worth investigating for frequent queries.

Monday, January 20, 2014

MySQL track at OUGN (Oracle User Group in Norway) spring conference!

This year we have a dedicated track for MySQL at the Oracle User Group event in Norway.
This event is scheduled between 3rd of April and 5th of April.

We have 7 great sessions on MySQL at the event:
  • Next generation MySQL: News in MySQL 5.7
  • MySQL and NoSQL: Best of both worlds
  • Introduction to MySQL Database Development with MySQL Workbench
  • Quick Dive into MySQL
  • How are queries optimized? An introduction to the optimizer
  • MySQL Replication and MySQL Utilities – including demo!
  • MySQL EXPLAIN Explained
See full agenda here:

Register here:

See you at the Oracle User Group event in Norway!

Friday, October 11, 2013

The Swedish MySQL User Group (SMUG) invites you to join our next meeting on the 28th of November!

Next SMUG Event in Stockholm, 28th of November!
We plan to start at around 16:00 CET

Agenda for meeting:
- MySQL Fabric, by Mats Kindahl, Senior Principal Software Developer MySQL
- Q&A MySQL fabric
Food & mingle

Register on LinkedIn here or on Facebook here

See you there!

Wednesday, June 5, 2013

MySQL 5.6 - a picture is worth a thousand words!

I feel obligated to use the classic quote from Arthur Brisbane: "One look is worth a thousand words". I recently received the picture below together with the question "guess when we upgraded to MySQL 5.6?"

The graph is from MySQL Enterprise Monitor (more information about MySQL Enterprise Monitor can be found here) and shows the number of rows accessed via indexes versus full table scans. All the blue area is where MySQL is not using indexes for statements, which is usually a bad thing.

On the 4th of June you can see that almost all "Rows Read via Full Scan" went away; this was the point in time when the customer upgraded to MySQL 5.6. The only spike seen now is when they run mysqldump for backups. The main reason behind this improvement is the optimizer work in MySQL 5.6; we improved the optimizer in several areas, but key for this customer was our optimization of subqueries.
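To give an idea of the kind of statement that benefits: MySQL 5.6 can rewrite IN-subqueries as semi-joins instead of re-executing the subquery for every outer row. A hypothetical example against the world database (illustrative only, not the customer's actual workload):

```sql
-- Pre-5.6 optimizers often executed this subquery once per outer row;
-- 5.6 can turn it into a semi-join and read far fewer rows.
SELECT Name
FROM City
WHERE CountryCode IN (SELECT Code FROM Country WHERE Continent = 'Europe');
```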

If you want to read more about what our wonderful engineering team is doing, our VP of engineering Tomas Ulin summarized it in a great blog post here. If you want to keep up with all the great stuff our optimizer team is doing, start following their blog.