Monday, March 2, 2015

MySQL Tech Tour in Oslo, Norway, on March 17th!

MySQL Tech Tour: Out of the Box MySQL High Availability - Performance - Scalability
March 17, 2015, Oslo, Norway

Did you know that the new MySQL Fabric delivers High Availability with automatic failure detection and failover? And that MySQL Fabric also enables scale-out with automated data sharding? Do you know how to take advantage of the MySQL SYS Schema?

Join us for this free MySQL Tech Tour to learn straight from the source how you can benefit from Oracle’s latest MySQL innovations. Our technical experts will help you understand how to take advantage of the wide range of new features and enhancements available in MySQL Fabric, MySQL 5.6, MySQL Cluster and other MySQL solutions. They will share tips & tricks to help you get the most out of your database. You will also discover what’s coming next in MySQL 5.7.

Agenda:
08:30 – 09:00 Registration & Welcome
09:00 – 09:30 Introduction and Latest News
                        Morten Andersen Oracle MySQL Technology Sales
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
09:30 – 09:50 MySQL EE and Enterprise Monitor demo
                        Morten Andersen Oracle MySQL Technology Sales Rep
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
10:00 – 10:40 MySQL Roadmap
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
10:40 – 11:00 Coffee break
11:00 – 11:40 MySQL Fabric
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
11:40 – 12:20 Why Oracle Linux for MySQL deployments?
                        Discover how you can benefit from Oracle Linux and Oracle VM
12:20 – 12:30 Q&A and Closing

Don’t miss this opportunity to learn from the experts. Join us at a location near you!

Register here

Quick bulk load of data into InnoDB

A few weeks back I helped a customer reduce the time needed to bulk-load data into MySQL. At the time they were using a MySQL dump file (containing SQL statements) to populate their tables during nightly jobs.

By using the LOAD DATA INFILE command and creating the secondary indexes after the bulk load, load time went down by a factor of almost 2x.

My test environment:
DB: MySQL 5.6.23
OS: Ubuntu 14.04
HW: My Toshiba Portege laptop with 2 cores and SSD disk

Commands/tables used in tests:
CREATE TABLE t1PK (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32)) ENGINE=InnoDB;
CREATE TABLE t1 (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32) UNIQUE KEY) ENGINE=InnoDB;
LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1 FIELDS TERMINATED BY ',';

I created a file named 1000000 that contains two columns and 1 million rows:
$ head -5 1000000
0,test0
1,test1
2,test2
3,test3
4,test4
.....

$ wc -l 1000000
1000000 1000000
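If you want to reproduce the test, a file like this can be generated with a short shell one-liner (a sketch; the original file may have been created differently):

```shell
# Generate 1,000,000 comma-separated rows: 0,test0 ... 999999,test999999
seq 0 999999 | awk '{ printf "%d,test%d\n", $1, $1 }' > 1000000

# Sanity-check the result
head -2 1000000
wc -l 1000000
```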

The next step is to run some tests. If you do not have a BBWC (battery-backed write cache) on your disk system, you might consider setting innodb_flush_log_at_trx_commit to 2 during bulk loads.

First let's create the full table and import the 1000000 file to get a baseline:
mysql> set global innodb_flush_log_at_trx_commit=2;
mysql> CREATE TABLE t1 (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32) UNIQUE KEY) ENGINE=InnoDB;
mysql> LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1 FIELDS TERMINATED BY ',';
mysql> set global innodb_flush_log_at_trx_commit=1;

Result: Loaded 1,000,000 rows in 4.3 seconds (average over 10 runs)

Next, let's try to load the file into a table with only a PK and then add the index afterwards:
mysql> set global innodb_flush_log_at_trx_commit=2;
mysql> CREATE TABLE t1PK (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32)) ENGINE=InnoDB;
mysql> LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1PK FIELDS TERMINATED BY ',';
mysql> ALTER TABLE t1PK ADD INDEX(art);
mysql> set global innodb_flush_log_at_trx_commit=1;
Result: Loaded 1,000,000 rows in 2.1 seconds (average over 10 runs), plus another 1.9 seconds to add the secondary index; total time about 4 seconds on average. The improvement is due to InnoDB Fast Index Creation.

Conclusion:
In my environment it took almost 6 seconds to load a dump file (using innodb_flush_log_at_trx_commit=2) of table t1 into MySQL. This will depend on how many keys you have and many other factors, but as you can see, the LOAD DATA INFILE command is a fast way to load data into MySQL!

Friday, February 20, 2015

Getting back the CREATE TABLE speed of MySQL 5.5 in MySQL 5.6!

I visited a customer a few weeks back and saw a regression during an upgrade to MySQL 5.6. The problem appeared during the initial setup of the database: the CREATE TABLE statements were running much slower on MySQL 5.6 than on MySQL 5.5.

I created a simple test case with one SQL file containing 1000 CREATE TABLE statements of the form: CREATE TABLE TNNNN (i int, name VARCHAR(12)) ENGINE=InnoDB;
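For reference, such a file can be generated with a short shell loop (a sketch of one way to do it; the zero-padded table names T0001..T1000 are my own choice):

```shell
# Generate /tmp/1000tables: 1000 CREATE TABLE statements, T0001 .. T1000
for i in $(seq -w 1 1000); do
  echo "CREATE TABLE T$i (i int, name VARCHAR(12)) ENGINE=InnoDB;"
done > /tmp/1000tables

# Sanity check: should report 1000 lines
wc -l /tmp/1000tables
```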

Tested MySQL Versions:
  • MySQL 5.5.42
  • MySQL 5.6.22

OS: Ubuntu 14.04
HW: My Toshiba Portege laptop with 2 cores and SSD disk

MySQL 5.5.42 (Default settings)
Let's first get our baseline by doing 10 runs: mysql test < /tmp/1000tables
Result: average execution time is 7.5 seconds

MySQL 5.6.22 (Default settings)
Let's do the same 10 runs: mysql test < /tmp/1000tables
Result: average execution time is 23 seconds, roughly 3x slower than MySQL 5.5

During this test we were mostly waiting on the disk system, but why is MySQL 5.6 so much slower than MySQL 5.5 at creating tables?
One thing that changed between MySQL 5.5 and MySQL 5.6 that might impact CREATE TABLE performance is file-per-table tablespaces (innodb_file_per_table), read more here.

Let's try going back to use one tablespace file for all tables and re-run test!

MySQL 5.6.22 (innodb_file_per_table=0)
Result: average execution time went down a bit, now at 16 seconds

This is still far behind MySQL 5.5; something else is consuming resources during CREATE TABLE.
Another feature added in MySQL 5.6 is persistent optimizer statistics, read more here.

MySQL 5.6.22 (innodb_stats_persistent=0)
Result: average execution time went down a bit, now at 15.5 seconds

Let's try both options together!

MySQL 5.6.22 (innodb_file_per_table=0 and innodb_stats_persistent=0)
Result: average execution time is back at 7.5 seconds!
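To make both settings permanent for the initial load, they can go into the my.cnf (a sketch; note that both variables can also be changed at runtime with SET GLOBAL, as innodb_file_per_table only affects tables created afterwards):

```ini
[mysqld]
# Use the shared system tablespace instead of one .ibd file per table
innodb_file_per_table = 0
# Use transient (pre-5.6 style) optimizer statistics
innodb_stats_persistent = 0
```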

Conclusion
For most applications I would not consider this a huge problem; it is something done once, and then you start working with the database. But for applications that CREATE/DROP tables as part of their normal workload, this might be important.
If you need to keep the CREATE TABLE performance of MySQL 5.5, and new features like file-per-table tablespaces and persistent optimizer statistics are not important to you, disable these features and you have the performance of MySQL 5.5 back again!

Thursday, February 19, 2015

Sweden MySQL User Group meeting in Stockholm, Tuesday, April 28th!

Hi all SMUG members!

It's time for another meetup, and the timing is just right for talking a bit about the upcoming 5.7 release.
We will be treated to two presentations this time. The first is given by Mattias Jonsson, developer on the InnoDB team, about partitioning in 5.7. There is a lot of news here; have a look at the server team's blog, http://mysqlserverteam.com/category/partitioning/
The second presentation is given by Martin Hansson, developer on the MySQL server team. The topic is the new cost model for query planning in MySQL 5.7, which is also a big change coming in 5.7. Read more here: http://mysqlserverteam.com/optimizer-cost-model-improvements-in-mysql-5-7-5-dmr/

This time King is providing the venue and some food, so if you walk around with Candy Crush on your phone it will surely feel like home! A big thank you to King!

Register for the event via Facebook or LinkedIn

See you on the 28th!

Thursday, February 12, 2015

Creating a minimal MySQL installation for embedded systems!

Over the last few MySQL releases the size of the MySQL package has increased, and it looks like the trend is continuing.
  • 687M MySQL 5.5.42
  • 1,1G MySQL 5.6.21
  • 1,3G MySQL 5.7.4
  • 1,5G MySQL 5.7.5
This guide is for a Linux installation. I have tested the instructions below to create a minimal installation on my Ubuntu 14.04 laptop. I know you can make it a bit smaller by running mysqld directly and only using one error message file, but this would not affect the total size much. I also added the mysql client so there is some way of logging into MySQL.

MySQL configuration file used to start MySQL:
[mysqld_safe]
ledir = /home/ted/labb/mini-mysql/bin

[mysqld]
port = 63301
socket = /tmp/mysql63301.sock
basedir = /home/ted/labb/mini-mysql
datadir = /home/ted/labb/mini-mysql/data
tmpdir = /tmp
pid-file = /tmp/mysql63301.pid

Create MySQL basedir where you want to store all binaries:
mkdir /home/ted/labb/mini-mysql

Go to basedir:
cd /home/ted/labb/mini-mysql

Create folders for binaries and database (normally your datadir would not be located in your basedir):
mkdir bin
mkdir data

Download MySQL "Linux - Generic (glibc 2.5) (x86, 64-bit), Compressed TAR Archive" from http://dev.mysql.com/downloads/mysql/
Copy files needed to start MySQL daemon and client program:
cp ~5.6.22/bin/mysql bin/
cp ~5.6.22/bin/mysqld bin/
cp ~5.6.22/bin/mysqld_safe bin/
cp -fr ~5.6.22/share .

We also need some binaries and files to create the mysql internal database; these can be removed later on:
cp ~5.6.22/scripts/mysql_install_db bin/
cp ~5.6.22/bin/resolveip bin/
cp ~5.6.22/support-files/my-default.cnf .

Create MySQL internal database:
./bin/mysql_install_db --datadir=./data/

Now we can remove the tools used for creating the internal mysql schema:
rm bin/mysql_install_db
rm bin/resolveip
rm my-default.cnf
rm my-new.cnf

Start server:
./bin/mysqld_safe --defaults-file=./my.cnf &

Log into server:
./bin/mysql -uroot -P63301 -h127.0.0.1

Total size of installation:
mysql$ du -sh *
88M     bin
3,2M    share

This is good news: with some minimal work it's still possible to get the binary footprint down below 100M.

Thursday, January 8, 2015

With the latest version of MySQL (5.7.5) the optimizer picks a covering index over non-covering indexes (Bug #18035906)

Covering Indexes not being chosen by optimizer

I noticed this problem about a year ago when writing this blog post. In short, the problem is that when you added a covering index while keeping the old non-covering index, the optimizer opted to use the old non-covering index. The only solution was to FORCE the optimizer to use the covering index, which meant you needed to modify your DML, or to remove the old index.

Use the same test setup as in my old blog post, but when you add the new covering index, do not drop the old one.

So, instead of running:
mysql> ALTER TABLE big DROP INDEX CountryCode;
mysql> ALTER TABLE big ADD INDEX conPop (CountryCode, Population);

We run only the statement that adds the new covering index and do not remove the old index:
mysql> ALTER TABLE big ADD INDEX conPop (CountryCode, Population);

With MySQL 5.6 you will see the following output from EXPLAIN:
mysql> EXPLAIN SELECT CountryCode, SUM(Population) from big group by CountryCode\G
           id: 1
  select_type: SIMPLE
        table: big
         type: index
possible_keys: CountryCode,conPop
          key: CountryCode         <-------- Wrong index
      key_len: 3
          ref: NULL
         rows: 259729
        Extra: NULL

The problem is that the optimizer is not using our new covering index even though it would be the fastest way to access the data. The only solutions are to drop the old index or to add FORCE INDEX to your SQL.
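For completeness, on MySQL 5.6 the FORCE INDEX workaround looks like this (a sketch against the same table as above; FORCE INDEX tells the optimizer to treat a table scan as very expensive compared to the named index):

```sql
-- Workaround on MySQL 5.6: force the covering index explicitly
SELECT CountryCode, SUM(Population)
FROM big FORCE INDEX (conPop)
GROUP BY CountryCode;
```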

With the latest version of MySQL (5.7.5) there is a small release note here stating: "If the optimizer chose to perform an index scan, in some cases it could choose a noncovering rather than a covering index. (Bug #18035906)".

Let's see the output from EXPLAIN in the same scenario as described above, now with MySQL 5.7.5:
mysql> EXPLAIN SELECT CountryCode, SUM(Population) from big group by CountryCode\G
           id: 1
  select_type: SIMPLE
        table: big
   partitions: NULL
         type: index
possible_keys: CountryCode,conPop
          key: conPop             <--------- Covering index
      key_len: 7
          ref: NULL
         rows: 259729
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0,00 sec)

Great news: with MySQL 5.7.5 the optimizer now picks the covering index, and the query is 3x quicker!

Saturday, October 4, 2014

Our session at OOW 2014: MySQL@King

This session describes how MySQL empowers the games at King, particularly how the explosive growth of Candy Crush Saga was handled and how the constant high load is served. It discusses what makes King’s applications scale, what problems the company has solved, what it is currently solving, and what it will be solving tomorrow. As the presentation describes King’s architecture, it highlights its strengths, weaknesses, major pain points, and, of course, greatest successes.

Download presentation here
http://www.slideshare.net/TedWennmark/mysqlking