Monday, 1 April 2019

MySQL Error: Too many connections!

When your application gets the error "too many connections", the underlying problem might be caused by multiple things. For an in-depth understanding of how MySQL handles connections, read this blog.

In this post we will investigate why this happens and how to solve these problems when running MySQL on Debian/Ubuntu under systemd (the default MySQL packages).

OS: Debian 9
MySQL Server version: 5.7.25 MySQL Community Server (GPL)
(will most likely be the same for MySQL 8.0 versions)

The goal is to have 10,000 working connections to MySQL!

The default value for max_connections is 151 connections, so the first step is to increase the max_connections variable to 10,000.
This is documented in the manuals here:
- https://dev.mysql.com/doc/refman/5.7/en/too-many-connections.html
- https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections

max_connections is a dynamic setting, so let's increase the value to 10,000 and see what happens.
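
Since we also want the value to survive a restart, I set it in the MySQL configuration file and restart the server. A minimal sketch, assuming the Debian config file /etc/mysql/mysql.conf.d/mysqld.cnf (your path may differ); the setting goes under the [mysqld] section:

root@debian:~# grep max_connections /etc/mysql/mysql.conf.d/mysqld.cnf
max_connections = 10000
root@debian:~# systemctl restart mysql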

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
214

Hmm, it looks like we only got 214 connections. Let's look at the error log:

2019-04-01T06:29:48.301871Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 50000)
2019-04-01T06:29:48.302000Z 0 [Warning] Changed limits: max_connections: 214 (requested 10000)
2019-04-01T06:29:48.302004Z 0 [Warning] Changed limits: table_open_cache: 400 (requested 2000)

It looks like we hit a resource limit.
Let's look in the MySQL manual for running MySQL under systemd; it can be found here.
It looks like we need to increase the number of open files allowed for MySQL. Locate the systemd configuration folder for MySQL and create the file /etc/systemd/system/mysql.service.d/override.conf (the file can be called anything as long as it ends with .conf). You can also create or modify the override file with the native systemctl command: systemctl edit mysql

Add LimitNOFILE=10000 to the override.conf file like this:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=10000

After this we need to reload the systemd daemon and restart the MySQL service:

root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql

MySQL is now saying we have 9190 connections:

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
9190

So, MySQL uses some file descriptors for other work, and we need to set the limit a bit higher to get 10,000 connections. Let's set it to 11,000, reload the systemd daemon and restart the MySQL service.
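
With the raised limit the override file now looks like this:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000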

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
mysql: [Warning] Using a password on the command line interface can be insecure.
@@max_connections
10000

Good, now we have 10,000 connections available according to MySQL.

Let's run our application and verify that we can get 10,000 connections. I use a small Perl script to open 10,000 connections.
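
I will not include the whole Perl script here, but the idea is just to open a lot of connections and keep them open. A rough shell sketch of the same idea (for illustration only; it starts one mysql client process per connection and is quite heavy on the client machine, and you should adjust user and password to your environment):

#!/bin/bash
# Open N connections and keep each one open by running a long SELECT SLEEP().
N=10000
for i in $(seq 1 "$N"); do
  mysql -uroot -proot -e "SELECT SLEEP(600)" &
done
wait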

Just below 5,000 connections I get a new error in the application: "Can't create a new thread (errno 11)".
Let's have a look at the MySQL error log:

root@debian:~# tail -1 /var/log/mysql/error.log
2019-04-01T06:50:35.657397Z 0 [ERROR] Can't create thread to handle new connection(errno= 11)

I found this new limit by running the command below while the Perl script was running:
watch -n.5 "mysql -uroot -proot -se'show status like \"%threads%\"'"

Strange, where is this new limitation just below 5,000 connections coming from?

Looking at the resource limits for my MySQL daemon, I should have close to 8000 processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes
Max open files            11000                11000                files

Let's look at the status report for my MySQL service:

root@debian:~# systemctl status mysql | grep Tasks
    Tasks: 127 (limit: 4915)

There seems to be an additional limit on Tasks that limits me to 4915 connections.
Let's expand our override.conf configuration to allow 11,000 tasks as well.

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
TasksMax=11000

(remember to reload systemd and restart the MySQL service after each change to override.conf)

Now we get just under 8,000 connections and hit the same error "Can't create a new thread (errno 11)", but this time it's because of the limit on max processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes

Let's increase this limit to 11,000 in our override.conf:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
LimitNPROC=11000
TasksMax=11000

After reloading the systemd configuration and restarting the MySQL service I can now get 10,000 connections, and the Perl script runs without any errors!

Summary:
There are different limits when setting max_connections in MySQL:
- The default value of max_connections is 151.
- At 214 connections you are limited by the max number of open files (LimitNOFILE).
- At 4915 connections you are limited by the systemd TasksMax setting.
- Just below 8000 connections you are limited by the max number of processes (LimitNPROC).

By adding an override file (as shown above) you can overcome all these limits.

Remember to:
- Look at the error messages in your application and in the MySQL error log.
- Look at the output from: cat /proc/$( pgrep -o mysql )/limits
- Look at the output from: systemctl status mysql
- Test your application (or use my Perl script) and monitor that it works!
- Monitor how many connections you have: mysql -uroot -p -e'show status like "Threads_connected"'

Wednesday, 9 March 2016

MySQL footprint less than 10MB


The MySQL binary packages have been growing a lot over the last GA versions. I wrote a blog post on creating a minimal installation a bit over a year ago, and since then we have seen some interesting posts by Mablomy and Giuseppe on creating smaller installations of MySQL for different purposes.

I will continue where Mablomy left off and use UPX to try to shrink the MySQL binary even more. UPX (Ultimate Packer for eXecutables) is a free and open-source executable packer supporting a number of file formats from different operating systems. If you want to read more, I recommend the link above or the manual page once you have installed upx.

Using the default installation of MySQL 5.7 and then running strip(1) to remove symbol information, the mysqld binary goes down in size from 240MB to 26MB.
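
Something like this, run from the bin directory of the MySQL installation (the exact path depends on where you installed MySQL):

bin$ strip mysqld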

The next step is to compress the MySQL binary using UPX.
I'm using Ubuntu 14.04 and installed UPX from the Ubuntu repositories with: sudo apt-get install upx
After this, run upx on the mysqld binary to shrink it further.

bin$ upx mysqld
                       Ultimate Packer for eXecutables
                          Copyright (C) 1996 - 2013
UPX 3.91        Markus Oberhumer, Laszlo Molnar & John Reiser   Sep 30th 2013
        File size         Ratio      Format      Name
   --------------------   ------   -----------   -----------
  26649816 ->   9145508   34.32%  linux/ElfAMD   mysqld        
-rwxr-xr-x 1 ted ted   9145508 mar  9 17:05 mysqld

The size of the mysqld binary is now 8.8MB!
By adding the --best option to upx I was able to get the binary down to 8.2MB.
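
Note that upx will refuse to pack an already packed file, so run this on a fresh copy of the stripped binary:

bin$ upx --best mysqld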

After shrinking the binary everything works as before; I started MySQL using mysqld_safe and it worked as expected. You can use upx to shrink all kinds of binaries.

EDIT: Great news, with MySQL 8.0.16 we now have minimal packages via our normal downloads, see more here: https://mysqlrelease.com/2019/05/the-amazing-shrinking-tarball/

Thursday, 26 November 2015

Lossless replication using MySQL semi-synchronous replication and MySQL Fabric!

In my last posts playing with MySQL Fabric, here and here, I used MySQL Fabric version 1.4.1 and MySQL 5.6. I must say a lot of things have happened over the last 20 months! MySQL has released several new versions of MySQL Fabric and also a new version of the server, MySQL 5.7.

The current GA version of MySQL Fabric is 1.5.6 and there is also a MySQL Fabric 1.6.2 beta; both versions can be downloaded from here.
As I said earlier, lots of new things have happened. It looks like the focus has been on making it easier to work with MySQL Fabric, but also on making the output from Fabric commands much easier to understand. The latest version, 1.6.2 (not yet GA), also has support for running multiple Fabric instances; more news can be found here.

Semi-synchronous replication
Semi-synchronous replication is an enhancement to normal replication that makes it possible to fail over to a slave server with minimal loss of data (this will depend on the durability settings of MySQL). In MySQL 5.7 we have introduced some changes that make this feature even better; more details can be found here. So, how can we use semi-synchronous replication together with MySQL Fabric? Well, it's not that hard to configure, just a few commands as explained in our manual, but let's try to upgrade the old MySQL Fabric environment and make it use semi-synchronous replication!

Upgrading environment to MySQL 5.7
Some things in the installation/configuration had to be modified when moving from MySQL 5.6 to 5.7:
  • The installation process has changed a lot (it is more secure by default); if you want the old behavior, use --initialize-insecure.
  • server-id is now mandatory if you enable binary logging; more information here and here.
For more information on what's new in MySQL 5.7, and on removed or deprecated functionality, read more here and here.

MySQL Fabric
The next step is to install MySQL Fabric. I will not cover the steps, as this was covered in my old posts and the procedure for the latest version of MySQL Fabric is explained in detail in our manual.

The scenario is: we have MySQL Fabric up and running and 3 standalone MySQL 5.7 servers waiting to be used by MySQL Fabric.

The first step is to create our group and populate it with our 3 MySQL servers. We also run the promote command to elect a primary server; the activate command enables automatic failover if the primary server dies.

mysqlfabric group create group1
mysqlfabric group add group1 127.0.0.1:63301
mysqlfabric group add group1 127.0.0.1:63302
mysqlfabric group add group1 127.0.0.1:63303
mysqlfabric group promote group1
mysqlfabric group activate group1

Let's look at the current status of our group of servers:

ted@ted-PORTEGE-Z30-A:/opt/MySQL-fabricII$ mysqlfabric group lookup_servers group1
Fabric UUID:  5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
                         server_uuid         address    status       mode weight
------------------------------------ --------------- --------- ---------- ------
8066541b-9423-11e5-ab31-7c7a910307fa 127.0.0.1:63301   PRIMARY READ_WRITE    1.0
839f04ac-9423-11e5-ab4d-7c7a910307fa 127.0.0.1:63302 SECONDARY  READ_ONLY    1.0
8681ea65-9423-11e5-ad0f-7c7a910307fa 127.0.0.1:63303 SECONDARY  READ_ONLY    1.0

All looks fine, our servers are up and running and the group is ready!

Enabling Semi-synchronous replication
This whole procedure can be done without restarting our secondary servers. For safety, we set the secondary server to SPARE mode so it will not be elected as primary in case of a failover while we are working with it.

On the primary server, load the semi-sync plugin and enable semi-sync replication:

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;

On one or both of the secondary servers: load the semi-sync plugin, enable semi-sync replication, set the server to SPARE mode, restart the replication IO thread and then set it back to SECONDARY.

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysqlfabric server set_status <server-uuid> SPARE
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;
mysqlfabric server set_status <server-uuid> SECONDARY

Verify that semi-synchronous replication is up and running on the slaves:

mysql> show status like  'Rpl_semi_sync_slave_status';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | ON    |
+----------------------------+-------+
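
You can also verify on the primary that the master side of semi-sync is enabled and sees its semi-sync slaves; Rpl_semi_sync_master_status should be ON and Rpl_semi_sync_master_clients should match the number of slaves you enabled:

mysql> show status like 'Rpl_semi_sync_master_status';
mysql> show status like 'Rpl_semi_sync_master_clients';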

We are done, you now have semi-sync replication up and running in your group!

My MySQL Fabric wish list for the future:
  • A feature request: semi-synchronous replication support built into MySQL Fabric.
  • A Fabric GUI section in Workbench for managing my groups of servers. I would also like this functionality to cover basic monitoring and help with architecting/deploying groups and handling failed servers.
  • Support for MySQL Group Replication in Fabric, so we have three different ways of replicating data between servers in our HA groups.

Monday, 2 March 2015

MySQL Tech Tour in Oslo Norway on March the 17th!

MySQL Tech Tour: Out of the Box MySQL High Availability - Performance - Scalability
March 17, 2015, Oslo, Norway

Did you know that the new MySQL Fabric delivers High Availability with automatic failure detection and failover? And that MySQL Fabric also enables scale-out with automated data sharding? Do you know how to take advantage of the MySQL SYS Schema?

Join us for this free MySQL Tech Tour to learn straight from the source how you can benefit from Oracle’s latest MySQL innovations. Our technical experts will help you understand how to take advantage of the wide range of new features and enhancements available in MySQL Fabric, MySQL 5.6, MySQL Cluster and other MySQL solutions. They will share tips & tricks to help you get the most out of your database. You will also discover what’s coming next in MySQL 5.7.

Agenda:
08:30 – 09:00 Registration & Welcome
09:00 – 09:30 Introduction and Latest News
                        Morten Andersen Oracle MySQL Technology Sales
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
09:30 – 09:50 MySQL EE and Enterprise Monitor demo
                        Morten Andersen Oracle MySQL Technology Sales Rep
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
10:00 – 10:40 MySQL Roadmap
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
10:40 – 11:00 Coffee break
11:00 – 11:40 MySQL Fabric
                        Ted Wennmark Oracle MySQL Principal Sales Consultant
11:40 – 12:20 Why Oracle Linux for MySQL deployments?
                        Discover how you can benefit from Oracle Linux and Oracle VM
12:20 – 12:30 Q&A and Closing

Don’t miss this opportunity to learn from the experts. Join us at a location near you!

Register here

Quick bulk load of data into InnoDB

Some weeks back I helped a customer lower the time needed to bulk-load data into MySQL; at the time they were using a MySQL dump file (containing SQL statements) to populate their tables during nightly jobs.

By using the LOAD DATA INFILE command and creating the secondary indexes after the bulk load, load time went down by a factor of almost 2x.

My test environment:
DB: MySQL 5.6.23
OS: Ubuntu 14.04
HW: My Toshiba Portege laptop with 2 cores and SSD disk

Commands/tables used in tests:
CREATE TABLE t1PK (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32)) ENGINE=InnoDB;
CREATE TABLE t1 (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32) UNIQUE KEY) ENGINE=InnoDB;
LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1 FIELDS TERMINATED BY ',';

I created a file named 1000000 that contains two columns and 1 million rows:
$ head -5 1000000
0,test0
1,test1
2,test2
3,test3
4,test4
.....

$ wc -l 1000000
1000000 1000000
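
The data file itself is not included in the post, but an equivalent file can be generated with a small one-liner, for example:

$ seq 0 999999 | awk '{print $1",test"$1}' > 1000000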

The next step is to run some tests. If you do not have a BBWC (battery-backed write cache) on your disk system, you might consider setting innodb_flush_log_at_trx_commit to 2 during the bulk load of data.

First let's create the full table and import the 1000000 file to get a baseline:
mysql> set global innodb_flush_log_at_trx_commit=2;
mysql> CREATE TABLE t1 (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32) UNIQUE KEY) ENGINE=InnoDB;
mysql> LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1 FIELDS TERMINATED BY ',';
mysql> set global innodb_flush_log_at_trx_commit=1;

Result: Loaded 1,000,000 rows in 4.3 seconds (average over 10 runs)

Next, let's try to load the file into the table with only a PK and then add the secondary index afterwards:
mysql> set global innodb_flush_log_at_trx_commit=2;
mysql> CREATE TABLE t1PK (i BIGINT UNSIGNED PRIMARY KEY, art VARCHAR(32)) ENGINE=InnoDB;
mysql> LOAD DATA INFILE '/home/ted/labb/load-data/1000000' INTO TABLE test.t1PK FIELDS TERMINATED BY ',';
mysql> ALTER TABLE t1PK ADD INDEX(art);
mysql> set global innodb_flush_log_at_trx_commit=1;
Result: Loaded 1,000,000 rows in 2.1 seconds (average over 10 runs) and another 1.9 seconds to add the secondary index, giving a total time of 4 seconds on average. This improvement is due to InnoDB Fast Index Creation.

Conclusion:
In my environment it took almost 6 seconds to load a dump file of table t1 into MySQL (using innodb_flush_log_at_trx_commit=2). This will depend on how many keys you have and many other factors, but as you can see, the LOAD DATA INFILE command is a fast way to load data into MySQL!

Friday, 20 February 2015

Getting back the CREATE TABLE speed of MySQL 5.5 in MySQL 5.6!

I visited a customer some weeks back and saw a regression problem during an upgrade to MySQL 5.6. The problem appeared during the initial setup of the database: the CREATE TABLE statements were running much slower on MySQL 5.6 compared to MySQL 5.5.

I created a simple test case where I create one SQL file containing 1000 CREATE TABLE statements using the following syntax: CREATE TABLE TNNNN (i INT, name VARCHAR(12)) ENGINE=InnoDB;
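
The SQL file itself is not included in the post; it can be generated with a small shell loop like this (assuming table names T0001 ... T1000 and the file path /tmp/1000tables used below):

$ for i in $(seq 1 1000); do printf 'CREATE TABLE T%04d (i INT, name VARCHAR(12)) ENGINE=InnoDB;\n' "$i"; done > /tmp/1000tables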

Tested MySQL Versions:
  • MySQL 5.5.42
  • MySQL 5.6.22

OS: Ubuntu 14.04
HW: My Toshiba Portege laptop with 2 cores and SSD disk

MySQL 5.5.42 (Default settings)
Let's first get our baseline by doing 10 runs of: mysql test < /tmp/1000tables
Result: average execution time is 7.5 seconds

MySQL 5.6.22 (Default settings)
Let's run the same test with 10 runs of: mysql test < /tmp/1000tables
Result: average execution time is 23 seconds, roughly three times slower than MySQL 5.5

During this test we were mostly spending time on the disk system, but why is MySQL 5.6 so much slower than MySQL 5.5 at creating tables?
One thing that changed between MySQL 5.5 and MySQL 5.6 that might impact the performance of CREATE TABLE is InnoDB file-per-table tablespaces; read more here.

Let's try going back to using one tablespace file for all tables and re-run the test!

MySQL 5.6.22 (innodb_file_per_table=0)
Result: average execution time went down a bit, now at 16 seconds

This is still far behind MySQL 5.5; something else is taking up resources during CREATE TABLE.
Another thing that was added in MySQL 5.6 is persistent optimizer statistics, read more here.

MySQL 5.6.22 (innodb_stats_persistent=0)
Result: average execution time went down a bit, now at 15.5 seconds

Let's try both options together!

MySQL 5.6.22 (innodb_file_per_table=0 and innodb_stats_persistent=0)
Result: average execution time is back at 7.5 seconds!
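
For reference, the combined run corresponds to having something like this in the MySQL configuration file (the two settings are the ones named above; the config file location depends on your installation):

[mysqld]
innodb_file_per_table = 0
innodb_stats_persistent = 0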

Conclusion
For most applications I would not consider this a huge problem; it is something done once, and then you start working on the database. But for applications that CREATE/DROP tables as part of their normal work, this might be important.
If you need to keep the MySQL 5.5 performance of your CREATE TABLE statements, and new features like file-per-table tablespaces and persistent optimizer statistics are not important to you, disable these features and you get the MySQL 5.5 performance back again!

Thursday, 19 February 2015

Sweden MySQL User Group meeting in Stockholm on Tuesday the 28th of April!

Hello all SMUG members!

It is time for a new meetup again, and the timing is just right to talk a bit about the upcoming 5.7 release.
We will be treated to two presentations this time. The first is held by Mattias Jonsson, developer in the InnoDB team, about partitioning in 5.7. There is a lot of news here; you can have a look at the server team's blog, http://mysqlserverteam.com/category/partitioning/
The second presentation is held by Martin Hansson, developer in the MySQL server team. The topic is the new cost model for query planning in MySQL 5.7, which is also a big change coming in 5.7. Read a bit more here: http://mysqlserverteam.com/optimizer-cost-model-improvements-in-mysql-5-7-5-dmr/

This time King is providing the venue and some food, so if you walk around with Candy Crush on your phone it will probably feel like home! A big thank you to King!

Register for the event via Facebook or LinkedIn

See you on the 28th!