MySQL 5.6 Configuration Optimization

We had a wonderful time during the Sept. 25 webinar, “MySQL 5.6 Configuration Optimization,” and I got many more questions than I could answer during the hour. So here is a followup with answers to the most interesting questions you asked. (You can also watch a recording of the entire webinar here.)

Q: What is the impact of having innodb_stats_on_metadata=off in terms of seeing actual table size in information schema?

A: In MySQL 5.6 this option is off by default. If you disable it in an earlier version you will still see the actual table and index sizes, as those are not the statistics this variable controls. It controls the index statistics in terms of index cardinality – the same statistics usually updated by ANALYZE TABLE. I have found this to be a very safe and beneficial change.
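A minimal sketch of trying this change on a running server (the `mydb.mytable` names are placeholders, not from the original post):

```sql
-- Disable metadata-triggered statistics updates (already OFF by default in 5.6):
SET GLOBAL innodb_stats_on_metadata = OFF;

-- With that disabled, refresh index cardinality statistics explicitly
-- when needed (table name is just an example):
ANALYZE TABLE mydb.mytable;

-- Table and index sizes in INFORMATION_SCHEMA remain accurate either way:
SELECT table_name, data_length, index_length
FROM information_schema.tables
WHERE table_schema = 'mydb';
```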

Q: You recommend having large innodb_log_file_size values in certain cases, yet I get an error if I set innodb_log_file_size=4G

A: Only MySQL 5.6, Percona Server 5.5 and MariaDB 5.5 support a combined InnoDB log file size over 4GB, so check the software version you’re using. Also be careful to check the crash recovery time with a large total InnoDB log file size to make sure you can recover in the time you need.
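Note that the limit applies to the combined size across all log files. A hedged my.cnf sketch (the values are illustrative, not a recommendation):

```ini
[mysqld]
# Combined redo log size = innodb_log_file_size * innodb_log_files_in_group.
# 2 x 4G = 8G combined: allowed in MySQL 5.6, Percona Server 5.5 and
# MariaDB 5.5, but over the 4GB combined cap of stock MySQL 5.5 and earlier.
innodb_log_file_size      = 4G
innodb_log_files_in_group = 2
```

Remember that changing the redo log size requires a clean shutdown before restarting with the new values.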

Q: Which storage engine is good for OLAP workload

A: It depends. At medium scale InnoDB can perform quite well. TokuDB is a storage engine that works very well if you need high compression, constantly insert a lot of data, and run well-indexed queries. For certain workloads, columnar storage engines such as Infobright and InfiniDB make sense.

Q: What parameters to use to get quick load times for 10TB+ database with mysqldump

A: Textual dumps can take quite a while to load under such conditions, and large innodb_buffer_pool_size and innodb_log_file_size will be very important. Some people use an extra large InnoDB log file size to load data and then resize it smaller for normal operation to facilitate faster crash recovery. Check out the –innodb-optimize-keys option for the mysqldump shipped with Percona Server – this option allows creating InnoDB indexes after the data has been loaded, which can be 10x faster or more for large tables. Also increase innodb_sort_buffer_size. Finally, check out mydumper, which can dump and load data in parallel and can be a lot faster.
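A hedged my.cnf sketch of load-time settings along these lines – the sizes are illustrative only and should be adjusted to your hardware:

```ini
[mysqld]
# Illustrative bulk-load settings -- size to your own hardware:
innodb_buffer_pool_size  = 128G   # as much RAM as you can spare
innodb_log_file_size     = 4G     # extra large for the load; shrink afterwards
innodb_sort_buffer_size  = 64M    # speeds up building indexes after the load
innodb_flush_log_at_trx_commit = 2  # relax durability for the duration of the load
```

For the parallel route, mydumper and its companion myloader accept a thread count (e.g. `-t 8`), which is where the speedup over single-threaded mysqldump comes from.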

Q: What are differences for settings for reads vs writes which you can set per session?

A: Most settings are not just about reads vs. writes but also about complex read queries vs. simple ones (traversing a small data set). In MySQL 5.6 you can use START TRANSACTION READ ONLY to start a multi-statement read transaction and it will be handled optimally. You do not need it for auto-commit SELECTs as MySQL is able to figure that out by itself. sort_buffer_size is perhaps the most important variable to set per session for large queries, as simply setting it to a large value by default will negatively impact the performance of small queries.
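A sketch of the per-session pattern (the 256M value and the query are illustrative only):

```sql
-- Raise sort_buffer_size only for this connection, run the big query,
-- then restore the default so small queries are not penalized:
SET SESSION sort_buffer_size = 256 * 1024 * 1024;
SELECT customer_id, SUM(amount) AS total
FROM orders GROUP BY customer_id ORDER BY total DESC;  -- hypothetical query
SET SESSION sort_buffer_size = DEFAULT;

-- MySQL 5.6: an explicitly read-only multi-statement transaction
START TRANSACTION READ ONLY;
-- ... several read statements ...
COMMIT;
```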

Q: Is 16G min requirement for database service as best practice?

A: There is no specific number of GB of memory that counts as best practice – it depends on the database size, workload, performance needs, etc. If you have a small database which already fits well in the buffer pool, you are unlikely to get much improvement from adding more memory. For small databases we frequently see MySQL being used on virtual instances with 2-4GB (it is hard to find physical hardware with so little memory these days). For high performance databases, 256GB or so is a cost efficient amount these days – 16GB DIMM chips are cost efficient, while larger ones cost a lot more per GB at the time of this writing.

Q: What are the most critical parameters for multicore servers for example 32CPUs

A: MySQL 5.6 comes well tuned for multi-core systems and I would suggest watching for contention before pursuing further changes. Some of the options you might need to change are innodb_buffer_pool_instances, metadata_locks_hash_instances, table_open_cache_instances, innodb_sync_array_size and innodb_spin_wait_delay. Check the slides for more specific advice.
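A hedged my.cnf sketch of starting points for a 32-core box – these numbers are assumptions for illustration, so verify against your own contention profile first:

```ini
[mysqld]
# Illustrative starting values for a 32-core server -- check
# SHOW ENGINE INNODB STATUS and performance_schema before changing:
innodb_buffer_pool_instances  = 8
metadata_locks_hash_instances = 8    # MySQL 5.6 only
table_open_cache_instances    = 8    # MySQL 5.6 only
innodb_sync_array_size        = 4
innodb_spin_wait_delay        = 6    # raise if spin-lock contention is high
```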

Q: Query Cache is turned off but we still see commits taking very long time around 50-60 seconds what could be the case?

A: The settings which impact commit performance the most are innodb_flush_log_at_trx_commit, sync_binlog and sync_relay_log (in MySQL 5.6). The impact can depend on many factors including hardware, filesystem and workload. Generally, if you want a highly durable configuration (setting all these options to 1) you want to ensure you have RAID with a write-back cache or SSD. If you have slow storage you might need to use the less safe settings of 2, 0 and 0 (respectively), which however may result in some of the latest transactions being lost and the binary log being inconsistent with the database transactional state.
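The two configurations contrasted as a my.cnf sketch:

```ini
[mysqld]
# Fully durable (wants RAID with write-back cache or SSD to commit quickly):
innodb_flush_log_at_trx_commit = 1
sync_binlog                    = 1
sync_relay_log                 = 1   # MySQL 5.6

# Relaxed alternative for slow storage -- much faster commits, but the latest
# transactions can be lost and the binlog can diverge after a crash:
# innodb_flush_log_at_trx_commit = 2
# sync_binlog                    = 0
# sync_relay_log                 = 0
```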

Q: Is there a tool to run against older version of database to check if your schema structure and data are compatible to upgrade

A: The schema itself is typically less of a problem than the database queries, though there have been compatibility issues with it too. A quick check for the schema is to do mysqldump -d on the old version to dump the schema only, then load it into a test server running the newer version. You can also do a full mysqldump and load it back to check whether the data can be loaded. Connecting the upgraded version as a slave and running pt-table-checksum from Percona Toolkit will show whether the data is logically consistent. Finally, running pt-upgrade from the same toolkit can help you validate query compatibility.
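The steps above can be sketched on the command line – host names, file names and credentials below are placeholders, and exact pt-upgrade invocation may vary by Percona Toolkit version:

```shell
# 1. Schema-only check: dump DDL from the old server, load on the new one.
mysqldump -d --all-databases -h old-host > schema.sql
mysql -h new-host < schema.sql

# 2. Full data check: dump everything and verify it loads.
mysqldump --all-databases -h old-host > full.sql
mysql -h new-host < full.sql

# 3. With new-host replicating from old-host, verify logical consistency.
pt-table-checksum h=old-host

# 4. Replay logged queries against both versions to compare behavior.
pt-upgrade slow.log h=old-host h=new-host
```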

Thank you for attending. I’m doing another Webinar – Running MySQL On Linux, this Wednesday (October 9). See you there!
