April 17, 2014

How much overhead is caused by on disk temporary tables

As you might know, while running GROUP BY and some other kinds of queries MySQL needs to create temporary tables, which can be created either in memory, using the MEMORY storage engine, or on disk as MyISAM tables. Which one is used depends on the allowed tmp_table_size and also on the data which […]
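
A quick way to see how often this happens on a given server is to compare the relevant counters; the sketch below uses only standard MySQL status and system variables:

    -- How many implicit temporary tables were created, and how many of them went to disk
    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
    -- The in-memory size limit; the effective cap is the smaller of the two variables
    SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
    SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';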

How to reclaim space in InnoDB when innodb_file_per_table is ON

When innodb_file_per_table is OFF, all data is stored in the ibdata files. If you drop some tables or delete some data, there is no way to reclaim that unused disk space other than the dump/reload method. When innodb_file_per_table is ON, each table stores its data and indexes in its own tablespace file. […]
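
With innodb_file_per_table enabled, rebuilding a table shrinks its .ibd file down to the data it still contains. A minimal sketch, using a hypothetical table name:

    -- Rebuild the table; for InnoDB this maps to a recreate + analyze and reclaims unused space
    OPTIMIZE TABLE mydb.mytable;
    -- Equivalent rebuild expressed as an ALTER
    ALTER TABLE mydb.mytable ENGINE=InnoDB;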

Spreading .ibd files across multiple disks; the optimization that isn’t

Inspired by Baron’s earlier post, here is one I hear quite frequently – “If you enable innodb_file_per_table, each table is its own .ibd file. You can then relocate the heavily hit tables to a different location and create symlinks to the original location.” There are a few things wrong with this advice:

Finding what Created_tmp_disk_tables with log_slow_filter

Whilst working with a client recently I noticed a large number of temporary tables being created on disk.
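
For context, Percona Server lets you filter the slow query log down to exactly these events; the variable values below are written from memory of the Percona Server documentation, so treat the exact spelling as an assumption:

    -- Log only statements that created an on-disk temporary table
    SET GLOBAL slow_query_log = 1;
    SET GLOBAL long_query_time = 0;      -- so the time threshold does not hide them
    SET GLOBAL log_slow_filter = 'tmp_table_on_disk';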

Is disk Everything for MySQL Performance?

I read a very nice post by Matt today, and it has many good insights, though I can’t say I agree on all points. First, there are a lot of people out there who put it as “disk is everything.” Remember Paul Tuckfield saying “You should ask how many disks they have instead of how many […]

TMP_TABLE_SIZE and MAX_HEAP_TABLE_SIZE

We all know disk-based temporary tables are bad, and that you should try to have implicit temporary tables created in memory where possible. To do that, you should increase tmp_table_size to an appropriate value and avoid using BLOB/TEXT columns, which force table creation on disk because the MEMORY storage engine does not support them. Right? […]
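
The conventional tuning the excerpt refers to looks like the sketch below; the effective in-memory limit is MIN(tmp_table_size, max_heap_table_size), so both variables have to be raised together (the 64 MB value is just an illustration):

    -- Raise the in-memory temporary table limit for new connections
    SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
    SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;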

Why MySQL could be slow with large tables?

If you’ve been reading enough database-related forums, mailing lists, or blogs, you have probably heard complaints from some users about MySQL being unable to handle more than 1,000,000 (or pick any other number) rows. On the other hand, it is well known that with customers like Google, Yahoo, LiveJournal, and Technorati, MySQL has installations with many billions […]

Notes from the Newb

I’m relatively new to MySQL, having come from the world of embedded micro-databases, and though I’m pretty familiar with a number of database systems, I’ve discovered that I have a lot to learn about MySQL. As a new member of the Percona team, I thought I’d have an ongoing blog theme […]

Tools and tips for analysis of MySQL’s Slow Query Log

MySQL has a nice feature, the slow query log, which allows you to log all queries that exceed a predefined amount of time to execute. Peter Zaitsev first wrote about this back in 2006 – there have been a few other posts here on the MySQL Performance Blog since then (check this and this, too) but […]
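
As a starting point, the log can be enabled at runtime with a few variables; the file path below is only an example:

    -- Enable the slow query log and capture anything running longer than one second
    SET GLOBAL slow_query_log      = 1;
    SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';
    SET GLOBAL long_query_time     = 1;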

Introducing backup locks in Percona Server

TL;DR version: The backup locks feature introduced in Percona Server 5.6.16-64.0 is a lightweight alternative to FLUSH TABLES WITH READ LOCK and can be used to take both physical and logical backups with less downtime on busy servers. To employ the feature with mysqldump, use mysqldump --lock-for-backup --single-transaction. The next release of Percona XtraBackup will […]
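
For a rough idea of what the feature looks like at the SQL level, a manual backup session on Percona Server 5.6.16-64.0 or later might be sketched as follows (the copy step in the middle stands in for whatever backup tool you use):

    -- Block DDL and writes to non-transactional tables; InnoDB DML keeps running
    LOCK TABLES FOR BACKUP;
    -- ... take the physical copy or logical dump here ...
    UNLOCK TABLES;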