April 20, 2014

Why MySQL could be slow with large tables?

If you’ve been reading enough database-related forums, mailing lists, or blogs, you have probably heard complaints about MySQL being unable to handle more than 1,000,000 (or pick any other number) rows from some users. On the other hand, it is well known that, with customers like Google, Yahoo, LiveJournal, and Technorati, MySQL has installations with many billions […]

Finding out largest tables on MySQL Server

Finding the largest tables on a MySQL instance is a no-brainer in MySQL 5.0+ thanks to the Information Schema, but I still wanted to post the little query I use for the purpose so I can easily find it later; plus, it is quite handy in the way it presents the information:
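The query itself is cut off above; as a reference point, here is a minimal sketch of this kind of Information Schema query (the LIMIT and the megabyte rounding are my own choices, not necessarily the post’s):

-- Top 10 tables by total size (data + indexes), largest first.
SELECT table_schema,
       table_name,
       engine,
       table_rows,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS total_mb
  FROM information_schema.tables
 WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
 ORDER BY data_length + index_length DESC
 LIMIT 10;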

PHP Large result sets and summary tables.

We’re working with a web site that is preparing for massive growth. To make sure it handles large data sets, part of the process is generating a test database of significant size, as testing your application on a table with 1,000 rows may well give you a very dangerous false sense of security. One of the processes we […]
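One common way to bulk up such a test table is repeated self-insertion, which roughly doubles the row count on every pass; a sketch using a hypothetical test_data table (not the site’s actual schema):

CREATE TABLE test_data (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_name  VARCHAR(64)  NOT NULL,
  created_at DATETIME     NOT NULL
) ENGINE=InnoDB;

-- Seed a few rows.
INSERT INTO test_data (user_name, created_at)
VALUES ('alice', NOW()), ('bob', NOW()), ('carol', NOW());

-- Re-run this statement until the table is as large as you need;
-- each run doubles the number of rows and varies the values a little.
INSERT INTO test_data (user_name, created_at)
SELECT CONCAT(user_name, '_', id), created_at - INTERVAL (id % 365) DAY
  FROM test_data;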

Many-table joins in MySQL 5.6

I recently worked on an uncommon slow query: fewer than 100 rows were read and returned, the whole dataset fit in memory, and yet the query took several seconds to run. Long story short: the query was a join involving 21 tables, running on MySQL 5.1. But by default MySQL 5.1 is not good at […]
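For background, one server variable that often matters for joins over this many tables is the join-order search depth; a hedged sketch (the 21-table query itself is not shown here):

-- The optimizer's exhaustive join-order search grows very quickly with the
-- number of joined tables and can dwarf actual execution time.
-- 0 tells the server to pick a reasonable search depth automatically;
-- a small fixed value (e.g. 4) caps planning time at some cost in plan quality.
SET SESSION optimizer_search_depth = 0;

-- Then EXPLAIN or re-run the many-table join in the same session and compare.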

How to monitor ALTER TABLE progress in MySQL

While working on a recent support issue as a Percona Support Engineer, I got a question from a customer asking how to monitor ALTER TABLE progress. Actually, for MySQL 5.5 and prior versions, it’s quite difficult to ALTER a table in a running production environment, especially a large table (with millions of records), because it will rebuild and lock the table […]
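The post’s own approach is truncated above; as an aside, on MySQL 5.7 and later the Performance Schema can report ALTER TABLE progress directly, along these lines:

-- Enable the InnoDB ALTER stage instruments and the stage consumers.
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'stage/innodb/alter%';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_stages%';

-- While the ALTER TABLE runs in another session, poll its progress:
SELECT event_name, work_completed, work_estimated
  FROM performance_schema.events_stages_current;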

How to recover table structure from .frm files with MySQL Utilities

Table structures are stored in .frm files and in the InnoDB Data Dictionary. Sometimes, usually in data recovery cases, we need to recover those structures to be able to find the lost data or simply to recreate the tables. There are different ways to do it, and we’ve already written about it in this blog. […]

InnoDB scalability issues due to tables without primary keys

Each day there is probably work being done to improve the performance of the InnoDB storage engine and to remove bottlenecks and scalability issues. Here is another one I wanted to highlight: scalability issues due to tables without primary keys. This issue is caused by the use of tables without primary keys and typically shows […]
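A quick way to spot such tables is an Information Schema check; a sketch (adjust the schema filter to your environment):

-- List InnoDB tables that have no PRIMARY KEY defined.
SELECT t.table_schema, t.table_name
  FROM information_schema.tables t
  LEFT JOIN information_schema.table_constraints c
         ON c.table_schema    = t.table_schema
        AND c.table_name      = t.table_name
        AND c.constraint_type = 'PRIMARY KEY'
 WHERE t.engine = 'InnoDB'
   AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
   AND c.constraint_name IS NULL;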

How to reclaim space in InnoDB when innodb_file_per_table is ON

When innodb_file_per_table is OFF, all data is stored in the ibdata files. If you drop some tables or delete some data, there is no way to reclaim that unused disk space other than the dump/reload method. When innodb_file_per_table is ON, each table stores its data and indexes in its own tablespace file. […]
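For reference, with innodb_file_per_table ON the space is reclaimed by rebuilding the table, which shrinks its .ibd file; a sketch with a hypothetical table name:

-- Rebuilds the table and its .ibd file, returning freed space to the filesystem.
-- For InnoDB, OPTIMIZE TABLE is effectively a table rebuild plus an ANALYZE.
OPTIMIZE TABLE mydb.big_table;

-- An explicit rebuild does the same job:
ALTER TABLE mydb.big_table ENGINE=InnoDB;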

Want to archive tables? Use Percona Toolkit’s pt-archiver

Percona Toolkit’s pt-archiver is one of the best utilities to archive records from large tables to other tables or files. One interesting thing is that pt-archiver is a read-write tool: it deletes data from the source by default, so after archiving you don’t need to delete it separately. As this is done by default, you […]

Percona MySQL Webinar: Really Large Queries: Advanced Optimization Techniques, Feb. 27

Do you have a query you never dared to touch? Do you know it’s bad, but it’s needed? Does it fit your screen? Does it really have to be that expensive? Do you want to do something about it? During the next Percona webinar on February 27, I will present some techniques that can be […]