July 29, 2014

Statistics of InnoDB tables and indexes available in xtrabackup

If you have ever wondered how big this or that index in InnoDB is … you had to calculate it yourself by multiplying the row size (which, I should add, is harder in the case of a VARCHAR, since you need to estimate the average length) by the record count. And it still would be quite […]
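As a rough illustration of the manual estimate described above, here is a minimal SQL sketch; the table and column names (my_table, varchar_col) are hypothetical:

-- Rough manual estimate of an index on a VARCHAR column:
-- estimated size ≈ average key length * number of rows (ignores page overhead).
SELECT COUNT(*)                            AS row_count,
       AVG(LENGTH(varchar_col))            AS avg_key_len_bytes,
       COUNT(*) * AVG(LENGTH(varchar_col)) AS estimated_index_bytes
FROM my_table;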

Why Index could refuse to work?

Have you ever seen an index that refused to be used even though there is every reason for it to work (at first glance):

Why on earth would the index not be used, you might think, even though MySQL mentions it in “possible_keys”? Should you try to force it?
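For reference, forcing an index in MySQL looks like the sketch below; the table and index names (orders, idx_customer) are hypothetical:

-- FORCE INDEX makes the optimizer use the named index whenever possible;
-- a table scan is chosen only if the index cannot satisfy the query at all.
SELECT * FROM orders FORCE INDEX (idx_customer)
WHERE customer_id = 42;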

Indexes in MySQL

MySQL does not always make the right decision about index usage. Consider a simple table:

The total row count is 250001 (V1); the count of rows matching the condition ID1=1 is 83036 (V2), with an execution time of 110 ms. That is, the index selectivity for the condition (ID1=1) is V2/V1 = 0.3321, or 33.21%. It is said (e.g. in the book “SQL Tuning”) that if selectivity is over 20% then a full table […]
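A minimal sketch of the two counts behind those numbers, assuming a hypothetical table t with an indexed column ID1:

SELECT COUNT(*) FROM t;                -- V1 = 250001 total rows
SELECT COUNT(*) FROM t WHERE ID1 = 1;  -- V2 = 83036 matching rows (~110 ms)
-- selectivity = V2 / V1 = 83036 / 250001 = 0.3321, i.e. 33.21%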

Concatenating MyISAM files

Recently, I found myself involved in the migration of a large read-only InnoDB database to MyISAM (eventually packed). The only issue was that for one of the tables, we were talking about 5 TB of data and 23 billion rows. Not small… I calculated that something like insert into MyISAM_table… select * from Innodb_table… would take […]
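The naive approach the excerpt rules out would look roughly like the sketch below, with hypothetical table names and the MyISAM target created up front:

-- Create a MyISAM copy of the structure, then copy everything in one statement;
-- for 23 billion rows this single-threaded copy is prohibitively slow.
CREATE TABLE myisam_table LIKE innodb_table;
ALTER TABLE myisam_table ENGINE=MyISAM;
INSERT INTO myisam_table SELECT * FROM innodb_table;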

Connecting orphaned .ibd files

There are two ways InnoDB can organize tablespaces. The first is when all data, indexes and system buffers are stored in a single tablespace. This is typically one or several ibdata files. The well-known innodb_file_per_table option brings the second one: tables and system areas are split into different files. Usually the system tablespace is located in […]
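As a reminder of how the second layout is switched on, a minimal SQL sketch (the variable is dynamic in reasonably recent MySQL versions; only tables created afterwards get their own .ibd file):

SHOW VARIABLES LIKE 'innodb_file_per_table';  -- current setting
SET GLOBAL innodb_file_per_table = 1;         -- new tables go to per-table .ibd files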

Analyzing air traffic performance with InfoBright and MonetDB

By coincidence, Baron and I played with InfoBright (see http://www.mysqlperformanceblog.com/2009/09/29/quick-comparison-of-myisam-infobright-and-monetdb/) this week. And following Baron’s example, I also ran the same load against MonetDB. Reading the comments to Baron’s post, I tried to load the same data into LucidDB, but I was not successful in this. I tried to analyze a bigger dataset, and I took public […]

To SQL_CALC_FOUND_ROWS or not to SQL_CALC_FOUND_ROWS?

When we optimize clients’ SQL queries, I pretty often see queries with the SQL_CALC_FOUND_ROWS option used. Many people think that it is faster to use this option than to run two separate queries: one to get a result set, another to count the total number of rows. In this post I’ll try to check whether […]
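For context, the two approaches being compared look roughly like this (table and column names are hypothetical):

-- Option 1: SQL_CALC_FOUND_ROWS plus FOUND_ROWS()
SELECT SQL_CALC_FOUND_ROWS * FROM articles WHERE published = 1 LIMIT 10;
SELECT FOUND_ROWS();

-- Option 2: two separate queries
SELECT * FROM articles WHERE published = 1 LIMIT 10;
SELECT COUNT(*) FROM articles WHERE published = 1;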