July 30, 2014

Handling big result sets

Sometimes you need to handle a lot of rows on the client side. The usual way is to send the query via mysql_query() and then handle the result in a mysql_fetch_array() loop (here I use PHP functions, but they are common or similar for all APIs, including C). Consider a table: […]
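As a sketch of that usual pattern, here is a minimal example using the MySQL C API, which, as the post notes, mirrors the PHP calls; the connection details, table name and column names are placeholders, not taken from the post:

    /* The "usual way": run the query, buffer the whole result set on the
       client with mysql_store_result(), then iterate over it row by row. */
    #include <mysql/mysql.h>
    #include <stdio.h>

    int main(void) {
        MYSQL *conn = mysql_init(NULL);
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "test", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }
        if (mysql_query(conn, "SELECT id, val FROM t")) {
            fprintf(stderr, "query failed: %s\n", mysql_error(conn));
            return 1;
        }
        MYSQL_RES *res = mysql_store_result(conn);  /* whole result set held in client memory */
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != NULL)
            printf("%s %s\n", row[0], row[1]);
        mysql_free_result(res);
        mysql_close(conn);
        return 0;
    }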

InnoDB redo log archiving

Percona Server 5.6.11-60.3 introduces a new “log archiving” feature, and Percona XtraBackup 2.1.5 supports “apply archived logs.” What does this mean and how can it be used? Percona products offer three kinds of incremental backups. The first is a full scan of the data files, comparing their contents with the backup data to find the delta. This approach […]
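As a minimal my.cnf sketch of switching the feature on, assuming the innodb_log_archive and innodb_log_arch_dir server variables shipped with this Percona Server release; the directory path is a placeholder:

    # my.cnf (Percona Server 5.6.11-60.3 or later); variable names assumed from the feature
    [mysqld]
    innodb_log_archive  = ON                    # enable redo log archiving
    innodb_log_arch_dir = /var/lib/mysql-arch   # where archived logs are written (placeholder path)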

How to obtain the “LES” (Last Executed Statement) from an Optimized Core Dump?

Ever run into a situation where you saw “some important variable you really needed to know about=<optimized out>” while debugging?
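For context, a tiny sketch (not from the post) that can reproduce the situation: compiled with optimization, the local below may live only in a register or be folded away, so gdb can show it as <optimized out> when you inspect the core dump.

    /* crash.c: build with "gcc -g -O2 crash.c -o crash".
       With -O2 the optimizer may keep "answer" only in a register, so gdb
       can print it as <optimized out> when examining the core dump. */
    #include <stdio.h>

    static int compute(int x) {
        int answer = x * 42;   /* the variable you "really needed to know about" */
        int *p = NULL;
        *p = answer;           /* deliberate crash to produce a core dump */
        return answer;
    }

    int main(void) {
        printf("%d\n", compute(7));
        return 0;
    }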

Helgrinding MySQL with InnoDB for Synchronisation Errors, Fun and Profit

It is no secret that bugs related to multithreading (deadlocks, data races, starvation, etc.) have a big impact on an application’s stability and are at the same time hard to find due to their nondeterministic nature. Any tool that makes finding such bugs easier, preferably before anybody is aware of their existence, is very welcome.
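To make the kind of bug Helgrind hunts for concrete, here is a minimal sketch (not from the post) of a data race that valgrind --tool=helgrind will flag: two threads doing an unsynchronised read-modify-write on a shared counter.

    /* race.c: build with "gcc -g -pthread race.c -o race",
       then check with "valgrind --tool=helgrind ./race". */
    #include <pthread.h>
    #include <stdio.h>

    static long counter;                 /* shared, not protected by any lock */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                   /* unsynchronised read-modify-write: a data race */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }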

MySQL net_write_timeout vs wait_timeout and protocol notes

In my previous post I mentioned that you might need to increase net_write_timeout to avoid the connection being aborted, and now I think I should have explained that better. MySQL uses many different timeout variables at different stages. For example, while a connection is being established, connect_timeout is used. When the server waits for another query […]
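A quick way to inspect and adjust these settings from any MySQL client; the 600-second value is just an example, not a recommendation from the post:

    -- List the timeout-related variables for the current session
    SHOW SESSION VARIABLES LIKE '%timeout%';

    -- Raise net_write_timeout (default 60 seconds) for a session that
    -- streams a large result set to a slow client
    SET SESSION net_write_timeout = 600;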