April 19, 2014

Ultimate MySQL variable and status reference list

I am constantly referring to the amazing MySQL manual, especially the option and variable reference table. But just as frequently, I want to look up blog posts on variables, or look for content in the Percona documentation or forums. So I present to you what is now my newest Firefox toolbar bookmark: an option and […]

MySQL Partitioning – can save you or kill you

I have wanted for a while to write about using MySQL Partitioning for Performance Optimization, and I just got a relevant customer case to illustrate it. First you need to understand how partitions work internally. At a low level, partitions are separate tables. This means that when you’re doing a lookup by the partitioning key you will look […]
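
To make the pruning behavior concrete, here is a minimal sketch (table and column names are made up for illustration): a table partitioned by range on a date column, where a lookup that includes the partitioning column only has to touch one of the underlying per-partition tables, as EXPLAIN PARTITIONS will confirm.

  -- Hypothetical table partitioned by year of the created column.
  CREATE TABLE events (
      id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      created DATE NOT NULL,
      payload VARCHAR(255),
      PRIMARY KEY (id, created)   -- the partitioning column must be part of every unique key
  ) ENGINE=InnoDB
  PARTITION BY RANGE (YEAR(created)) (
      PARTITION p2012 VALUES LESS THAN (2013),
      PARTITION p2013 VALUES LESS THAN (2014),
      PARTITION pmax  VALUES LESS THAN MAXVALUE
  );

  -- The partitions column of the plan shows only p2013 being read.
  EXPLAIN PARTITIONS
  SELECT * FROM events WHERE created = '2013-06-01';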

Getting History of Table Sizes in MySQL

One data point which is very helpful, but which surprisingly few people have, is the history of table sizes. Projecting data growth is a very important component of capacity planning, and simply watching the growth of space used on a partition is not very helpful. Now, as MySQL 5.0+ has the information schema, collecting and keeping this […]
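
One simple way to collect that history, sketched below under the assumption that a cron job or the event scheduler runs the INSERT daily (the stats schema and table names are examples, not anything from the post): snapshot INFORMATION_SCHEMA.TABLES into a small history table.

  CREATE TABLE IF NOT EXISTS stats.table_size_history (
      collected    DATE NOT NULL,
      table_schema VARCHAR(64) NOT NULL,
      table_name   VARCHAR(64) NOT NULL,
      table_rows   BIGINT UNSIGNED,
      data_bytes   BIGINT UNSIGNED,
      index_bytes  BIGINT UNSIGNED,
      PRIMARY KEY (collected, table_schema, table_name)
  );

  -- Run once a day to record a size snapshot per table.
  INSERT INTO stats.table_size_history
  SELECT CURDATE(), TABLE_SCHEMA, TABLE_NAME,
         TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH
  FROM INFORMATION_SCHEMA.TABLES
  WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema');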

The story of one MySQL Upgrade

I recently worked on upgrading MySQL from one of the very early MySQL 5.0 versions to Percona Server 5.1. This was a classic upgrade scenario that can cause surprises: a master and a few slaves needed to be upgraded. It is a shared database used by tons of applications written by many people over more than 5 years […]

Sharing an auto_increment value across multiple MySQL tables

The title is SEO bait – you can’t do it. We’ve seen a few recurring patterns trying to achieve something similar, and I thought I would share with you my favorite two. Option #1: Use a table to insert into, and grab the insert_id:
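
A minimal sketch of this option (the table name is illustrative): every session inserts a dummy row into a dedicated auto_increment table and reads the generated value back with LAST_INSERT_ID(), then uses it as the shared id.

  CREATE TABLE sequence (
      id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
  ) ENGINE=InnoDB;

  INSERT INTO sequence VALUES (NULL);
  SELECT LAST_INSERT_ID();   -- the value to share across the other tables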

Option #2: Use a table with just one row:
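
And a sketch of this option (again with an illustrative table name): a single-row counter bumped with UPDATE, using LAST_INSERT_ID(expr) so the new value is available to the session that did the bump.

  CREATE TABLE sequence (
      id BIGINT UNSIGNED NOT NULL
  ) ENGINE=InnoDB;
  INSERT INTO sequence VALUES (0);

  UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
  SELECT LAST_INSERT_ID();   -- the freshly incremented value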

AUTO_INCREMENT and MERGE TABLES

How would you expect AUTO_INCREMENT to work with MERGE tables? Assuming INSERT_METHOD=LAST is used, I would expect it to work the same as if the insertion happened directly on the last table… which does not seem to be the case. Alternatively, I would expect AUTO_INCREMENT to be based on the maximum value across all tables, respecting […]
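
For anyone who wants to try this for themselves, a small reproduction along the lines of the MySQL manual’s MERGE example (table names are illustrative) looks roughly like this:

  CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, message CHAR(20)) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, message CHAR(20)) ENGINE=MyISAM;

  INSERT INTO t1 VALUES (10, 'old row');   -- the higher id lives in the first table
  INSERT INTO t2 VALUES (2,  'old row');

  CREATE TABLE total (a INT NOT NULL AUTO_INCREMENT, message CHAR(20), INDEX(a))
      ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=LAST;

  INSERT INTO total (message) VALUES ('new row');   -- which id does this row get?
  SELECT * FROM total;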

The ARCHIVE Storage Engine – does it do what you expect?

Sometimes there is a need for keeping large amounts of old, rarely used data without investing too much in expensive storage. Very often such data doesn’t need to be updated anymore, or the intent is to leave it untouched. I sometimes wonder what I should really suggest to our Support customers. For this purpose, the […]
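
For reference, trying the engine out takes only a couple of statements (table and column names are examples): ARCHIVE compresses rows and, per the manual, accepts only INSERT and SELECT, so UPDATE and DELETE are rejected.

  CREATE TABLE audit_archive (
      id      BIGINT UNSIGNED NOT NULL,
      logged  DATETIME NOT NULL,
      message TEXT
  ) ENGINE=ARCHIVE;

  INSERT INTO audit_archive VALUES (1, NOW(), 'kept forever');  -- allowed
  SELECT * FROM audit_archive;                                  -- allowed
  DELETE FROM audit_archive WHERE id = 1;                       -- fails: the engine does not support it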

InnoDB scalability issues due to tables without primary keys

Each day there is probably work being done to improve the performance of the InnoDB storage engine and to remove bottlenecks and scalability issues. Here is another one I wanted to highlight: scalability issues due to tables without primary keys. This scalability issue is caused by the use of tables without primary keys and typically shows […]
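
A quick way to find candidate tables on your own servers is to check INFORMATION_SCHEMA for InnoDB tables with no PRIMARY KEY constraint (a sketch; adjust the schema filter to taste):

  SELECT t.TABLE_SCHEMA, t.TABLE_NAME
  FROM INFORMATION_SCHEMA.TABLES t
  LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
         ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
        AND c.TABLE_NAME   = t.TABLE_NAME
        AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
  WHERE t.ENGINE = 'InnoDB'
    AND c.CONSTRAINT_NAME IS NULL
    AND t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');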

utf8 data on latin1 tables: converting to utf8 without downtime or double encoding

Here’s a problem some or most of us have encountered. You have a latin1 table defined like the one below, and your application is storing utf8 data to the column over a latin1 connection. Obviously, double encoding occurs. Now your development team has decided to use utf8 everywhere, but during the process you can only have as little […]
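
The usual two-step column conversion looks like the sketch below (table and column names are made up, and it assumes the bytes already stored are valid utf8): go through a binary type so MySQL does not transcode the data, then relabel the column as utf8. On a large table each ALTER blocks writes, which is exactly why doing this without downtime takes more care.

  ALTER TABLE comments MODIFY body VARBINARY(255);
  ALTER TABLE comments MODIFY body VARCHAR(255) CHARACTER SET utf8;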

Adventures in archiving

One of our Remote DBA service clients recently had an issue with the on-disk size of a particular table; in short, this table held some 25 million rows of application audit data with an on-disk size of 345GB, recorded solely for the purposes of debugging that may or may not ever happen. Faced with the task of […]
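
The general shape of the fix in such cases, sketched here with made-up table names and a made-up cutoff (tools such as pt-archiver automate the same idea in a safer, nibble-by-nibble way), is to copy the old rows out and then purge them in small batches:

  -- Copy audit rows older than the cutoff into an archive table.
  INSERT INTO audit_log_archive
  SELECT * FROM audit_log WHERE created < '2014-01-01';

  -- Purge the copied rows in small batches; repeat until 0 rows are affected,
  -- then reclaim space with OPTIMIZE TABLE or ALTER TABLE ... ENGINE=InnoDB.
  DELETE FROM audit_log WHERE created < '2014-01-01' LIMIT 10000;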