One data point which is very helpful, but which surprisingly few people have, is the history of table sizes. Projection of data growth is a very important component of capacity planning, and simply watching the growth of space used on a partition is not very helpful.

Now that MySQL 5.0+ has the INFORMATION_SCHEMA, collecting and keeping this data is very easy:
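The table definition itself is not preserved in this copy of the post, so here is a minimal sketch of what it could look like; the stats.tables name and the column list are assumptions, chosen to match the INSERT quoted in the comments below:

CREATE DATABASE IF NOT EXISTS stats;

-- One row per table per day; columns mirror the fields we pull from INFORMATION_SCHEMA.TABLES
CREATE TABLE IF NOT EXISTS stats.tables (
  `DAY` DATE NOT NULL,
  `TABLE_SCHEMA` VARCHAR(64) NOT NULL,
  `TABLE_NAME` VARCHAR(64) NOT NULL,
  `ENGINE` VARCHAR(64),
  `TABLE_ROWS` BIGINT UNSIGNED,
  `DATA_LENGTH` BIGINT UNSIGNED,
  `INDEX_LENGTH` BIGINT UNSIGNED,
  `DATA_FREE` BIGINT UNSIGNED,
  `AUTO_INCREMENT` BIGINT UNSIGNED,
  PRIMARY KEY (`DAY`, `TABLE_SCHEMA`, `TABLE_NAME`)
) ENGINE=InnoDB;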

And use this query to populate it:
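The query is also missing from this copy; a sketch, mirroring the version Morgan quotes in the comments (minus his WHERE clause that filters out the system schemas):

-- Snapshot today's size and row-count figures for every table on the server
INSERT INTO stats.tables
SELECT DATE(NOW()), TABLE_SCHEMA, TABLE_NAME, ENGINE, TABLE_ROWS,
       DATA_LENGTH, INDEX_LENGTH, DATA_FREE, AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES;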

I put it in cron to run nightly as:
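The exact crontab entry is not preserved here; something along these lines would do it (the user, password, and schedule are placeholders):

# Run at midnight every night and record table sizes
0 0 * * * mysql -u stats_user -pXXXXXX -e "INSERT INTO stats.tables SELECT DATE(NOW()), TABLE_SCHEMA, TABLE_NAME, ENGINE, TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH, DATA_FREE, AUTO_INCREMENT FROM INFORMATION_SCHEMA.TABLES"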

Though if you’re looking to keep it completely inside MySQL, you can create an appropriate event in MySQL 5.1+:
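The event is not shown in this copy either; a sketch of what it might look like (the event scheduler must be enabled, e.g. SET GLOBAL event_scheduler = ON):

-- Collect table sizes once a day, entirely inside the server
CREATE EVENT stats.collect_table_sizes
  ON SCHEDULE EVERY 1 DAY
  STARTS CURRENT_DATE + INTERVAL 1 DAY
  DO
    INSERT INTO stats.tables
    SELECT DATE(NOW()), TABLE_SCHEMA, TABLE_NAME, ENGINE, TABLE_ROWS,
           DATA_LENGTH, INDEX_LENGTH, DATA_FREE, AUTO_INCREMENT
    FROM INFORMATION_SCHEMA.TABLES;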

Unless you have millions of tables, this is something you can set up and forget, but a year later, when someone asks you about the growth rate of an individual table, you will have it handy.

If you have a large number of MySQL servers, especially in a sharded environment, it makes sense to modify the script so you store the data on one central server; this lets you run a lot of nice queries to see how evenly data is distributed among the shards, etc. (see the example query below). In a replicated environment you should put this cron job on the master only. If you use statement-based replication, the INSERT will re-execute on each slave and automatically pick up the sizes there, even for tables that differ between master and slave.
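For example, assuming the centralized copy of the table gets an extra column recording which server each row came from (server_name here is a hypothetical addition the collection script would have to fill in), a query like this shows how the data is spread across shards:

-- Total data + index footprint per shard for today's snapshot
SELECT server_name,
       SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 AS total_mb
FROM stats.tables
WHERE `DAY` = CURDATE()
GROUP BY server_name
ORDER BY total_mb DESC;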

13 Comments
Klaus

Thanks Peter, what a helpful little tool, fully automated!

stewart

Makes me think of keeping this as a table stat to have better expansion of data files.

bob

Nice. Thanks.

Shantanu Oak

Very useful tip. Instead of inserting the records into a table, I would prefer saving it to a text file by adding standard and error output redirection to the cron entry:
… >> /home/mysql_log.txt 2>> /home/mysql_log_err.txt

When I need to know the number of records, I can now use grep
grep -w tbl_name mysql_log.txt

Rob Wultsch

“Unless you have millions of tables, this is something you can set up and forget, but a year later, when someone asks you about the growth rate of an individual table, you will have it handy.”

Depending upon what your environment looks like, the information_schema query can easily run a server out of IO (amongst other things) for long periods of time. I have had to clean up a rather large mess caused by a query like the one suggested.

I suggest that the query above should only be used if there is a relatively small number of tables. I would probably not use it if there were more than a thousand tables on a server.

YMMV

Daniël van Eeden

Maybe using events could be nice instead of cron.

Morgan Tocker

I might suggest a small modification to cut out the noise from information_schema / mysql databases:

INSERT INTO stats.tables SELECT DATE(NOW()),TABLE_SCHEMA,TABLE_NAME,ENGINE,TABLE_ROWS,DATA_LENGTH,INDEX_LENGTH,DATA_FREE,AUTO_INCREMENT FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema');

It’s not a big deal – but these infrequently change.

Baron Schwartz

Maatkit’s mk-find has features to iterate the tables and insert various information from them, just as you’ve described. Another advantage of this is that you won’t get false-positive checksum differences, which you’ll get if you let INSERT..SELECT replicate.

shazamo

Very cool and helpful tool, but the issue with that is the TABLE_ROWS field does not contain accurate numbers; it contains estimated values.

Ronald Bradford

Realize that for large InnoDB schemas (e.g. over 1TB, tables over 500GB, etc.), accessing the INFORMATION_SCHEMA.TABLES information for InnoDB tables will cause random index scans of the data for statistics purposes. This is one reason why these queries can take minutes to run.

Alex

Sad, but it doesn’t provide correct info for Barracuda tables. Any workarounds?