As of MySQL 5.1 you can get the MySQL slow query log logged in the mysql.slow_log table instead of a file, as in previous versions.
We would rarely use this feature ourselves, as it is incompatible with our slow query analysis patch and tools.
Fixing this is not trivial while staying 100% compatible with the standard format, as the TIME type, which is used to store query execution time and lock time, does not store fractions of a second.
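
For reference, here is a minimal sketch of switching a running server to table logging (these are the standard server variables; the threshold value is just an example):

SET GLOBAL slow_query_log = 1;
SET GLOBAL log_output = 'TABLE';
SET GLOBAL long_query_time = 1;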

Today I got some time to play with the table-based slow query log in production while tuning one of the systems. It is pretty nice to be able to work with the data in SQL, as it is easy to filter all queries which happened within a certain time interval (i.e. after I made some changes) or filter out queries which you have already looked at using a LIKE clause.
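
For example (the timestamp and the LIKE pattern here are made up for illustration):

SELECT start_time, query_time, sql_text
FROM mysql.slow_log
WHERE start_time > '2008-11-20 15:00:00'
  AND sql_text NOT LIKE 'SELECT COUNT(*) FROM orders%';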

As the default table format for slow_log is CSV with no indexes, typical operations like finding how many queries have accumulated in the log or fetching the last 10 queries are not fast. The good thing, however, is that you’re allowed to change the table to another storage engine and add extra indexes as you require. Doing so may affect your performance, as updating a table with a lot of indexes may be a significant overhead, so you can instead create an analyze_slow_log table of similar structure with the needed indexes and populate it with an INSERT … SELECT statement as you need it, as shown below.
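
A minimal sketch of that approach (the table name and index choice are mine, adjust as needed):

CREATE TABLE analyze_slow_log LIKE mysql.slow_log;
ALTER TABLE analyze_slow_log ENGINE=MyISAM, ADD INDEX (start_time);
INSERT INTO analyze_slow_log SELECT * FROM mysql.slow_log;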

One little gotcha which confused me at first is the order queries come in compared to the log file. If you’re used to doing tail -100 log-slow.log and now change it to SELECT * FROM slow_log ORDER BY start_time DESC LIMIT 100, you will find new queries coming at the top of the list rather than at the end. If you want to get new queries at the end you can do

SELECT * FROM (SELECT * FROM slow_log ORDER BY start_time DESC LIMIT 100) l ORDER BY start_time;

which shows the last queries at the end, similar to the tail command.

If you’re lazy about typing it over and over again you can create a stored procedure, something like LAST_SLOW_QUERIES, and use it instead; just remember that unless you add extra indexes to this table it can be rather time and resource consuming.
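
A sketch of such a procedure (LAST_SLOW_QUERIES is just the name suggested above; the 100 is hardcoded because LIMIT in MySQL 5.1 requires a literal, not a routine parameter):

DELIMITER //
CREATE PROCEDURE LAST_SLOW_QUERIES()
BEGIN
  -- same tail-like query as above, wrapped for reuse
  SELECT * FROM (SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 100) l
  ORDER BY start_time;
END//
DELIMITER ;

CALL LAST_SLOW_QUERIES();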

This is actually where being able to quickly sort records by physical position, without the need for indexes, would be quite handy.

I should also say the log table implementation in MySQL is well thought out in regards to avoiding trouble. For example, you can’t ALTER a log table while logging to it is enabled, nor can you delete rows from it (or run any write queries); only read access is possible, with the exception of the TRUNCATE statement, which just recreates the table and so should be fast. Interestingly enough, though, what would happen if one used a storage engine for the log table which does not optimize TRUNCATE? I guess it should be blocked just as normal DELETE statements are blocked.
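
TRUNCATE being the one write statement which works also gives a handy rotation pattern when combined with the archive table from above (a sketch; note entries arriving between the two statements would be lost):

INSERT INTO analyze_slow_log SELECT * FROM mysql.slow_log;
TRUNCATE TABLE mysql.slow_log;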

More than that, I found that even if I convert the table to MyISAM, disable concurrent inserts, and run a long reporting query on the log table, query execution still proceeds. I have not checked whether concurrent insert is always forced for the log table if it is allowed, or whether there is a buffer of slow log entries which is used while the log table is locked.

One thing which would be rather handy is a UDF or stored procedure to “normalize” queries by removing comments and replacing constants with placeholders, as this would allow aggregating the log entirely in SQL, finding the most frequent slow query types or the queries which load the server the most, as sketched below.
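
Assuming such a normalize_query() function existed (it does not; I am only illustrating the idea), the aggregation could look like this:

SELECT normalize_query(sql_text) AS query_pattern,
       COUNT(*) AS cnt,
       SUM(TIME_TO_SEC(query_time)) AS total_secs
FROM mysql.slow_log
GROUP BY query_pattern
ORDER BY total_secs DESC
LIMIT 10;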

The other handy feature would be a function which returns, as a SET, the list of tables a query uses. This would allow, for example, quickly finding all queries touching a given table, which is now hard to do reliably.
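
Again purely hypothetical, but if a query_tables() function returned a comma-separated list of table names, FIND_IN_SET() would make such lookups trivial ('orders' here is just an example table name):

SELECT start_time, sql_text
FROM mysql.slow_log
WHERE FIND_IN_SET('orders', query_tables(sql_text));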

I guess over time we will have more tools provided with these and similar features, if not by MySQL then by the community.

3 Comments
Barkat

Hi Peter,

how do I implement your slow query log patch on my server? I am using a MySQL database. Please guide me step by step.
Thanks.

Matt

According to the MySQL 5.6 documentation, mysql.slow_log can use only the CSV and MyISAM storage engines, so you can’t change it to a storage engine that does not support TRUNCATE.

AndreyEx

thank you