What is often underestimated is the impact on MySQL performance of complex queries on large data sets (i.e. some large aggregate queries) and batch jobs. It is not rare to see queries which normally take milliseconds stall for a few seconds, especially in certain OS configurations and on low-profile servers (i.e. having only one disk drive), even if you just have one such query running concurrently.
Let's talk a bit about how this happens and how to prevent it.
Cache Wiping This is the first reason for this to happen – a query which crunches a large data set wipes your normal working set out of the OS cache. Operating systems and MySQL itself employ various strategies to minimize this effect, but the truth is it still happens.
Disk Starvation As cache efficiency drops, more requests have to hit the disk, which may be 100% busy running your batch job query. This is especially bad when you have only one drive for the database, in which case a single query really can keep it busy all the time. As the drive gets busy, requests have to wait to be scheduled. You might imagine they would not need to wait long because there is only one query ripping the disk apart; in fact, however, a lot of queries can pile up – because of reduced cache efficiency, many queries may start waiting on the disk at once. In some cases I've seen the average time a request spends in the queue exceed 1 second, which is much more than the few milliseconds you would expect disk IO to take.
Scheduling Issues OS disk IO schedulers may also "help" in this case. A number of them try to optimize for throughput rather than latency, meaning if you have a query doing a full table scan (sequential reads), its requests will be prioritized because they do not require disk head movement, unlike the IO requests to random locations that other threads want to make. Some older disk schedulers were especially poor. I remember in Linux 2.4 times I could make a single full-table-scan query slow things down so badly that "ls" would take over 10 seconds for a directory with a couple of files. In recent Linux kernels the deadline or cfq IO schedulers should handle this better.
So what can you do about it?
Do reporting on the slave Doing reporting on a slave is a great idea if you have one. Even if you use InnoDB tables and so do not suffer from table locks, reporting queries can affect web site performance dramatically. Sometimes instead of a slave you can use a copy of the database restored from backup, or an LVM snapshot of the current database, with the same results. In some cases you can't move the work fully to the slave – for example if you load data or build summary tables. In such cases you may still move some of the load to the other server: you can read data from the slave and write it to the master (selects often contribute most of the load), or you can prepare summary tables on the slave and then move them to the master using mysqldump, or even careful file transfer if you're using MyISAM tables.
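The build-on-slave, load-on-master approach can be sketched like this (the table and column names here are made up for illustration; adapt to your schema):

```sql
-- On the slave: run the heavy aggregate into a fresh summary table.
CREATE TABLE daily_sales_new AS
SELECT DATE(order_time) AS day,
       SUM(amount)      AS total,
       COUNT(*)         AS orders
FROM sales
GROUP BY DATE(order_time);

-- Move it to the master, e.g.:
--   mysqldump slavedb daily_sales_new | mysql masterdb

-- On the master: swap the new table in atomically, so readers
-- never see a half-built summary.
RENAME TABLE daily_sales     TO daily_sales_old,
             daily_sales_new TO daily_sales;
DROP TABLE daily_sales_old;
```

The RENAME TABLE swap is the important detail – it is atomic, so interactive queries keep working against a complete summary table the whole time.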
Chop it If you can't execute the work on a slave server – for example, it is an old-data purge – you can at least chop it up, meaning do not run it as one big query or as a set of queries going one after another. Sleep between them so none of them can take too many resources for too long. If you delete things, do DELETE … LIMIT 1000 and insert a sleep of 10 seconds in between. By spreading the load this way you make sure a large portion of the cache will not be wiped out at once, and if a few queries pile up because one chunk took a lot of resources, they have time to resolve before the next chunk runs.
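A chopped purge might look like the following sketch (the purge_log table and the cutoff date are hypothetical; SLEEP() is available in MySQL 5.0.12 and later, and the loop can also be driven from an external script instead of a stored procedure):

```sql
DELIMITER //
CREATE PROCEDURE chop_purge()
BEGIN
  REPEAT
    -- Delete one small chunk at a time instead of everything at once
    DELETE FROM purge_log WHERE created < '2006-01-01' LIMIT 1000;
    -- Capture the count before SLEEP resets it
    SET @deleted = ROW_COUNT();
    -- Pause so the cache can recover and queued queries can drain
    DO SLEEP(10);
  UNTIL @deleted = 0 END REPEAT;
END //
DELIMITER ;
```

Each chunk is cheap enough not to starve interactive queries, and the sleep gives piled-up requests time to clear before the next chunk takes the disk.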
Time it This is kind of obvious, but I'll still mention it – if you have to do something intense, do it during the lowest load. Do not just place it in a cron job for 4AM and forget about it; it may still be too heavy, and your web site users may be suffering while you don't notice. Make sure the load is low enough that your complex query or batch job does not slow things down too badly.
In some environments which do not run 24/7 this is a great solution, as there is no one around to suffer at certain times.
Hi Peter!
I’ve got trouble with ICQ, so I can talk with you only here.
I see on one of my servers such interesting situation with InnoDB: http://madm1ke.livejournal.com/25503.html or http://community.livejournal.com/ru_mysql/149515.html
Please, if you have any ideas – let me know.
Thanks, madm1ke.
Mike, I’ve answered, but using the forums for unrelated questions would be better:
http://forum.mysqlperformanceblog.com/s/f/2/
I get notifications for forum posts anyway.
Ok, Peter, sorry, I had a very bad day, so I forgot about the forums 🙁 I’ll reply to you tomorrow.
In the future, I’ll try to use your forum first 😉
Maybe this helps. Try in order:
– optimize tables frequently
– put indexes on query tables
– use EXPLAIN
– use temporary tables, or better the HEAP type
– enlarge the MySQL caches
– split big tables and use MERGE
– use the MySQL RAID ability
Balluche,
First, this post is not about optimizing queries. Surely you should do that, but believe me, there are queries/jobs which will take hours even perfectly optimized, just because of their complexity. Now regarding your advice.
“optimize tables frequently”
This is a typical mistake. Some people optimize tables daily even if they got just a few updates, which is just wasting time. Only optimize when a table has had a fair amount of changes and has become fragmented.
“use temporary or better use HEAP type”
This is the wrong way to put it. You can create a temporary table explicitly with any storage engine, including HEAP, or MySQL will automatically create a temporary table for query execution, which will often start as HEAP and be converted to on-disk if it grows too large. It is true that ensuring your temporary table is not converted to MyISAM is a good way to improve performance.
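For example, an explicit in-memory temporary table looks like this (table and column names are made up for illustration):

```sql
-- Exists only for this connection; MEMORY is the current name
-- for the HEAP engine.
CREATE TEMPORARY TABLE top_users (
  user_id INT NOT NULL,
  hits    INT NOT NULL
) ENGINE=MEMORY;
```

For implicit temporary tables created during query execution, it is the tmp_table_size and max_heap_table_size variables that decide when MySQL spills the table to disk.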
“use mysql RAID ability”
In fact, MyISAM "RAID" is dead. Do not use it unless you're looking for trouble. Wasn't it disabled by default in recent MySQL versions anyway?
[quote]
Chop it If you can’t execute this work on slave server, for example it is old data purge activity you may at least chop it, meaning do not do it as all one big simple query or as set of queries going one after another.
[/quote]
Hmm… do you think it's really a good idea in the general case? Yes, we spread the workload out over time, but we have to do a bit more work overall (in the best case only parsing more queries; worse, we invalidate the query cache [for some tables] every time a small update is processed).
But yes, sometimes batch jobs are really painful… It's probably worth mentioning another pain – the "thundering herd" effect, no? I mean the situation when a very long query locking a wide range of data finally finishes (amen), and there are perhaps hundreds of queries waiting on the lock. When the lock is released, buffers grow very quickly. (This is related not only to MySQL itself; I've seen frontends running only Apache die for a few seconds after a batch task was completed on the backend…)
Like most solutions, it is great in some particular cases. It is like a drug you take when you want to solve a particular problem.
Right, you may end up doing more work overall due to many things – parsing, worse cache hit rate, transaction commits, etc. In some cases, however, these are less important than a simply unresponsive web application while some complex query runs…
You may not really care about batch job response time, but mainly about the response time of interactive queries.
As someone who has to build reporting queries that run on Oracle, MySQL and PostgreSQL, I can say that the biggest change you need to make is one of attitude.
On Oracle and PostgreSQL, you can write some truly monstrous queries and expect the query planner to make the right decision about 90% of the time, and if it doesn’t, you can tweak a few parameters to make it do the right thing.
In MySQL, you just need a different approach. Funnily enough, a lot of the things you can do in MySQL will also help in PostgreSQL or Oracle, like creating summary tables, etc… They just become necessary in MySQL a little more quickly.
If you’ve got a subquery that runs fast alone, and an outer query that runs fast when you put the dozen or so results from the subquery into its IN() clause, but together they’re slow, then just dump the inner query into a summary table, and use that table for the inner query, or use it to build a new query. It’s not necessarily as elegant, but it works, and it can work very fast.
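A sketch of the pattern (orders, order_items, and the HAVING threshold are invented names and values, just to show the shape):

```sql
-- Materialize the fast inner query once
CREATE TEMPORARY TABLE hot_products AS
SELECT product_id
FROM order_items
GROUP BY product_id
HAVING COUNT(*) > 1000;

-- The outer query now joins against a tiny table instead of
-- re-evaluating the subquery per row
SELECT o.*
FROM orders o
JOIN hot_products h ON o.product_id = h.product_id;
```

This sidesteps the subquery-optimization weaknesses of older MySQL versions, where a dependent subquery could be re-executed for every outer row.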
Right,
In general, MySQL has quite a different ideology compared to Oracle or PostgreSQL. This is changing a bit with MySQL 5.0, as it allows more complex constructs on the MySQL side, but in general you let MySQL do the simple things it does well.
I have to join my transaction table with 8 master tables.
Please give an example of the SQL query.