August 1, 2014

Using MyISAM in production

There have been a number of posts about MyISAM recently, for example Arjen wrote a pretty nice article about MyISAM features, so I thought I would share my own view on using MyISAM in production.

For me it is not only about table locks. Table locks are only one of the MyISAM limitations you need to consider before using it in production.
Especially if you’re coming from “traditional” databases, you’re likely to be shocked by MyISAM behavior (and by default MySQL behavior, which inherits it) – tables will be corrupted by an improper shutdown, statements will stop partway through execution if certain errors are encountered, etc.

You should not think of these behaviors as bugs, however. Like many MySQL features, MyISAM is designed for particular workloads where it shines, and it might not be a good choice for others.

In 1999, for my production application (storing billions of rows in tens of thousands of tables), Innodb was the better choice mainly because of those other behaviors… well, table locks were a problem at a very early stage, but that was solved by using this significant number of tables.

I still have the same view on storage engines – Innodb, in my opinion, is the better choice for a general purpose storage engine. It better matches what you would expect from a database server and saves you from a lot of gotchas, which may be more important than performance for a small application. As load increases you might convert certain tables to MyISAM or other storage engines for performance reasons… of course keeping all the limits in mind.

So here is my list of items you need to take into account while using MyISAM tables.

Recovery. MySQL was running stably for us, giving us a false sense of security, but when it crashed (or was it a power failure?) it took many hours to check and repair our tables. If this happens, make sure you have a decent myisam_sort_buffer_size and a large myisam_max_sort_file_size, otherwise recovery may be done through the key cache rather than by sorting, which can take even longer.
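As a rough sketch, the relevant my.cnf settings might look something like this (the values are illustrative assumptions, not recommendations – size them to your available memory, disk space, and largest indexes):

```
[mysqld]
# Sort buffer used by REPAIR/recovery; large enough that indexes
# are rebuilt by sorting rather than through the key cache
myisam_sort_buffer_size = 256M
# Cap on the temporary file used for repair-by-sort; if an index
# rebuild would exceed it, MySQL falls back to the slower
# key-cache based repair
myisam_max_sort_file_size = 10G
```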

Be careful with myisam_recover. This is a great option to automate recovery, but it can give you a couple of nasty surprises. First, if many tables are being repaired at once, each may allocate myisam_sort_buffer_size, and MySQL could crash or start swapping. Second, a table is locked while its repair is running. If it is a frequently used table, all your connections may end up busy waiting for it to become available, effectively bringing the server down. A much better solution for us was to move all databases out of the MySQL data directory to another location, check them with myisamchk, prioritizing the more important databases first, and then move them back into the MySQL data directory. This way accesses to not-yet-checked tables fail with a “table does not exist” error rather than waiting forever.
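For completeness, this is what enabling automatic recovery looks like in my.cnf (FORCE,BACKUP is a common combination; treat it as an illustration of the option, not an endorsement given the caveats above):

```
[mysqld]
# Check each MyISAM table on open and repair it if it was not
# closed properly; BACKUP keeps a copy of the data file when a
# repair changes it, FORCE repairs even if rows would be lost
myisam-recover = FORCE,BACKUP
```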

Hidden corruptions. It could be bad memory, OS or MySQL bugs, but corruption can happen and go unnoticed for a long time with the MyISAM storage engine. This hidden corruption may later cause crashes, wrong query results, and further data corruption.

Partial updates. MyISAM does not have transactions – this is well understood – but it also does not have atomic statement execution. If you run an UPDATE that touches 1,000,000 rows, after a crash only 500,000 of them may actually have been updated, and you might not know which ones.
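One hedge against this (a sketch; the table and column names here are hypothetical) is to avoid huge single statements and update in bounded, resumable chunks keyed on an indexed column:

```sql
-- Instead of one UPDATE touching 1,000,000 rows, update one id
-- range per statement; the application records the last completed
-- boundary, so after a crash you know exactly where to resume.
UPDATE orders
   SET status = 'archived'
 WHERE id BETWEEN 1 AND 10000
   AND status = 'active';
-- application loop: advance the range, persist progress, repeat
```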

Concurrency. MyISAM uses table locks and has concurrent inserts, which can proceed in parallel with selects. This is sometimes presented as great insert concurrency, but in reality only one INSERT is allowed to happen at a time. It can run concurrently with SELECT statements, but the inserts themselves are serialized. Happily, inserts in MyISAM are rather fast, so this is rarely the problem. The other misconception about table locks is that if you have 95% reads, table locks will not be a problem. That may be true if you only have short reads and writes, but you are in serious trouble if you need to run bulk operations. For example, running a full table scan query to compute some stats is often impossible in production, as it would block all updates to the table. Bulk update queries are even worse. You do not need many such queries to get into trouble – just one is enough.

Lock priorities. By default MySQL treats updates as higher priority operations. You can use SELECT HIGH_PRIORITY or UPDATE LOW_PRIORITY to adjust that, or you can simply set the low_priority_updates option. In any case, the default behavior means any UPDATE statement blocked by a long-running SELECT will also block further selects on that table – they will have to wait for the UPDATE, which is itself waiting for the SELECT to complete. This is often not accounted for; people think “OK, I’ll write my script so it only does short updates, so it will not block anything” – it can still cause a total block if there are long selects running.
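As a sketch (the table name is hypothetical), the three options look like this:

```sql
-- Per-statement: let queued reads win over this write
UPDATE LOW_PRIORITY stats SET hits = hits + 1 WHERE id = 42;

-- Per-statement: let this read jump ahead of queued writes
SELECT HIGH_PRIORITY * FROM stats WHERE id = 42;

-- Server-wide: make all updates low priority by default
SET GLOBAL low_priority_updates = 1;
```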

Fragmentation. This is a common problem for pretty much all storage engines, though it takes different forms. MyISAM has in-row fragmentation, which means a single row may be stored in many pieces. In extreme cases I’ve seen over 10 pieces per row on average – this happens when rows contain some data that constantly grows in size. It does not affect all applications, and there are ways to fight it, but still watch out.

Lack of row cache. MyISAM tables only have their indexes cached in the key_buffer, while data is cached in the OS cache. This is a performance issue, as a system call is needed to get data from the operating system even when it is in cache, but that is only part of the problem. The other problem is that it is hard to manage resources, especially if other processes run on the same server. A backup process may go ahead and wipe the OS cache you were counting on, causing unexpected slowdowns.

delay_key_write. Whichever way you enable this option – globally, per table, or for a set of statements (by using LOCK TABLES/UNLOCK TABLES) – be careful. It may improve performance dramatically in certain cases by avoiding flushing dirty index blocks from the key_buffer to disk, but it comes at great risk if the MySQL server crashes or power goes down. If a crash happens without this option enabled, table corruption is in most cases mild, especially on lightly loaded servers. Many users do not even know you need to check MyISAM tables after a crash and have run for years, surviving many crashes. With delay_key_write enabled this changes drastically: after a crash your index and data will likely be badly out of sync, and if you do not repair the table you will very likely see serious corruption – queries failing to run, wrong result sets, or crashes.
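For reference, a per-table setup might look like this (a sketch; `clicks` is a hypothetical table):

```sql
-- Enable delayed key writes for one table (the server-level
-- delay_key_write variable must be at its default of ON)
ALTER TABLE clicks DELAY_KEY_WRITE = 1;

-- After any crash, such a table must be checked and repaired
-- before being trusted again:
CHECK TABLE clicks;
REPAIR TABLE clicks;
```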

Here is my list of what I think MyISAM tables are good for. This list is not exhaustive, and every system has its own risk tolerance and performance requirements, not to mention different load:

Log tables. Nowadays the Archive storage engine can be even better.

Read-only data, especially packed with myisampack. Such data cannot be corrupted by updates or improper shutdowns, and as you can see, that is where most of our problems reside.

Cache/session tables (you can throw them away if the server crashes). You can use multiple cache tables to avoid concurrency issues.

Temporary tables used in batch processing and similar jobs.

Exported data. If you export data from an OLTP system, for example to perform data analysis, MyISAM is a great fit. Even if you get a crash during export, you can normally start over, and most of the time the data is read-only anyway.

Tables containing data that is quick to regenerate.

Generally, tables you can throw away without any trouble.

Tables with data that does not need to be 100% correct and always available – for example, statistical data.

Data on slave servers. If a slave crashes you can simply re-clone it. Make sure, however, to have at least one proper master to fall back to, and be careful with replication across different storage engines.

Tables where the performance improvements are worth the risk.

About Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.

Comments

  1. Adrian says:

    Hi Peter,

    Thank you guys for a great book which is worth every cent, and a great site.
    We’ve been puzzling over concurrent inserts for a while using v5.0.15, and we’ve discovered that concurrent inserts don’t seem to be allowed with a spatial key defined on our spatial column (“waiting for write ‘concurrent insert lock’” is always shown in mysqladmin debug while a select is running). Removing just the spatial key frees up concurrent inserts to work as they should by default (concurrent_insert=1).

    Although I may have missed something I have looked everywhere for a reference to this issue in the docs and web. Could this be a bug or is there perhaps a good reason why MySQL does this?

    Thanks
    Adrian

  2. P. Kuiper says:

    Hi Peter,

    “In 1999, for my production application (storing billions of rows in tens of thousands of tables), Innodb was the better choice mainly because of those other behaviors … well, table locks were a problem at a very early stage, but that was solved by using this significant number of tables.”

    Were all those tens of thousands of tables stored on the same server? Did this have any impact on performance? Just wondering, since there isn’t much info about these very very large data sets (which I happen to be interested in).

    Regards,

    Peter

  3. peter says:

    Paul,
    We had, I guess, about 10,000 tables per server and 20-30 servers. There are a few things you need to consider when dealing with such a large number of tables:

    1) Your table cache should be large enough to accommodate the most frequently used tables. I aim for Opened_tables growing by no more than 1 per second on average.
    2) You need to account for the memory the table_cache uses – each entry may use 4-8K depending on table structure.
    3) If using Innodb tables, be especially careful, as Innodb has its own table cache which is unlimited in size – so over time you are likely to have all table metadata cached in the data dictionary. This may be about 4K per table as well.

    You may also wonder how to choose the number of tables to chop your data into – my approach is usually to keep each table manageable, so that if you need to run CHECK TABLE or ALTER TABLE to add an index, you do not need to plan for major downtime but can have it done in 10 seconds with few users noticing. There are of course other things to consider.

  4. Hi Peter,

    thanks for the insights. I was wondering about the row fragmentation. You mention you’ve seen an average of 10 fragments per row at one time.

    How can I find out how much fragmentation there is on the average?

    Thanks in advance,

    Roland.

  5. peter says:

    Right. 10 fragments per row is seen only in rather rare cases, mainly if you store some serialized data, such as XML, whose size is always growing.

    Here is how to see it:

    # myisamchk -dvv table.MYI

    MyISAM file: table.MYI
    Record format: Packed
    Character set: utf8_general_ci (33)
    File-version: 1
    Creation time: 2006-06-17 3:29:19
    Recover time: 2006-06-17 3:29:38
    Status: open,changed
    Data records: 46749 Deleted blocks: 0
    Datafile parts: 46793 Deleted data: 0
    Datafile pointer (bytes): 4 Keyfile pointer (bytes): 4
    Datafile length: 50836264 Keyfile length: 861184
    Max datafile length: 4294967294 Max keyfile length: 4398046510079
    Recordlength: 796

    table description:
    Key Start Len Index Type Rec/key Root Blocksize
    1 777 4 multip. long 1 323584 1024
    2 792 4 multip. unsigned long 1 647168 1024

    Field Start Length Nullpos Nullbit Type
    1 1 1
    2 2 765 1 1 no endspace
    3 767 10 blob
    4 777 4 no zeros
    5 781 11 blob
    6 792 4

    Look at Datafile parts / Data records – this gives the average fragmentation ratio, which is very low for this table. Some rows, however, can be fragmented more than others.

    If you have major fragmentation, running OPTIMIZE TABLE will help.

  6. Some thoughts:

    Just use delay_key_write on slave servers, and only ever do INSERT/UPDATE on your master. If you lose a slave, just take it out of production, as it probably won’t work correctly anyway.

    You can increase concurrency by keeping your INSERT/UPDATE statements small so they execute quickly and don’t hold table locks for long.

    The row cache issue is a bad one. I’d like to see some way to compute filesystem cache efficiency as a stat I can log. I haven’t found a way yet.

  7. peter says:

    Kevin,

    Yes, the idea of using delay_key_write on slave servers is a good one. You want to do inserts/updates only on the master anyway, as otherwise they will not be replicated unless you use bi-directional replication.

    I also agree with your recommendation on taking a slave out of production if it crashes. Actually, with any crash a slave can become out of sync with the master, so if you really care about your data you will need to re-clone it (or verify they are in sync).

    Increasing concurrency by chopping up update statements is a good approach. For example, I quite frequently used DELETE … LIMIT 100 in a loop to avoid long table locks. You should still watch out, though: if you have any long-running selects you can get stuck.
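    That loop might be sketched as follows (the table and column names are hypothetical; the loop itself lives in application code):

```sql
-- Each statement holds the table lock only briefly; the
-- application repeats it until no rows remain to delete.
DELETE FROM sessions WHERE expires < NOW() LIMIT 100;
-- ROW_COUNT() = 0 after the previous DELETE means we are done
SELECT ROW_COUNT();
```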

    The row cache problem is actually partially solved by using MMAP in MySQL 5.1, but you still can’t configure its size or measure the hit ratio. If you’re on Solaris, “dtrace” is probably something you could use to handle the job.

  8. ananth says:

    Hi peter,

    I came across your performance blog while searching through the performance forum on the MySQL site.
    It is a great blog with a lot of good information.

    We are having a weird performance problem in production where we are only using MyISAM tables.
    Here is some background on our environment:

    H/W: Dell 2.4GH 2CPU servers with 3 internal disks using h/w RAID 5
    S/W: RedHat linux 2.4
    Application: Java jetty server with Hibernate
    MySQL version = 4.1.19

    We have been running the application in production for almost a year with no problems.
    Our biggest transaction tables are only 90MM rows with 10-10GB in disk space.
    We have around 30 tables which are transaction-related, plus some TEMP tables which are transient in nature for 3 weeks.

    Here are symptoms:

    We found that the MySQL server freezes for 30-60 seconds, and during this time
    the application DB thread pool grows in size doing nothing, resulting in a large number of db connections.
    Once MySQL unfreezes, all of the threads complete and everything is fine again.

    During these freeze times, even a simple ‘SELECT 1’ from our DB monitoring takes 30-40 seconds.

    We captured some system stats during this time and noticed that there is one mysql thread taking up all of one CPU, 25% (as we are running hyper-threading).
    Otherwise, the system is idle, with low load and no IO.
    We captured SHOW PROCESSLIST and don’t see anything happening, as most of the processes are in SLEEP mode.

    We suspect, based on a thread on the MyISAM engine forum, that it may be some sort of “garbage collector” problem while flushing the KEY CACHE.

    We have a 1GB key cache on the server.
    We reduced it to 750MB to see if it makes any difference, but it is still happening.

    We are not able to recreate the problem in our integration environment, where we don’t have the transaction load.
    We do have full data in integration, though.

    Can you shed some light on this and comment on what might be happening?

    I’d love to provide more information if you need.
    I also posted this on forum and here is the link
    http://forums.mysql.com/read.php?21,85701,96186#msg-96186

    Thanks in advance

    ananth

  9. James Day says:

    ananth,

    Please do as Ingo suggests. Ingo is the programmer who handles most MyISAM storage engine problems.

  10. P. Kuiper says:

    [offtopic]
    It’s nice to see a lot of the people from MySQL support here! I recommend everybody using MySQL in big (or even smaller) production setups to get a support contract. It’s very, very good! It’s faster and more satisfying than posting your questions in forums hoping to get an answer. I do recommend the Platinum support; we have it too, and it’s nice to have experienced people to fall back on in case you run into weird problems like ananth described above.
    [/offtopic]

    Keep up the good work guys!

    Peter

  11. ananth says:

    We don’t have a support contract for MySQL, and we are looking at buying one to suit our environment.
    Meanwhile, is it possible to buy consulting from MySQL gurus without a support contract, to look at these performance problems we have been having with MySQL in production?

    ananth

  12. peter says:

    Peter,

    Thank you for warm words and your recommendations :)

    Ananth – our policy for helping non-paying users is in most cases the following: we do it publicly – on forums, blogs, IRC channels, mailing lists – so others can benefit, and we do it on a time-permitting basis. If this does not work for you and you would still like to get help before you have a support contract, you may contact your sales representative and try to convince them to provide pre-sales help.

    Working for MySQL, we have to follow company policies.

  13. peter says:

    Ananth,

    My strong feeling is we might be looking at a MySQL bug here. The key buffer is allocated at startup and its size does not change, so there is no “garbage collector”. MySQL may need to flush dirty key blocks every so often – however, this should be disk IO bound, not CPU bound as you’re observing.

    First question – is it possible for you to try the latest MySQL 4.1 version? It would be a great way to ensure it is not a bug which was fixed long ago.

    Now, it seems to happen with inserts going into the table. What if you use the --skip-concurrent-insert option? This would show whether it is related to concurrent inserts at all.

    Also, MySQL running in a tight loop is actually a very good thing for troubleshooting. You may use oprofile to check where this time goes – it should point to the functions where the time is spent and possibly point out the problem. This is probably what I would start with.

    I also noticed that in your case you have a very small table_cache – this could be a possible cause of the problems, as removing a table from the table cache requires a key block flush. Try higher values, something like 1024.

    A few more things to keep in mind – MySQL 4.1 has multiple key caches, so you can create a separate key cache allocated to this table and possibly relieve pressure on it. You can also disable the key_cache by setting it to 0 and see if that solves the problem.

    Again, I guess it is some kind of bug – “system lock” should not stall the system for so long, so the question is finding a workaround and hunting down the bug to fix it.

  14. ananth says:

    Thanks, Peter, for this information.
    We are running 4.1.19 and the latest 4.1 is 4.1.20.
    We’ll try to get 4.1.20 and see if the problem goes away.

    Regarding using the oprofile tool to profile the thread, do we need a debug version for that or can we run the regular version?

    Thanks again

    ananth

  15. peter says:

    Ananth,

    4.1.19 is recent enough. I do not think changing to 4.1.20 will fix the problem.

    oprofile needs symbols to show where exactly time is spent inside of MySQL. You do not need a “debug” mysqld for it, however – only debugging symbols, which can be installed separately; they are provided in the “debuginfo” package. This does not affect MySQL server speed.

  16. ananth says:

    Hi Peter,

    We were able to recreate the problem in our integration environment by stressing the application and mysql.
    Here is the stack trace:

    0x808d903 handle_segfault + 423
    0x82e8d98 pthread_sighandler + 184
    0x81234db insert_into_free_memory_sorted_list__11Query_cacheP17Query_cache_blockPP17Query_cache_block + 95
    0x81233b1 insert_into_free_memory_list__11Query_cacheP17Query_cache_block + 53
    0x81231fd free_memory_block__11Query_cacheP17Query_cache_block + 93
    0x812279f free_query__11Query_cacheP17Query_cache_block + 219
    0x81226a4 free_old_query__11Query_cache + 84
    0x8123025 allocate_block__11Query_cacheUlcUlc + 137
    0x81227f1 write_block_data__11Query_cacheUlPcUlQ217Query_cache_block10block_typeUic + 69
    0x8121632 store_query__11Query_cacheP3THDP13st_table_list + 454
    0x809da46 mysql_execute_command__FP3THD + 1258
    0x80a223f mysql_parse__FP3THDPcUi + 211
    0x809c6ef dispatch_command__F19enum_server_commandP3THDPcUi + 1547
    0x809c0d8 do_command__FP3THD + 188
    0x809b7a7 handle_one_connection + 615
    0x82e654c pthread_start_thread + 220
    0x830fe0a thread_start + 4

    We looked at the source code, and the problem is related to the query cache.
    We have allocated 500MB of query cache, and it looks like a lot of thrashing of the query cache is going on.

    Here are our query cache variables and status:

    +------------------------------+-----------+
    | Variable_name                | Value     |
    +------------------------------+-----------+
    | have_query_cache             | YES       |
    | query_cache_limit            | 1048576   |
    | query_cache_min_res_unit     | 4096      |
    | query_cache_size             | 524288000 |
    | query_cache_type             | ON        |
    | query_cache_wlock_invalidate | OFF       |
    +------------------------------+-----------+

    +-------------------------+-----------+
    | Variable_name           | Value     |
    +-------------------------+-----------+
    | Qcache_free_blocks      | 5303      |
    | Qcache_free_memory      | 486708416 |
    | Qcache_hits             | 2845741   |
    | Qcache_inserts          | 2623234   |
    | Qcache_lowmem_prunes    | 0         |
    | Qcache_not_cached       | 2638244   |
    | Qcache_queries_in_cache | 26168     |
    | Qcache_total_blocks     | 57725     |
    +-------------------------+-----------+

    As a workaround we experimented with setting query_cache_size = 0 on one server, and the problem went away, but we saw server load go up as there was a lot of IO (as expected).
    I was going through the documentation about tuning the query cache, and it suggests lowering
    query_cache_min_res_unit (currently 4096)
    if result sets are small in size.
    We have not experimented with that yet.

    I am not sure whether this is a BUG to report, since we have not tuned any query cache settings other than allocating a lot of query cache, under the impression that more is better – but we learned that this is not the case under heavy load.

    Any suggestions?

    Thanks
    ananth

  17. peter says:

    Hi Ananth,

    The stack trace you’re showing is actually a crash. If MySQL did not exit, there could be a problem in crash handling. (You mentioned you had one thread taking all the CPU.)

    This is obviously a bug, as you get the crash with the query cache enabled. If you can share your test case, please report it as a bug so developers can fix it.

    In your case the number of queries and the number of blocks in the cache are small, so changing query cache parameters is not expected to affect things – however, it may cause the bug not to trigger.

    Myself, I’m not a big fan of the query cache, and I would normally advise caching things in the application – this can be done much more efficiently, especially if there are many updates; query cache invalidation is too coarse. Using memcached is one of the most commonly used approaches, though other libraries are available as well.

  18. Alex says:

    We ran into exactly the same query cache problem recently, i.e. all queries hang (even simple ones) for no reason. According to MySQL, it’s caused by a design problem with the query cache when invalidating records (Peter; you can have a look at MySQL network ticket #9498 for details).

    As you did, we fixed it by disabling the query cache entirely. You could try re-enabling it with a smaller size (128MB or so) to see if the problem comes back.

  19. peter says:

    Alex,

    I’m afraid I can’t take a look at your issue any more, as I’ve left MySQL and am starting up my own MySQL consulting company right now, specializing in MySQL performance.

    Does it have any bugs associated with it? That would be interesting to check out.

    Generally the query cache requires locks for invalidation, and it can stall queries for the whole invalidation process – but if it stalls forever, that is a bug.

    In general, I would ask yourself why you run into this problem when there are sites with much higher load – how do they handle it? The answer probably is: most of them do not use the query cache, relying on more efficient caching methods.

  20. James Day says:

    The query cache uses a sequential search through a list of free blocks when a single query is being removed from the cache. Regularly consolidating the free space with FLUSH QUERY CACHE may or may not help; I haven’t benchmarked it.

    See bugs:

    http://bugs.mysql.com/bug.php?id=21074
    http://bugs.mysql.com/bug.php?id=21051

    Because of the time it takes to remove queries, it can be faster to have a small cache, and anyone going above 100M should be alert to the possibility that it becomes a performance penalty rather than an improvement. It’s actually worth being alert for that at any size, since it always adds a little overhead; the overhead just grows with the size.

    James Day
    Support Engineer, MySQL AB

  21. peter says:

    Thanks for the clarification, James.

    A sequential search through the list of free blocks on each query removal is a serious problem, especially as a single table update may remove a very large number of queries from the query cache.

    It is pretty interesting that this problem was only discovered recently. On the other hand, the query cache is mostly used by light-duty applications; serious applications use other caching techniques.

  22. James McKenzie says:

    Peter,

    I was wondering if you have any experience running MyISAM in a ridiculously HUGE environment. Right now I am working with a team of DBAs and developers to launch a MySQL implementation on a Sun E25K (72 x 1.2GHz dual-core procs with 200GB RAM). We are stuck with very poor insert rates, and I was hoping you would have some suggestions. The most we get out of a single mysqld process running on the server is around 700-900 rec/sec, and we are trying to find a way to increase the rec/sec and decrease CPU utilization.

    Any ideas?

    Thanks in advance,
    James
    System Admin way over his head with all this database stuff

  23. peter says:

    James, what you do not specify is how many concurrent threads you have, how many tables you insert into, and how large the data size is.

    A single thread will only use a single CPU, plus MyISAM has table locks, so inserts into the same table are always serialized.

    For in-memory data 800 records/sec is still low, but a lot depends on the complexity of your data – if you have a lot of indexes, this may well load one CPU fully.

  24. inanc says:

    hello,

    In our production environment several nodes insert into two tables, and that data is then transferred to the slave. We are currently using innodb and mostly getting lock wait timeout and deadlock problems. By switching to myisam, no locks occur on the tables, and the selects which summarize these write-intensive tables were freed up.

    I think myisam is the way to go for write-intensive, summarizing-read-intensive-on-slave apps.

  25. peter says:

    inanc,

    You oversimplify things – in your case you get deadlocks because your data inserts conflict in some form. MyISAM serializes inserts and so cures deadlocks, but it is not parallel. Which is better depends on the application and the amount of deadlocks you have.

    In general the approach is simple – test what works for you, think about what is important for you, and make a decision.

  26. inanc says:

    peter,

    I agree with you on some points. But with innodb we use repeatable-read concurrency with “update, and if no rows were updated then insert” kinds of transactions. At the same time a cluster of nodes is actively writing to the same tables, which are read by other cluster members. Whatever technique we tried, nothing was as successful as MyISAM, even though it does its job not in parallel (the inserts are faster and/or seem to be parallel).

    I am going to watch the mysql webinars about scaling out write-intensive applications; maybe they can point a path.

  27. peter says:

    inanc,

    Maybe Innodb does not work well for your particular task (set of queries), but that is not enough to say it does not work for a broad set of tasks :)

    You can use GET_LOCK() (for example) to serialize data access for Innodb, as many people do.

    Note that for writes MyISAM is quite often faster than Innodb, but for large data sets the recovery time can often be unacceptable.

  28. howa says:

    Hi,

    Some questions:

    1. Does InnoDB still suffer from poor performance when running on an SMP server?

    2. With the built-in default settings, MyISAM seems to perform better in most situations.

    3. If InnoDB corrupts, is it completely unrecoverable?

  29. peter says:

    1) Innodb does suffer, however MyISAM often suffers even worse problems… even for read-only workloads.
    2) Do not use default settings for real-world use.
    3) It is recoverable, though there is no REPAIR TABLE to make it trivial.

  30. howa says:

    Hi Peter,

    Thanks for the reply.

    2. Yes, but for example, MyISAM’s default insert speed is great, while InnoDB’s default insert speed is poor. (I have seen your InnoDB optimization presentation and done the experiments.)

    3. This is the main point; we are just afraid we will have a total loss of data, since we can’t even repair the table.

  31. Santiago says:

    Hi Peter!

    I’m from Argentina, sorry for my english.

    Can you give me any hints on converting all my MyISAM tables to InnoDB? The database is currently used in production; we have roughly 100 tables and 100,000 rows. The idea is to avoid site downtime.

    Thanks a lot for your book and your site, it’s a great help!

  32. peter says:

    Santiago

    Use replication: convert the slave to Innodb and then switch over.

  33. Santiago says:

    Peter, thanks very much. I did this:
    (Original Table = transferido. This is the MyISAM table).

    -- START SCRIPT
    USE mydatabase;
    CREATE TABLE transferido_inno (

    -- SOME FIELDS

    PRIMARY KEY (id)
    ) ENGINE=INNODB DEFAULT CHARSET=latin1;

    SET UNIQUE_CHECKS=0;
    START TRANSACTION;

    -- SOME MULTIPLE INSERTS KIND OF
    INSERT INTO transferido_inno(
    SELECT * FROM transferido LIMIT 0,100000
    );
    INSERT INTO transferido_inno(
    SELECT * FROM transferido LIMIT 100000,100000
    );
    -- ETC.

    COMMIT;
    SET UNIQUE_CHECKS=1;

    ALTER TABLE transferido RENAME TO transferido_myisam;

    ALTER TABLE transferido_inno RENAME TO transferido;
    OPTIMIZE TABLE transferido;

    -- END SCRIPT

    It worked really good.

    I started optimizing this database a month ago (my first real database challenge). This blog and your book (along with Pro MySQL) helped me very much. The site is now running really well. The server load decreased from 90% to 20% with some tuning, new indexes, and this new altered table (we needed MVCC desperately).

    Thanks a lot!

  34. Adam says:

    Hi Peter,

    I’m a web app developer and have been using MySql for a few years now, but I’m not a “database person”, so do not fully understand all the different types of tables available and when to use one in preference over the other. Once upon a time I decided that innodb would be best, instead of the usually-default MyISAM. That has worked well in general, but sometimes I do wonder if I could be using a better engine in some cases, especially because I commonly hear that “MyISAM is quicker”.

    Therefore, I realise it’s an old article now, but I wanted to say thanks for the original article that gives some insight into when MyISAM *could* be a better option.

    All the best.
    Adam

  35. Hans-Henrik Stærfeldt says:

    Good insights. However, I do prefer my statistical data to be correct :)

  36. Michael says:

    “3) It is recoverable though there is no REPAIR TABLE to make it trivial.”

    I’ve been burned once by people saying such things ..

    a hard drive filled up and crashed the database, taking down the WHOLE mysql instance (not just a couple of corrupt tables), and I could not find a way to quickly get the site up and running… it was down for a FEW DAYS.
    I had backups – with myisam I could have just dropped the tables and rebuilt them, and it could have been up and running within a few hours (and stuff not using the corrupt tables would have still worked anyway)

    I searched and searched and could not find a way to fix it… the only way I could get mysql to even start was to disable innodb,
    so ever since I have just stuck to myisam…
    where avoiding the risk of extended downtime is important, innodb is dangerous!

    but

    that still leaves me with the old problem of tables being locked much of the time on a busy website…
