August 28, 2014

The MySQL ARCHIVE storage engine – Alternatives

In my previous post I pointed out that the existing ARCHIVE storage engine in MySQL may not satisfy your needs when it comes to effectively storing large and/or old data. But are there any good alternatives? As the primary purpose of this engine is to store rarely accessed data in a disk-space-efficient way, I will focus here on data compression abilities rather than on performance.

The InnoDB engine provides a compressed row format, but is its efficiency even close to that of the ARCHIVE engine? You can also compress MyISAM tables with the myisampack tool, but that makes the table read-only.

Moreover, I don’t trust MyISAM or ARCHIVE when it comes to data durability. Fortunately, a fairly new player (open source since April 2013) has entered this field – TokuDB! It seems to provide excellent compression ratios, it is fully ACID compliant, and it has none of the limitations present in ARCHIVE, so its functionality is much closer to InnoDB. This may also let you store production data on SSD drives, where disk space still costs more than on traditional disks and might otherwise be too expensive.

To better illustrate the choices we have, I made a very simple disk-savings comparison of all the mentioned variants.
I used an example table with some scientific data fetched from here (no indexes):
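The post's screenshots of the exact commands and the table DDL did not survive, but the setup can be sketched against a hypothetical table; `measurements` and its columns are stand-ins for the real scientific-data table:

```sql
-- Hypothetical stand-in for the test table (the real DDL is not shown
-- in the post); note it has no indexes, matching the test setup.
CREATE TABLE measurements (
  station_id  INT NOT NULL,
  observed_at DATETIME NOT NULL,
  temperature DECIMAL(5,2),
  pressure    DECIMAL(7,2)
) ENGINE=InnoDB;
```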

ARCHIVE storage engine
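Converting to ARCHIVE is a single statement; a sketch using a hypothetical table name `measurements`, not the post's original command:

```sql
-- ARCHIVE only supports INSERT and SELECT; the conversion fails
-- if the table has indexes other than one on an AUTO_INCREMENT column.
ALTER TABLE measurements ENGINE=ARCHIVE;
```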

TokuDB engine, default compression

TokuDB engine, highest compression
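Both TokuDB variants can be expressed with the ROW_FORMAT clause; a sketch with a hypothetical table name `measurements`:

```sql
ALTER TABLE measurements ENGINE=TokuDB;                         -- default zlib compression
ALTER TABLE measurements ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;  -- highest compression
```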

(By the way, did you notice how the file name changed after altering with a different compression?
It no longer reflects the real table name, which is quite confusing :( )

InnoDB engine, uncompressed

InnoDB engine, compressed with default page size (8kB)

InnoDB engine, compressed with 4kB page size
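The compressed InnoDB variants can be sketched as follows (hypothetical table name `measurements`; the Barracuda file format and file-per-table tablespaces are prerequisites in 5.5):

```sql
SET GLOBAL innodb_file_format = Barracuda;  -- required for ROW_FORMAT=COMPRESSED
SET GLOBAL innodb_file_per_table = 1;

ALTER TABLE measurements ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;   -- default compressed page size
ALTER TABLE measurements
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4;   -- smaller pages, better compression here
```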

MyISAM engine, uncompressed

MyISAM engine, compressed (myisampack)
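Packing MyISAM happens outside the server, on the table's files; a sketch with hypothetical paths and a table name `measurements`:

```shell
# Make sure the table is flushed and closed first, then pack it;
# the table is read-only after packing.
mysql -e "FLUSH TABLES test.measurements"
myisampack /var/lib/mysql/test/measurements.MYI
# Rebuild the (empty) indexes for the packed table
myisamchk -rq /var/lib/mysql/test/measurements.MYI
```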

Compression summary table

Engine  | Compression                | Table size [MB]
------- | -------------------------- | ---------------
InnoDB  | none                       | 2272
InnoDB  | KEY_BLOCK_SIZE=8           | 1144
InnoDB  | KEY_BLOCK_SIZE=4           | 584
MyISAM  | none                       | 1810
MyISAM  | compressed with myisampack | 809
Archive | default                    | 211
TokuDB  | ZLIB                       | 284
TokuDB  | LZMA                       | 208
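If you cannot read the file sizes directly from disk, a rough comparison can also be pulled from `information_schema` (a sketch with a hypothetical schema name; for ARCHIVE and TokuDB the reported lengths are estimates):

```sql
SELECT table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'test';
```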

So the clear winner is TokuDB, leaving InnoDB far behind. But this is just one test – the results may be very different for your specific data.

To get an even better idea, let's compare several crucial features available in the mentioned storage engines:

Feature      | Archive      | MyISAM (compressed) | InnoDB | TokuDB
------------ | ------------ | ------------------- | ------ | -------
DML          | only INSERTs | no                  | yes    | yes
Transactions | no           | no                  | yes    | yes
ACID         | no           | no                  | yes    | yes
Indexes      | no           | yes                 | yes    | yes
Online DDL   | no           | no                  | yes *  | yes **

* – since version 5.6, with some limitations
** – supports add/drop indexes, add/drop/rename columns and expand int, char, varchar and varbinary data types
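With InnoDB since 5.6 you can request the online path explicitly, so the statement errors out instead of silently falling back to a blocking table copy; a sketch with a hypothetical table and index name:

```sql
ALTER TABLE measurements
  ADD INDEX idx_station (station_id),
  ALGORITHM=INPLACE, LOCK=NONE;
```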

Summary

TokuDB seems to be an excellent alternative when it comes to disk space usage efficiency, but that is perhaps not the only reason you should try it.

About Przemysław Malkowski

Przemek joined the Support Team at Percona in August 2012.
Before that he spent over five years working for Wikia.com (Quantcast Top 50) as a System Administrator, where he was a key person responsible for seamlessly building up the MySQL-powered database infrastructure. Besides MySQL he maintained all other parts of the LAMP stack, with a main focus on automation, monitoring and backups.

Comments

  1. Worth noting that TokuDB does not support foreign keys.
    On compression, it’s worth noting that ALTER TABLE into TokuDB with high compression is way faster than ALTER TABLE into InnoDB with mild compression.

    Last, though I perfectly understand the topic and cause of this post (finding an engine with good compression), calling TokuDB “an ARCHIVE alternative” is so amusing (“what? no ARCHIVE left on the shelf? Gosh, I guess I’ll have to do with the supported, faster, smaller, transactional, acid compliant, online-ddl, open source, cheap wannabe engine called TokuDB”)…

    Last-last, worth noting that a mixture of InnoDB & TokuDB requires careful memory allocation.

  2. Normann says:

    I’m waiting for a Percona Server 5.6.x with TokuDB GA to test it :-)

  3. Nice writeup/analysis, just a few clarifications about TokuDB:

    - Yes, file names are obfuscated after a slow alter operation, but you can always get a mapping of database/table to file name using information_schema.tokudb_file_map
    - TokuDB also supports online DDL for “expanding” integer and char/varchar/varbinary column types.
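    The file-name mapping mentioned above can be queried like this (a sketch; the database/table name is hypothetical):

    ```sql
    SELECT dictionary_name, internal_file_name
    FROM information_schema.tokudb_file_map
    WHERE dictionary_name LIKE './test/measurements%';
    ```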

    Lastly, strictly comparing raw compression without measuring its impact on performance can be misleading. Do you have performance numbers for this test?

  4. Justin Swanhart says:

    Shlomi,

    For some use-cases it is a MyISAM or an InnoDB replacement. For others, where FK are an absolute deal breaker, it can not be an InnoDB alternative.

    So discussing TokuDB in the context of ARCHIVE replacement does not mean it is only good as an ARCHIVE replacement. As you noted, we’ve talked about (and I imagine will continue to talk about) TokuDB in the context of many workloads.

    It should of course be noted that TokuDB does not support the WORM (write-once, read-many) property of ARCHIVE.

  5. Przemysław Malkowski says:

    @Shlomi, @Justin, thank you for the valuable notes. Indeed, I didn’t want to go into much detail, since this post was meant to focus mostly on the compression aspect.

    @Tim, thank you for the clarifications; however, I could not find the details about online expanding of data types in TokuDB – could you point me to the documentation?
    Unfortunately I did not take notes of the performance numbers for this particular test, as that was not the goal of this post, but I can at least confirm that altering to TokuDB was much faster than to compressed InnoDB. Besides, there were already posts on that here, like: http://www.mysqlperformanceblog.com/2013/08/29/considering-tokudb-as-an-engine-for-timeseries-data/

We refer to it as “hot column expansion” and it’s covered in section 3.4 of our documentation (we are currently reworking our documentation, after which I’ll be able to give a URL to the exact section); for now I’ll just paste in the relevant section text:

    “Hot column expansion operations are only supported to char, varchar, varbinary, and integer data types. Hot column expansion is not supported if the given column is part of the primary key or any secondary keys.”

  7. Przemysław Malkowski says:

    @Tim, thank you, I’ve updated the post accordingly.
    Btw. the ability to online expand int to bigint of an auto_increment PK would be awesome to have.

  8. Justin Swanhart says:

    It would have been nice if InnoDB had exposed the “change log” for its ALTER TABLE as an open interface in the server that any engine could use. As it stands, TokuDB could probably create a changelog table similar to the FlexCDC log tables, then apply the changes after the ALTER completed. New transactions might have to read from the log too. This would be very flexible and similar to the InnoDB change buffer.

  9. What can be the reason for storing data both compressed and on SSD? If you store data compressed, probably, you don’t need it very often (because decompression will take CPU cycles). So, you can store it on a slower storage like regular HDD, right?

  10. Nils says:

    You’d probably want to check out the TokuDB documentation, especially tokudb_cache_size if you just want to use it for archiving, as it would reserve half of physical memory otherwise.
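    A my.cnf sketch for capping the TokuDB cache on an archive-only instance (the 1G value is just an example; the default is 50% of physical RAM, and the variable is not dynamic, so a restart is needed):

    ```
    [mysqld]
    tokudb_cache_size = 1G
    ```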

  11. Przemysław Malkowski says:

    @Vladislav, TokuDB uses compression by default and according to Tokutek, you don’t have to make a compromise between speed and disk space savings. So my point was that this engine may be a good choice not only for archiving old data, but also for usual workloads. Check out this for instance: http://www.tokutek.com/2012/09/three-ways-that-fractal-tree-indexes-improve-ssd-for-mysql/

    @Nils, you are right, and I expect everyone at least reads documentation before using a new toy in a production environment ;)

  12. Rick James says:

    Which 5.x version did you measure against? Oracle recently made some significant improvements in InnoDB compression, both in speed and compressibility.

    MariaDB already includes TokuDB, so you don’t need to wait for Percona. https://mariadb.com/kb/en/mariadb-5536-release-notes/

  13. @RickJames, MySQL 5.6 can reduce the performance hit of compression via dynamic padding, but keep in mind that any padding of the 16K block itself will eat into the InnoDB cache and thus mean fewer cache hits. It’s a trade-off. Also, the only improvement to compressibility that I’ve seen is that you can now define the zlib level, but increasing it will increase CPU consumption and likely lower performance, again a trade-off.

    As for TokuDB’s availability in MariaDB right now, that is true. There is partial TokuDB functionality in MariaDB 5.5, and full functionality in MariaDB 10. See their knowledge base for more information at http://mariadb.com/kb/en/tokudb-differences/

  14. Przemysław Malkowski says:

    @Rick, this was on InnoDB 5.5.30 and TokuDB 7.1.0. The same table on InnoDB 5.6.16 with innodb_compression_level=6 (default):
    1072MB with KEY_BLOCK_SIZE=8
    592MB with KEY_BLOCK_SIZE=4
    Changing compression level to 9 did not change anything in result size in this case.

  15. Paul Kamp says:

    @Przemysław, nice write up. Have you looked at other database engines and how they compare across a number of different metrics? InnoDB and MyISAM are fairly mature.

    I started working with WiredTiger recently and they are showing some interesting results with their preliminary testing with MySQL when compared to InnoDB and LevelDB.

    http://wiredtiger.com/products/performance/

    I’d like to learn a bit more about how performance dynamics may change across applications and functions. What have you seen?
