Jay Pipes continues his cache experiments and has compared the performance of the MySQL Query Cache and a file cache.
Jay uses Apache Benchmark to compare the full stack, cached or not. This is realistic but could paint a misleading picture, as the contribution of different components varies depending on your unique application. For example, for an application containing a lot of code but only a couple of queries to MySQL, parsing may be the performance bottleneck, assuming a PHP opcode cache is not used. Also, different applications may have different cache hit ratios, which also need to be factored in when estimating the improvement for a real application.
So instead of following his route, especially as Jay is going to publish his comparison of all caches anyway, I decided to check the peak performance of each cache compared to MySQL Server by measuring just the time it takes the cache to return the data. In real-life applications performance is likely to be lower due to less efficient CPU cache usage, larger object sizes and other reasons.
So what does my test do? We simply perform 100,000 get requests against a cache which was previously populated with a value, and measure how long it takes.
The test was done on my home test box (2GHz AMD Sempron CPU) using MySQL 4.1 and PHP 5.0. For memcached access, the memcache extension from PECL was used. All applications were running on the same system.
I used two baselines for comparison. The first is the speed of a PHP associative array. This shows the kind of peak speed possible at all. Furthermore, this type of caching is rather helpful for some applications, which tend to access the same data read from the database multiple times. Examining full MySQL query logs from many applications, seeing several exactly identical queries executed during a single page load is not an exception. For caching these, an associative array can be considered.
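This kind of per-request caching can be sketched in a few lines. The `cached()` helper and its names are illustrative, not from the benchmark code; the loader stands in for whatever expensive work (usually a query) produces the value.

```php
<?php
// Minimal per-request cache sketch: the first lookup for a key runs the
// expensive loader (e.g. a database query); later lookups during the same
// page load come straight from a plain PHP associative array.
$cache = array();

function cached($key, $loader)
{
    global $cache;
    if (!array_key_exists($key, $cache))
        $cache[$key] = $loader();   // variable-function call: loader runs only once
    return $cache[$key];
}
?>
```

Any repeated identical lookup during a page load then costs an array access instead of a round trip to MySQL.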
The second baseline was selecting from a MySQL table. The query was very simple – a lookup by primary key – so this is the kind of peak performance MySQL can provide. Of course, for your real queries the cost of database access will normally be larger.
The results I got from my environment are:
Cache Type                         | Cache Gets/sec
-----------------------------------|---------------
Array Cache                        | 365,000
APC Cache                          |  98,000
File Cache                         |  27,000
Memcached Cache (TCP/IP)           |  12,200
MySQL Query Cache (TCP/IP)         |   9,900
MySQL Query Cache (Unix Socket)    |  13,500
Selecting from table (TCP/IP)      |   5,100
Selecting from table (Unix Socket) |   7,400
Note: The test measures peak performance, so I did not do much error control or take other precautions, like string escaping, which you will need in a real application. For the same reason, you should make sure all caches are actually working while testing – for example, you may need to set apc.enable_cli=1 if you're running the script from the command line, otherwise the APC cache will not work and the results will be wrong.
So what about the results, and how can we use them for MySQL performance tuning? Not surprisingly, the associative array cache performs best – almost 4 times faster than the APC shared memory cache, its closest competitor. In real life the difference can be even larger, as there will be some synchronization contention while accessing the shared memory cache, which does not happen in this test.
The file cache really does great, even though it is over 3 times slower than APC. The catch with the file cache is that there are actually two very different cases: when the cached data set fits well in memory and so is served from the OS cache, and when it does not. If it fits, APC will perhaps give you better performance. If it does not fit well in the cache, you will get disk IO, which is very slow compared to all in-memory caches, and you might be better off storing your data on the network with memcached.
Memcached performs worse than the file cache (even though it is running on localhost in this case) – of course copying data from the OS cache is going to be faster than retrieving it via a TCP/IP socket. It is however very interesting to compare it to the MySQL Query Cache: memcached is faster than the Query Cache over TCP/IP, but if a Unix socket is used to connect to MySQL, the Query Cache is faster. The explanation is pretty simple: in both cases the logic is just to get the result from the cache and ship it back, so the large overhead of the TCP/IP protocol compared to a Unix socket plays the critical role. When deciding which cache to use, however, I would not forget the other benefits of memcached – distributed caching, support for time to live (so the cache is not invalidated with each update) and the ability to cache composed objects which may correspond to multiple MySQL queries.
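As a sketch of those last two benefits, one cache entry can hold a composed object that would otherwise take several queries to rebuild, with a TTL so updates need not invalidate it explicitly. The key name and page structure here are hypothetical; in production `$backend` would be a `Memcache` instance connected to memcached, using the same `get()`/`set($key, $value, $flags, $ttl)` API as in the benchmark code.

```php
<?php
// Sketch: cache a "composed" object (normally built from several MySQL
// queries) as a single entry with a 300-second TTL. $backend is anything
// with Memcache-style get()/set() methods.
function get_front_page($backend)
{
    $key = 'page:front';                  // hypothetical key
    $page = $backend->get($key);
    if ($page === false) {
        // would normally run several queries; stubbed here for illustration
        $page = array('news' => array('headline'), 'user_count' => 42);
        $backend->set($key, $page, 0, 300);   // flags=0, expire in 300 seconds
    }
    return $page;
}
?>
```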
On selecting from the table: I should note MySQL is rather fast at selecting from the table as well – on this pretty low-end box we're getting over 7,000 queries/sec, and that almost doubles if result sets are served from the query cache. Pretty impressive.
So what would my recommendations be about using these caches for your application?
Cache per-script data in an associative array. If you have some data read from the database which you need several times during page creation, cache it locally – do not depend on any other type of cache.
Use the APC cache for single-node applications. If you have just one web server, or your data set is small enough to fit in memory on each server, the APC cache may be the most efficient way for you to cache data.
Use memcached for large-scale applications. If a local node cache is not large enough to hold a good amount of data, caching on the network using memcached is a very good way to go.
The file cache is good for long-term storage. If you need something stored long term, or something which needs to be cached but does not fit even in distributed memory, you can use the file cache (e.g. on shared storage).
The query cache is good when there is no other cache. If you do not do any other caching for a certain object, or if you cache on a different level (e.g. a single object constructed from multiple query results), the MySQL Query Cache may improve the performance of your application.
Multiple layers of caching may do well. Just as CPUs have multiple layers of caching, and the same data may be stored in the OS file cache and then the SAN cache, you may implement multiple levels of caching for your application. In different circumstances different layering makes sense. For example, you might wish to use the APC cache as an L1 cache and the file cache as an L2 cache if you have a large amount of data in a long-term cache. If you need something like this, you might take a look at eAccelerator, an APC alternative which supports caching user data both on disk and in shared memory.
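A minimal sketch of such layering, assuming APC as L1 and serialized files as L2. The key hashing via md5() and the flat directory layout are my illustrative choices, not from the post, and a real implementation would need TTL handling for the file layer too. Note a stored boolean false would be indistinguishable from a miss in this simple version.

```php
<?php
// Two-layer cache sketch: APC (fast, per-node shared memory) as L1,
// one serialized file per key as L2 (larger, long-term). Degrades to
// file-only when the APC extension is absent.
function cache_get($dir, $key)
{
    if (function_exists('apc_fetch')) {
        $v = apc_fetch($key);
        if ($v !== false)
            return $v;                          // L1 hit
    }
    $file = $dir . '/' . md5($key) . '.cache';
    if (!file_exists($file))
        return false;                           // full miss
    $v = unserialize(file_get_contents($file)); // L2 hit
    if (function_exists('apc_store'))
        apc_store($key, $v, 3600);              // promote to L1
    return $v;
}

function cache_put($dir, $key, $value)
{
    if (function_exists('apc_store'))
        apc_store($key, $value, 3600);
    file_put_contents($dir . '/' . md5($key) . '.cache', serialize($value));
}
?>
```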
Appendix:
If you would like to repeat my benchmark or experiment with more caches, here are the source files and other requirements:
1) For the file cache to work you need a file named "test" containing "MyTestString".
2) You need to create the table test.test for the MySQL cache to work:
```sql
CREATE TABLE `test` (
  `k` varchar(60) NOT NULL default '',
  `val` varchar(255) default NULL,
  PRIMARY KEY (`k`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

INSERT INTO `test` VALUES ('test','MyTestString');
```
3) You need all caches to be working, i.e. query cache enabled, memcached running, APC enabled for CLI mode, etc.
Main PHP File:
```php
<?php
require_once "global.php";

/* Array Implementation */
$arr = array();
$arr[$key] = $data;

function cache_array()
{
    global $arr;
    global $key;
    return $arr[$key];
}

echo("Array Cache ");
benchmark("cache_array");

/* APC Implementation */
$rc = apc_store($key, $data, 3600);

function cache_apc()
{
    global $key;
    return apc_fetch($key);
}

echo("APC Cache ");
benchmark("cache_apc");

/* File Cache benchmark */
/* It assumes the file is already created and contains the data we need */
function cache_file()
{
    global $key;
    return file_get_contents($key);
}

echo("File Cache ");
benchmark("cache_file");

/* Memcached Implementation */
$memcache = new Memcache;
$memcache->pconnect('localhost', 11211);
$memcache->set($key, $data, 0, 3600);

function cache_memcached()
{
    global $key;
    global $memcache;
    return $memcache->get($key);
}

echo("Memcached Cache ");
benchmark("cache_memcached");

/* MySQL Query Cache Implementation */
function cache_mysql_qc()
{
    global $key;
    global $mysqli;
    $r = $mysqli->query("select val from test.test where k='$key'");
    $row = $r->fetch_row();
    if ($row)
        $ret = $row[0];
    $r->close();
    return $ret;
}

$mysqli = new mysqli('127.0.0.1', 'root');
echo("MySQL Query Cache (TCP/IP) ");
benchmark("cache_mysql_qc");
$mysqli->close();

$mysqli = new mysqli('localhost', 'root');
echo("MySQL Query Cache (Unix Socket) ");
benchmark("cache_mysql_qc");
$mysqli->close();

/* MySQL Direct Table Implementation */
function cache_mysql_table()
{
    global $key;
    global $mysqli;
    $r = $mysqli->query("select sql_no_cache val from test.test where k='$key'");
    $row = $r->fetch_row();
    if ($row)
        $ret = $row[0];
    $r->close();
    return $ret;
}

$mysqli = new mysqli('127.0.0.1', 'root');
echo("MySQL Table (TCP/IP) ");
benchmark("cache_mysql_table");
$mysqli->close();

$mysqli = new mysqli('localhost', 'root');
echo("MySQL Table (Unix Socket) ");
benchmark("cache_mysql_table");
$mysqli->close();
?>
```
global.php file with benchmark function:
```php
<?php
$key = "test";
$data = "MyTestString";
$rounds = 100000;

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return (((double)$usec + (double)$sec) * 1000000.0);
}

function benchmark($func)
{
    global $rounds;
    $t1 = microtime_float();
    for ($i = 0; $i < $rounds; $i++)
        $func();
    $t2 = microtime_float();
    $t = $t2 - $t1;
    $persec = ($rounds / $t) * 1000000;
    echo("Time: $t Gets/Sec: $persec\n");
}
?>
```
Enjoy 🙂
Excellent stuff! Thanks for also sharing your PHP code!
Can we assume, based on the above results, that storing static data directly on the hard drive is better than storing it in MySQL, even if MySQL has the query cache enabled?
I am thinking of a situation such as a search engine which stores its chunked bitvectors as LZF-compressed blobs in a MySQL table.
Would it then be better to store those bitvectors in the filesystem, such as "/path/[chunkid]/[wordid]/vec.txt", instead of a MyISAM table, for instance, which gets the data with "SELECT vector FROM t1 WHERE wordid = xxx AND chunkid = yyy;"?
I mean, I know it's better to store, say, GIF files in the filesystem than in MySQL, but this case is something in between (and I think there can be many more similar cases, especially given the above results)…
Apachez,
Absolutely. Storing data in the database will be slower than storing it in a file, as it adds an extra layer and extra processing. The Query Cache eliminates a lot of the overhead, but not all of it – there will still be plenty of copying, especially for blobs as in your case. Plus, if you use the standard protocol, blobs need to be unescaped on the client. The binary protocol and prepared statements do not need this… but they do not support the query cache either.
This is why pretty much all major search engines store index data directly on disk, not in a database.
One bad thing might be that storing it as plain files (as "/path/[chunkid]/[wordid]/vec.txt" for example) could hit the max open files limit which exists in many operating systems, while storing it as blobs in MySQL would only use 3 file handles (or however many MySQL uses per MyISAM table).
Another thing is that many filesystems become somewhat "unhappy" if they suddenly get something like 100,000 files or more in a single directory.
A third thing is that by storing in MySQL you can have relations between the tables and use an inner join for something like:
SELECT t2.searchvector FROM t1 INNER JOIN t2 ON (t1.wordid = t2.wordid) WHERE t1.searchword = ‘abc’;
which I could imagine might compensate for the overhead that storing in MySQL otherwise carries…
Something I would like to see on this blog (if I may make a wish :P) is how one can optimize searches using LIKE on a table of single words.
In my case my search engine has extracted all unique search words into their own searchword table. Searching with "WHERE searchword = 'abc'" is no problem, along with "WHERE searchword LIKE 'abc%'", but how to boost searches which look like "WHERE searchword LIKE '%abc%'"?
I tried to create a reversed table, but that just boosts "LIKE '%abc'" searches (by changing them into "LIKE 'cba%'"); "%abc%" is still tricky, I think… Since it's a single word in each row, I don't think MySQL fulltext will be useful in this case… any ideas which might be a blog topic? 🙂
Apachez,
With your ad-hoc implementation you do not have to use a file per chunk – you could implement your own space allocator which uses large files and which suits your application. You can even reuse code from the MyISAM storage engine if it fits your needs – it is GPL. You can also select a filesystem of your choice – I played with ReiserFS, testing 3,000,000 files in the same directory; it worked great and file open/close speed was just great.
Relationships between data, and being able to run non-trivial queries on it – this is where a DBMS comes into play. You can still always implement things more efficiently in your specific application than a general-purpose database can. Databases are used because they save tremendously on development costs and offer good enough performance.
It is similar to why many new developments are done in Perl, PHP, Ruby or Java – you can typically get faster code using C/C++, yes, but it costs more to develop, while the other languages offer good enough performance.
Speaking about LIKE '%abc%' optimization – what you need is a substring index, which is typically stored in memory as a special kind of tree. It will be best for selective LIKE statements.
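As an illustration of the idea (not code from the post), a toy in-memory substring index can be built from trigrams. A real structure would be a tree, but the principle is the same: candidates come from an index lookup, then get verified, instead of scanning every word. This sketch assumes the needle is at least 3 characters long.

```php
<?php
// Toy substring index: map every trigram (3-character substring) to the
// list of words containing it, so a LIKE '%abc%'-style lookup becomes an
// array lookup plus a verification pass instead of a full table scan.
function build_trigram_index($words)
{
    $index = array();
    foreach ($words as $w) {
        for ($i = 0; $i + 3 <= strlen($w); $i++)
            $index[substr($w, $i, 3)][] = $w;
    }
    return $index;
}

function substring_search($index, $needle)
{
    // use the needle's first trigram to narrow candidates (needle >= 3 chars)
    $t = substr($needle, 0, 3);
    $candidates = isset($index[$t]) ? $index[$t] : array();
    $hits = array();
    foreach ($candidates as $w)
        if (strpos($w, $needle) !== false)   // verify the full substring
            $hits[] = $w;
    return array_values(array_unique($hits));
}
?>
```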
Yeah, that's what I like about my current implementation: just Perl and MySQL are needed – no additional mumbo jumbo elsewhere 😛
But just exchanging that second lookup for the vectors into looking for them on disk wouldn't be that huge a change in the source code.
So hopefully I will try this shortly (more like within weeks than within days) and get back with results.
Regarding '%abc%' optimization, I tried to create a permuted table (for example the word "abcd" is stored as "abcd", "bcd", "cd" and "d" in this permuted table), but that didn't give me the boost I was hoping for. More info at http://forums.mysql.com/read.php?24,107503,107522#msg-107522
Yes, such a modified table is yet another approach. It does however increase data size dramatically, which is yet another reason why it could be slower, especially with a large number of keywords. You did not however do an exact comparison – as you mentioned, you had to add another GROUP BY…
Also, think about whether you really need to move away from MySQL with your search engine – is it worth it? To what scale are you expecting to use it? I guess with moderate data sizes it has reasonable speed already. With large data sets, for example 100,000,000 documents, the bit vectors will become so huge that this search method will stop being efficient anyway 🙂
I would be curious anyway, however.
The search engine uses chunked bitvectors where each chunk is 10 kbit (1250 bytes, which compressed by LZF averages 33 bytes). So the data in the searchvector table is "wordid, chunkid, searchvector".
There is also a bitvector for each search word which tells the engine which chunks the word exists in. This way a "pre-evaluation" is performed to find out whether the search words the client has sent all exist in the same chunk or not. If they don't have any chunk in common, the search is halted and the engine reports "0 hits".
Currently it is only indexing the forum of http://www.tbg.nu, which today has approx 1.07 million posts. The engine itself is currently available at http://www.tbg.nu/cgi-bin/news_search.cgi and works like Google (AND search), with the addition that it also supports wildcards. There is also a description available at http://www.tbg.nu/tbgsearch, but that is not up to date (will fix that once I have fixed live update of the search index; currently I use a manual chunk-based update "perl tbgsearch_insert.pl "). Will most likely fix this by using some kind of "collection" table which stores metadata such as which docid was last indexed, when it was last indexed, etc…
What I have found is that wildcard searches are the ones which are somewhat slow (usually more than 5 seconds to complete – especially those with a leading wildcard; probably that's why Google doesn't support wildcards? :P). So that's why I'm searching for a way to optimize "LIKE '%abc%'" searches against the word table.
This is quite off-topic from the cache performance comparison discussion, but anyway:
We're now working on a full-text search engine comparison for MySQL, testing MySQL's built-in full-text search, Senna, Sphinx and Lucene, indexing the Wikipedia database and performing appropriate queries. If you're interested we can give tbgsearch a try as well. Surely it lacks any relevance ranking at all, but it is still OK for some applications.
Also, I have a little database with over 150 million posts which I could use to help you test tbgsearch.
I will set up a small readme and send you the files over email during the weekend.
Is your email, by the way, available somewhere on this blog?
Yes, I just sent you an email 🙂
I have just emailed you a reply 🙂
In case someone else is curious the code is available at http://www.tbg.nu/tbgsearch/tbgsearch.zip and the readme with information is available at http://www.tbg.nu/tbgsearch/readme.txt
The "root" page at http://www.tbg.nu/tbgsearch is not yet updated, but I will take care of that in the next couple of weeks.
In case someone has suggestions or improvements then you can contact me on the email address which is available in the bottom of each page at http://www.tbg.nu or through Peter 🙂
Hi, why didn't you test memcached over a Unix socket?
First, because the version I tested did not support them. I've seen talk about adding such support on the mailing list, but I'm not sure if appropriate versions of memcached and its PHP API are available.
Second – I do not think it makes much sense. Memcached should be used for _distributed_ memory caching. If you're running it on localhost, you're using the wrong solution. A correctly implemented shared memory cache will be faster than memcached even if the connection is made via a Unix socket, as it does not require a context switch to another process, etc.
File caching will fail at higher concurrency levels.
Sergey,
Why would it? You just need to make sure to provide adequate locking so a file is not being read at the same time it is being written. Even if locking is not available, you can create temporary files and do an atomic rename.
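The atomic-rename approach can be sketched as follows (the function name is illustrative). Writing to a temporary file in the same directory and then renaming it over the real cache file means readers see either the old or the new contents, never a partial write, since rename() is atomic on POSIX filesystems within one filesystem.

```php
<?php
// Lock-free file cache write: write new contents to a temp file in the
// same directory, then atomically rename it over the real cache file.
function file_cache_write($file, $data)
{
    $tmp = tempnam(dirname($file), 'cache');  // temp file on the same filesystem
    if (file_put_contents($tmp, $data) === false)
        return false;
    return rename($tmp, $file);               // atomic replace, no locking needed
}
?>
```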
I've just compared APC with memcached using Unix sockets. Running in VMware, APC reached around 100,000 gets/sec and memcached around 1/5 of that. It makes sense because memcached is lockless! I'm having big performance problems when caching lots of data with APC…
Which problems do you have with APC ?
I have seen APC simply lock up for some usage patterns while trying to save stuff to the cache. This however seems like some kind of bug rather than a design issue. It has stayed unfixed for too long, though.
I was heavily caching using APC (~200k items, 300MB size, up to 250 page-req/sec, several APC gets per page), and after restarting the server (clearing the cache), CPU usage was rising instead of falling… after removing the code that generated more than 50% of the items, CPU usage was normal.
So I suspected some algorithm used by APC might get slow with so many items, and contacted Rasmus Lerdorf directly (who else should know if he doesn't) 🙂 He answered:
“I personally wouldn’t put more than a dozen or so things in the user cache. Lock contention is going to kill you at the levels you are putting in there. You need to rethink what you are doing.”
That's why I tried memcached… the production server does ~25k memcache gets (Unix socket)/sec, compared to ~100k APC gets/sec.
After a few hours, it seems that very little has changed (load/CPU usage) (I'm starting to think my code somehow didn't work well)… but I'm confident that memcached _can_ handle all the caching I throw at it (better than APC).
I took your files and ran new tests on a Dell PowerEdge 1950:
2 GB of memory, 2 SAS disks of 140 GB in RAID 1.
I obtained the following results:
Array Cache Time: 146546 Gets/Sec: 682379.594121
eaccelerator 0.9.5 Time: 384131 Gets/Sec: 260327.856903
apc Cache 3.0.12 Time: 538942 Gets/Sec: 185548.723239
File Cache Time: 1755910 Gets/Sec: 56950.5270771
I added eAccelerator:

```php
/* eaccelerator Implementation */
eaccelerator_put($key, $data, 3600);

function cache_eaccelaror()
{
    global $key;
    return eaccelerator_get($key);
}

echo("eaccelerator Cache ");
benchmark("cache_eaccelaror");
```
Standard config in php.ini for APC and eAccelerator:

```ini
extension="eaccelerator.so"
eaccelerator.shm_size="160"
eaccelerator.cache_dir="/tmp/"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
eaccelerator.keys = "shm"
eaccelerator.sessions = "shm"
eaccelerator.content = "shm"

apc.enabled=1
apc.shm_size=250
apc.optimization=0
apc.num_files_hint=1000
apc.gc_ttl=3600
apc.cache_by_default=1
apc.file_update_protection=2
apc.gc_ttl="1M"
apc.ttl=0
apc.enable_cli=0
```
config:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5130 @ 2.00GHz
stepping : 6
cpu MHz : 1995.060
cache size : 4096 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
bogomips : 3993.56
Linux version 2.6.16-2-686 (Debian 2.6.16-16bpo1) (gcc version 3.3.5 (Debian 1:3.3.5-13))
Laurent,
Thank you for taking the time to run the benchmark. So eAccelerator looks faster – interesting.
What about XCache? It would be interesting to see its numbers as well.
Hi,
for XCache I am waiting for the XCache PHP API in order to test it:
http://trac.lighttpd.net/xcache/wiki/XcacheApi
but at the moment it is not possible to store resources, callbacks or objects using the xcache_* functions in PHP.
I tested multiple configurations of eAccelerator.
original: eaccelerator.compress="1" and eaccelerator.content = "shm"
eaccelerator 0.9.5 Time: 384131 Gets/Sec: 260327.856903
without compression: eaccelerator.compress="0" and eaccelerator.content = "shm"
eaccelerator Cache Time: 361963 Gets/Sec: 276271.3316
no great difference in performance between compress="0" and compress="1".
without compression: eaccelerator.compress="0" and eaccelerator.content = "disk_only"
eaccelerator Cache Time: 363953 Gets/Sec: 274760.752075
no great difference in performance between disk_only and shm.
Latest tests:
1 . Array Cache Time: 140287 Gets/Sec: 712824.424216
2 . eaccelerator Cache Time: 364028 Gets/Sec: 274704.143637
3 . apc Cache Time: 536591 Gets/Sec: 186361.679566
4 . File Cache Time: 1765141 Gets/Sec: 56652.6979998
5 . Memcached Cache Time: 3511784 Gets/Sec: 28475.5554442
In the end, a good surprise from eAccelerator.
Laurent,
I think there is something wrong with the test – maybe disk_only does not work, or it is still cached.
You can strace the benchmark script to see if it really reads from the file every time. I smell something fishy here.
By the way, APC can also be used in two modes – shared memory and mmapped cache. Which one did you use?
To be exact, you must write
eaccelerator.keys = “disk_only”
eaccelerator.sessions = “disk_only”
eaccelerator.content = “disk_only”
and not just eaccelerator.content = "disk_only".
The results are:
Array Cache Time: 150292 Gets/Sec: 665371.410321
eaccelerator disk_only Cache Time: 1724556 Gets/Sec: 57985.9395694
File Cache Time: 1769136 Gets/Sec: 56524.7668919
Memcached Cache Time: 3515025 Gets/Sec: 28449.2997916
I use APC with mmap.
Looks right – eAccelerator disk-only is close to the file cache.
The only thing I should note: eAccelerator stores everything in a single directory, which may cause problems if you are storing very many keys.
To store stuff in the disk cache, eAccelerator creates sub-directories under the principal directory:
/tmp/eaccelerator
/tmp/eaccelerator/0
/tmp/eaccelerator/1
/tmp/eaccelerator/etc….
Thanks Laurent,
Maybe it has been fixed now. I looked into it about a year ago and I ended up with 10,000+ files in the same directory, which is not much fun.
Is there any sense/performance gain in using the MySQL MEMORY engine for cache data?
Dmitry,
You may cache some result sets in MEMORY tables via INSERT…SELECT, but for a general object cache I think memcached or other solutions are much better. Do not forget MEMORY tables do not even support dynamic-length rows.
Well, currently there is a mysqlnd branch which implements a TTL query cache. It's however alpha and not well tested, but I have seen 6x-10x speedups (in PHP), not from ab/http_load. The cache is currently per process. Cache access is instant because a fetch from the cache doesn't need to allocate memory (emalloc()) for the result set – the result set is already there. Also the zvals (the Zend VALue containers), which are the PHP variables, have already been created. But once again, it's experimental, proof-of-concept code.
Andrey,
When you say the cache is per process, you mean each Apache child has its own cache, right? So the same queries in other PHP requests handled by the same Apache process will be able to use the cache?
How do you handle transactional context, and how do you specify the TTL for a given query? Do you handle security properly (i.e. you have to be connected to the same database with the same user)?
What's an "Array Cache"? You mean a cache that's basically an array? How would you use this in a real-world PHP program – as an include file with records stored in an array? If so, I think pitting it against eAccelerator or the MySQL query cache is pathetic. Including a file and referring to array elements will be highly inefficient beyond, say, 20,000 array elements. That's why "WHERE some = value" is so optimized in databases.
The array cache is an array, you're right.
It is actually handy with some bad code which would otherwise issue the same SQL queries (or memcache requests) many times per page request.
I mainly included it for comparison.
Fair enough, if you wanted to include it for comparison. But did you at least make sure that even from the array, the values you retrieve are randomized? If you go deep into an array of 10,000 elements, you'll find that the performance is not amazing, unless the array is stored in memory, which it is not unless you are using eAccelerator or something. Which further makes me wonder – if you're running this test on a machine with all those caching tools installed, how do you make sure that the array itself is NOT loaded by any cache?
The array is simply a PHP array which is read a number of times during a single script execution.
PKHUnter: The array cache is not meant to be persistent. It's not stored anywhere; it works in one script only, so it's always in memory.
Thanks for sharing your performance results. I'm going to try the file-based CacheEngine from Jay in my first PHP program. It will be database-intensive, so the cache should help a lot.
What a great post – I came across this looking for caching benchmarks. I just rebuilt my home server; it's running WordPress/MySQL 5/PHP 5, now utilizing Varnish for HTTP acceleration, the MySQL Query Cache (Unix socket) on the database, and the WP-Cache plugin for WordPress. Seems I need to throw eAccelerator into the loop, and I should be all set! Thanks for the post, and thanks to Laurent for all the detail on using eAccelerator.
Great thread!
In the original post (using a different machine) we can see that
array performance = 13.5 × file performance
array performance = 30 × memcached performance
however, in Laurent's latest posting (#28),
array performance = 0.08 × file performance
array performance = 0.04 × memcached performance
i.e. memcached is 2 times faster than any of its closest competitors (file, eaccelerator).
Is it only the difference in machines that is making these huge performance differences?
What is making memcached go so much faster?
Mauricio,
You seem to have missed it – the gets/sec figure, which is the performance, is the second number. In comment #28 eAccelerator is measured in disk_only mode, which makes it very close in speed to the file cache.
Hi Peter!
I just had someone reference this article so I thought I would leave some notes.
1) Why put TCP and Unix socket records in the same table? This leads people to draw wrong impressions (in this case with memcached and MySQL).
2) Why didn't you use prepared statements?
3) You are accessing MySQL via 127.0.0.1 – are you sure it used TCP/IP? MySQL short-circuits TCP/IP for local connections.
Cheers,
-Brian
Brian,
This is an old post. Sure, it could be presented better, and I'd do more benchmarks now too. Indeed, using prepared statements, as well as trying the mysqlnd driver for PHP, could be done now. I'm quite sure we used TCP/IP for the test where it is stated so – otherwise you would not get similar results.
What I really would do now is a local test (via Unix socket) and a network test via TCP/IP; this would be more relevant.
I would like to see some concurrent testing results; these are most probably single-process, serial tests. It would be nice to see the performance of a few tests with parallel execution of 2, 4 or 8 processes/threads on the client side, or multiple computers for the TCP-oriented tests.
Thank you for running benchmarks and for the COOL article!
There is one thing I don’t understand, however: What is an “APC Cache” ?
APC is an extension for PHP which does opcode caching but can cache user objects as well.
I have done some performance measurements on a system that is closer to live production. I'm not sure about the hardware characteristics of the server (certainly not outdated). The results are lower, which is normal for a server that is in live production and under load.
These are the results I got:
———————————————————————–
Object cache – Iterative execution of object property getter 244908 req/s
eAccelerator – Iterative execution of $cache->load() method 45681 req/s
MemcacheD – Iterative execution of $cache->load() method 1989 req/s
Query cache: 806 req/s
Query with caching disabled: 579 reqs/s
Does anyone have numbers from a real distributed server architecture? Are there optimal performance figures to compare with, as some minimum to aim for when it comes to cache performance?
Thanks in advance.
Ivan
Hi Peter,
Great Blog !
I could not resist appreciating the blog.
Being a newbie, I have a question:
-> Could you suggest the best alternative to memcached for session clustering which would integrate seamlessly with older versions?
Hey,
why do you compare local caches (on the same machine) with remote caches (over the network)? I mean, your test is excellent, but you can't compare these two different cache types.
The remote cache is always slower! But when you save the cache files on the local machine and this machine is very, very slow (too many requests), then the remote machine can be a lot quicker even with the same kind of cache.
I was considering memcached versus APC/eAccelerator/XCache for a single server and believed that a network cache should be slower, but did not know by how much. I'm glad to have found this benchmark.