A few days ago I wrote about testing writes to many files and how this affects sequential read performance. I was very interested to see how it shows itself with real tables, so I put together a script and ran tests for MyISAM and Innodb tables on the ext3 filesystem. Here is what I found:

The fragmentation we speak of in this article is filesystem fragmentation, or internal table fragmentation, which affects full table scan performance. Not all queries are going to be affected the same way; for example, a point select reading a single page should not be significantly affected – i.e. you may not be affected as badly as we show here.

Benchmarks were done using a small script, driven by a simple shell wrapper.

The script creates a specified number of tables and does a specified number of inserts going to random tables. I used default MySQL settings for MyISAM (table_cache=64) and set innodb_buffer_pool_size=8G, innodb_flush_log_at_trx_commit=2, innodb_log_file_size=256M and innodb_flush_method=O_DIRECT for Innodb.
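The original script is not reproduced here, but as a rough sketch of the workload it implements – the database name, table layout, row counts and padding below are assumptions, not the actual benchmark code – it looks roughly like this:

#!/bin/bash
# Create N tables, spread M single-row inserts randomly across them,
# then time a full scan of every table.
DB=test
TABLES=100
ROWS=1000000

for i in $(seq 1 $TABLES); do
  mysql $DB -e "CREATE TABLE t$i (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, pad VARCHAR(255)) ENGINE=MyISAM"
done

# A real driver keeps one connection open rather than forking mysql per row;
# this loop only illustrates the random spread of inserts across tables.
for j in $(seq 1 $ROWS); do
  T=$(( RANDOM % TABLES + 1 ))
  mysql $DB -e "INSERT INTO t$T (pad) VALUES (REPEAT('x', 200))"
done

# Drop the page cache (as root) so the scans are IO bound, then time them.
sync; echo 3 > /proc/sys/vm/drop_caches
time for i in $(seq 1 $TABLES); do
  mysql $DB -e "SELECT COUNT(*), SUM(LENGTH(pad)) FROM t$i" > /dev/null
done

Judging by the output pasted in the comments below, the original driver was a PHP script, so treat this purely as an illustration of the workload shape.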

The tables were sized so they are considerably larger than the amount of memory in the box, so a full table scan will be IO bound.

As you can see from the MyISAM results (above), insert speed does not degrade that badly until going from 1000 to 10000 tables, even though table_cache was just 64. I expect this is because updating the index header (the most complex part of opening and closing a MyISAM table) can be done by the OS in the background, and flushing 1000 pages every 30 seconds is not a big overhead for this server configuration.

Going to 10000 tables, however, insert speed dropped 20 times. This could be because ext3 does not like so many files in a directory, or because random updates to 10000 distinct pages for index header updates, not to mention modification time updates, are a lot of overhead. During this last test the box felt really sluggish, taking 10+ seconds to respond to a command as simple as “ls” even though loadavg was about 1. In the process list I could see some single-value insert statements taking over 5 seconds… So it does not work very well.

Note: As I checked later, contrary to my expectation this filesystem was created without the dir_index option, the lack of which should add significant overhead for inserts with many tables.
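For reference, on ext3 you can check whether dir_index is enabled, and turn it on, with e2fsprogs; the device name below is just a placeholder, and the e2fsck pass needs the filesystem unmounted:

# Check which features are enabled on the filesystem
tune2fs -l /dev/sdb1 | grep -i features

# Enable hashed (b-tree) directory indexes and rebuild existing directories
tune2fs -O dir_index /dev/sdb1
e2fsck -fD /dev/sdb1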

The read performance, which is the main measurement for this benchmark, suffered as expected – with 10000 tables it was 40 times worse than with a single table! Looking at iostat I could see an average read size of just 4K, which means ext3 does a horrible job of contiguous allocation in this case. Note, however, that even 100 tables are enough to drop performance 20 times.
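If you want to check this yourself, extended iostat output shows the average request size; in classic sysstat output the avgrq-sz column is in 512-byte sectors, so a value around 8 corresponds to ~4K reads:

# Extended per-device statistics, refreshed every 5 seconds.
iostat -x 5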

Innodb in single tablespace mode showed the following results:

As you can see, insert speed starts slower but degrades less, even though the drop from 1000 to 10000 tables is dramatic as well. Read speed is also slower (expected, as the table was larger for the same number of rows), though it drops at a different rate. Interestingly enough, it dropped just 2 times and was about the same for 10, 100 and 1000 tables, which could be because of extent allocation for rather large tables. For 10000 tables we had just 1000 rows of about 4K each per table, which caused too much space to be allocated as single pages. I expect that if we used a larger number of rows, read performance for 10000 tables would be close.
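To get a rough idea of how much space Innodb has allocated for a table versus how much it actually uses, SHOW TABLE STATUS is the simplest check; the database and table names below are just placeholders:

# Data_length is the size of the clustered index;
# Data_free reports free space in the tablespace the table belongs to.
mysql test -e "SHOW TABLE STATUS LIKE 't42'\G"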

Innodb with innodb_file_per_table=1 had the following results:

Insert performance is close; the difference is perhaps explained by the fact that files needed to be constantly extended (metadata updates) and reopened for more than 100 tables. Read performance starts close, but degrades less for 10 and 100 tables and is then better again for 10000 tables. I can’t explain why it is a bit worse for 1000 tables, though as I did only one run (it took more than 24 hours) it could also be some activity spike.
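With innodb_file_per_table each table lives in its own .ibd file, so its filesystem-level fragmentation can be inspected directly, for example with filefrag from e2fsprogs (the path below is just an example):

# Reports how many extents the file occupies on disk; a large extent count
# for a modestly sized file indicates heavy fragmentation.
filefrag -v /var/lib/mysql/test/t42.ibd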

The slightly better performance in this case can perhaps be explained by the larger increments in which separate tablespaces are allocated, compared to internal allocation from the single shared tablespace.

Summary: There are a few basic things we can learn from these results:
– Concurrent growth of many tables causes data fragmentation and affects table scan performance badly
– MyISAM suffers worse than Innodb
– Innodb extent allocation works (and would perhaps be a good option for MyISAM as well)
– Innodb suffers less from fragmentation if it stores different tables in different files.
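Once tables have become fragmented this way, the usual remedy is to rebuild them so the data is rewritten contiguously; a minimal example (database and table names are just placeholders):

# OPTIMIZE TABLE defragments a MyISAM data file; for Innodb it is mapped to a table rebuild.
mysql test -e "OPTIMIZE TABLE t42"

# Equivalent rebuild via ALTER TABLE:
mysql test -e "ALTER TABLE t42 ENGINE=InnoDB"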

Comments
Simetrical

Were you using dir_index for the filesystems?

Diego

was this a typo?
>>4K which means ext2 does horrible job<< (ext2 or was it ext3?)

PaulM

Nice article Peter,

We had a similar issue with the number of files per directory on a migration project I worked on in the past. A perl script was taking articles from a legacy db and dumping each as an individual file onto a linux FS (ext3). After a fast start, as you found, it slowed down dramatically.
Our solution was to add an additional step to the dump-article script to split the files into 1000 per directory. The performance was then stable throughout the process. The funny thing was I offered a slab of beer to the IT part of the company for whoever could solve it (which got many more people interested).

Maybe you should add another recommendation:
No more than 1000 files per directory on ext3

Have Fun
Paul

Kevin Burton

I might be wrong but if you have table_cache set to a large value then dir_index won’t really make much difference once the file is opened.

This will be amortized over the entire lifetime of the DB server.

Kevin Burton

Peter,

If you’re interested in testing the performance of open() then you should do this in a dedicated benchmark.

If you test two things you’re going to get different results for different filesystems on different OSes.

If you just test fragmentation you would get more consistent and comparable results.

Fragmentation with MyISAM or InnoDB can happen with just two tables, each taking INSERT load in round-robin fashion.

Kevin

Kevin Burton

Peter.

I agree with your worst case scenario. This is what I tried to point out in my previous comment. Though maybe I didn’t do a good job expressing myself 🙂

The point I was trying to make is that in that situation there’s not much the filesystem CAN do.

It could TRY to pre-allocate both files in larger chunks but then you’d have angular velocity kick in on the HDDs.

InnoDB’s grow factor (which by default is 8M I believe) is a good balance.

Apachez

How was the partition created and which flags were used for mounting it?

Things like dir_index etc but also things like noatime.

Could a new test be performed on the same box using, for example, noatime on the mount, to see how it changes things (if at all)?

paul

Hi,
you should retry this benchmark with XFS (noatime) and ReiserFS (noatime, notail).
My finding when I tried it with our workload was that XFS gets really slow as soon as you have very many files, and ReiserFS works great for such a workload.
Well, I didn’t check MySQL performance at all, I just created folders with empty files in them to see how much the filesystem affects performance with a lot of folders and a fixed file count in them.

It would be nice to see if the difference is seen as clearly with actual data in the files 😉

paul

We use it in production, as XFS slowed down on us and mysql just became slow because we had a lot of users on it. The speed difference was like the difference between O(e^n) and O(n), but as we have a rather uncommon workload it is somewhat our own problem. We have a lot of databases which are quite tiny.
You might still be interested in testing it, as my experience shows that a query on an idle server with xfs and 100K databases takes way longer than a query on the same server with reiserfs.

Nate

I am a newbie to MySQL and am not getting insert throughput as high as in this benchmark. Could you post the my.cnf file and the computer hardware specs that produced this benchmark? I am interested in general, innodb, and myisam settings and the installation configuration of the machine. I am interested in how many processors and the processor type, processor cache, RAM size, front side bus speed, hard drive RPMs, hard drive max write speed, and operating system. Also, are you doing multiple inserts per transaction?

here is data from my tests:
computer 1:

My.ini
[client]
port=3306
[mysql]
default-character-set=latin1
[mysqld]
port=3306
basedir=”C:/Program Files/MySQL/MySQL Server 5.0/”
datadir=”C:/Program Files/MySQL/MySQL Server 5.0/Data/”
default-character-set=latin1
default-storage-engine=INNODB
#default-storage-engine=MyISAM
sql-mode=”STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION”
max_connections=100
query_cache_size=0
table_cache=256
tmp_table_size=93M
thread_cache_size=8
#*** MyISAM Specific options
myisam_max_sort_file_size=100G
myisam_max_extra_sort_file_size=100G
myisam_sort_buffer_size=185M
key_buffer_size=157M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
#*** INNODB Specific options ***
innodb_additional_mem_pool_size=20M
innodb_flush_log_at_trx_commit=0
innodb_log_buffer_size=4M
innodb_buffer_pool_size=304M
innodb_log_file_size=152M
innodb_thread_concurrency=8

RAM: 1GB
Processor: Intel P4 2.253GHz
Cache size: 512KB
Front side bus: 530 MHz
Harddrive: WDC WD400BB-75DEA0
Max Burst Rate: 100MB/s, tested 8MB/s
RPMs: 7200

Using multiple inserts per transaction for 1 innodb table (25 inserts or .2 seconds of data to be inserted) I got 1121, 1133, and 1166 inserts per second into that one table.
Using autocommit for innodb inserts I got 1125 and 1158 inserts per second into the table.
Using myisam I got 1186 and 1177 inserts per second into the table.
The average data length was 231B. It seemed the writing of the insert is what was taking all the time. Data was coming in much faster than it was able to insert, and the data queue was getting quite long. I ran the test for 5 minutes. The inserts per second were pretty constant; the little variation arose in how fast the data was coming in, which was no more than 3000 packets of data per second. The processor would go up to about 80% during tests.

Please give me feedback. I am trying to get the data inserted faster than it is coming in. Please post the my.cnf file with which you were able to get 9000 inserts per second, and the computer specs for that test.

Thanks.

Shai

From the sample code, your read statistic is bogus.
You do many more SELECTs on many tables than on one table so, of course, you will get a slower time; but in reality, if you did your calculation correctly, you would get a much faster read when you have more tables with less data in each table. Here is the problem:

Instead of
$number_of_records/(microtime(1)-$t_temp);
do this:
$number_of_tables/(microtime(1)-$t_temp);

and if you do want to keep it in terms of number of records, try this:
($number_of_records*$number_of_tables)/(microtime(1)-$t_temp);

Gregor

This script is really dangerous!

Because there is a “bug” in MySQL InnoDB: it doesn’t shrink the ibdata1 file after a DROP DATABASE without dropping the tables first – if you have the default setting and innodb_file_per_table is not enabled.

For more Information see here:
http://bugs.mysql.com/bug.php?id=15748

http://bugs.mysql.com/bug.php?id=1287
http://bugs.mysql.com/bug.php?id=1341
http://bugs.mysql.com/bug.php?id=36943

Here is a solution, but a complete reimport is not that much fun for a production environment…
http://crazytoon.com/2007/04/03/mysql-ibdata-files-do-not-shrink-on-database-deletion-innodb/

best regards gregor

Gregor

Peter,

No, I didn’t run it on production. But there is no note about this scary design; this is just the point I want to mention.
And it wouldn’t happen if you delete each table with its own DROP TABLE – it just happens when you drop the database with the data inside.

– it’s always good to know…

best regards

Gregor

Peter,

Sorry, my fault – damn.
I wrote the DROP DATABASE myself… *argh*

KevG

Sorry if this is a bit late, but is there a newer version of this script? I seem to be getting negative numbers from some values.

Trial 1:

tables: 1; total records: 10000000; write rows per sec: 19153382.199996 , reads rows per sec: -13280177.210685sec.
Content-type: text/html
X-Powered-By: PHP/4.3.9

tables: 10; total records: 10000000; write rows per sec: 53662175.142607 , reads rows per sec: -10424734.977175sec.
Content-type: text/html
X-Powered-By: PHP/4.3.9

tables: 100; total records: 10000000; write rows per sec: 31758735.240128 , reads rows per sec: 28556416.055559sec.
Content-type: text/html
X-Powered-By: PHP/4.3.9

tables: 1000; total records: 10000000; write rows per sec: 58727492.688427 , reads rows per sec: -37967090.126279sec.
Content-type: text/html
X-Powered-By: PHP/4.3.9

tables: 10000; total records: 10000000; write rows per sec: -18732414.94547 , reads rows per sec: 37017842.600133sec.

Thanks