I wrote about Intel 320 SSD write performance before, but I was not satisfied with these results.

Each time I ran the benchmark on the Intel 320 SSD I got somewhat different write performance, so I decided to look into this in more detail.

So let’s run the same experiment as in the previous post: sysbench fileio random writes on different file sizes, from 10GiB to 140GiB in 10GiB steps. I use the ext4 filesystem, and I reformat the filesystem before each increase in file size.
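
For reference, each iteration looks roughly like the following (a minimal sketch; the device name, mount point and exact sysbench options here are assumptions — the actual scripts are linked at the end of the post):

# reformat the filesystem before the run (device and mount point are assumptions)
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/ssd
cd /mnt/ssd

# create test files of the given total size, then run random writes against them
sysbench --test=fileio --file-total-size=50G prepare
sysbench --test=fileio --file-total-size=50G --file-test-mode=rndwr \
  --file-block-size=16K --num-threads=1 --max-time=600 --max-requests=0 run
sysbench --test=fileio --file-total-size=50G cleanup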

The results are pretty much as in the previous post: throughput drops as the file size increases:

However, this is where the interesting part begins. When we run the same iterations again, the results look like this:

As you can see, the second time around the throughput is much worse, even on medium-sized files. Right after the 50GiB size, throughput drops below 40MiB/sec, and that is despite the fact that I reformat the filesystem before each run.

This leads me to the conclusion that write performance on the Intel 320 SSD decreases over time, and in fact is quite unpredictable at any given point in time. A filesystem format does not help; only the ATA Secure Erase procedure returns the drive to its initial state. For reference, here are the commands for this procedure (the first sets a temporary user password, which the second then uses to perform the erase):

hdparm --user-master u --security-set-pass Eins /dev/sd$i
hdparm --user-master u --security-erase Eins /dev/sd$i

Discussing this problem with engineers who work with Intel 320 SSD drives, I was advised to use artificial over-provisioning of about 20%: basically, we create a partition that takes only 80% of the drive's space.

So let’s try this. The experiment is the same as before, with the difference that I use a 120G partition and the maximum file size is 110GiB.
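
For illustration, such a partition could be created with parted before formatting; this is only a sketch, and the device name /dev/sdb is an assumption:

# Assumption: the SSD is /dev/sdb; adjust to your device.
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 120GB   # leaves the rest of the drive unallocated for the controller
mkfs.ext4 /dev/sdb1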

You can see that throughput in the first iteration is basically the same as with the full drive, but the second iteration performs much better: throughput never drops below 40MiB/sec and stays at about the 50MiB/sec level.

So I think this advice to use over-provisioning is worth considering if you want some kind of protection and want to maintain throughput at a certain level.

As always, you can find the raw results and the scripts used on our Benchmarks Launchpad.



8 Comments
bradvoth

Vadim,

Out of curiosity did you do any tests with > 1 thread? I’d be curious to see if that would make any difference at all in the results. I’d run it myself, but I don’t have any SSDs available for testing.

Thanks,

Brad

Peter Zaitsev

Vadim,

I’m wondering if you did multiple rounds, like 10 or 20? The question is whether performance with a given file size settles at some level or keeps dropping.

I think what is interesting in this case is not just the file size but also how much data you have written during the test. I would assume that when you start with a clean drive it does not need to do any garbage collection and can write at full speed; once you have written a full drive's worth of data, garbage collection needs to start. Using a large file size can impact the test in two ways: first, there is less space available for garbage collection, and second, by creating the file you have already written more data to the drive. This should make an especially large difference for short write tests, where the amount of data written during file creation may be most of the data written.

MT

Have you applied the TRIM ATA command after each iteration?

Dave

Your graphs are great on this post and the last one! Off-topic question of the day: What did you use to do them? Thanks! (Sorry–a little off-topic…)

asye288

This article is very informative. Thank you Vadim. Good work.