I have mentioned a few times that InnoDB caches data in pages, so even if your working set consists of relatively few rows, the working set in terms of pages can be rather large. Now I've done a little benchmark to show it in practice. I'm using the standard "sbtest" table with 10 million rows and a data file of 2,247,098,368 bytes, which gives us about 224 bytes of gross storage per row, including all overhead. The actual row size in this table is smaller, but let's use this number for our math. For the benchmark I'm using a fixed number of random IDs which are repeatedly selected in random order, which illustrates a data set with some randomly distributed "hot" rows. I read every row in the set once before timing, so when there is enough memory to cache every single row there should not be any disk reads during the benchmark run itself.
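
The post does not include the benchmark script itself, but here is a minimal sketch of what such a loop could look like, assuming a pymysql connection to the sbtest database; the hot-set size and query count are made-up parameters:

```python
import random
import time

import pymysql  # assumption: any MySQL client library would work here

HOT_SET_SIZE = 12800      # hypothetical: number of distinct "hot" IDs
TOTAL_ROWS = 10_000_000   # sbtest table size from the post
QUERIES = 100_000         # hypothetical: number of timed lookups

conn = pymysql.connect(host="localhost", user="test", password="test", database="sbtest")
cur = conn.cursor()

# Pick the hot set of random IDs once.
hot_ids = random.sample(range(1, TOTAL_ROWS + 1), HOT_SET_SIZE)

# Warm-up: read every row in the set once before timing, so that with
# enough memory the timed run should need no disk reads at all.
for row_id in hot_ids:
    cur.execute("SELECT * FROM sbtest WHERE id = %s", (row_id,))
    cur.fetchall()

# Timed run: repeatedly select rows from the hot set in random order.
start = time.time()
for _ in range(QUERIES):
    cur.execute("SELECT * FROM sbtest WHERE id = %s", (random.choice(hot_ids),))
    cur.fetchall()
print("qps:", QUERIES / (time.time() - start))
```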


I'm using a 128MB buffer pool for this test, which should fit roughly 500K rows of 224 bytes each. Let's see what the benchmark really shows:

As we can see, in this case the database can really fit only somewhere between 6,400 and 12,800 distinct rows, which is about 1/50 of the "projected size". This number is very close to what I would have estimated: with 224 bytes per row we have some 70 rows per 16KB page, so with a random distribution you can expect up to 70 times more data to be fetched into the buffer pool than you actually need.
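
The arithmetic behind these numbers, assuming the default 16KB InnoDB page size:

```python
BUFFER_POOL = 128 * 1024 * 1024   # 128MB buffer pool
ROW_SIZE = 224                    # gross bytes per row, overhead included
PAGE_SIZE = 16 * 1024             # default InnoDB page size

# "Projected size": rows that would fit if the cache worked per row.
print(BUFFER_POOL // ROW_SIZE)     # ~599K rows (the post rounds this to roughly 500K)

# Rows per page, and the page-granularity worst case: with randomly
# distributed hot rows, each hot row tends to sit in its own page,
# so the cache holds roughly one hot row per buffer pool page.
print(PAGE_SIZE // ROW_SIZE)       # 73 rows per page
print(BUFFER_POOL // PAGE_SIZE)    # 8192 pages -- between the observed 6400 and 12800
```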

I'm wondering if any other storage engine can show better results in such a benchmark. Falcon, with its planned row cache, should fare better, and I would also expect better results from PBXT. I also should check the smaller page sizes available in Percona Server; my expectation is that with a 4K page size I can fit about 4x more distinct rows in my cache.
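
The same calculation with 4K pages shows where the 4x expectation comes from; whether and how the page size can be changed (for example via an innodb_page_size setting) depends on the Percona Server version, so treat the option name as an assumption:

```python
BUFFER_POOL = 128 * 1024 * 1024

# Number of pages the 128MB buffer pool can hold at each page size;
# in the random hot-row worst case this is also roughly the number of
# distinct rows that stay cached.
for page_size in (16 * 1024, 4 * 1024):
    print(page_size, BUFFER_POOL // page_size)   # 16K -> 8192, 4K -> 32768 (4x more)
```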

8 Comments
tobi

A nice performance trick is to reassign the PK values of such a table from time to time in order to group hot rows together. That way the buffer cache is utilized highly.
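
(For illustration, one way this trick could look, with a hypothetical access_count "hotness" column and no claim that INSERT ... SELECT ... ORDER BY is guaranteed to assign new PK values strictly in that order; this is a sketch, not a production procedure.)

```python
import pymysql  # assumption: any MySQL client would do

conn = pymysql.connect(host="localhost", user="test", password="test", database="sbtest")
cur = conn.cursor()

# Rebuild the table so that frequently accessed rows receive adjacent
# new AUTO_INCREMENT PK values and therefore share pages.
cur.execute("CREATE TABLE sbtest_new LIKE sbtest")
cur.execute(
    "INSERT INTO sbtest_new (k, c, pad) "
    "SELECT k, c, pad FROM sbtest ORDER BY access_count DESC"
)
cur.execute("RENAME TABLE sbtest TO sbtest_old, sbtest_new TO sbtest")
conn.commit()
```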

Pavel Shevaev

Peter,

In case it’s possible to fit 4x more distinct rows with 4K pages what are the possible cons of a lesser page size?
