Shard-Query is an open source toolkit that improves the performance of queries against a MySQL database by distributing the work over multiple machines and/or multiple cores. This is similar to the divide-and-conquer approach that Hive takes in combination with Hadoop. Shard-Query applies a clever approach to parallelism which allows it to significantly improve the performance of queries by spreading the work over all available compute resources. In this test, Shard-Query averages nearly a 6x improvement over the baseline (more than 10x in the best case), as shown in the following graph:

One significant advantage of Shard-Query over Hive is that it works with existing MySQL data sets and queries. Another advantage is that it works with all MySQL storage engines.

This set of benchmarks evaluates how well Infobright community edition (ICE) performs in combination with Shard-Query.

Data set

It was important to choose a data set that was large enough to produce queries that run for a decent amount of time, but not so large that it was difficult to work with. The ontime flight performance statistics data, available online from the United States Bureau of Transportation Statistics (BTS), made a good candidate for testing, as it had been tested before:
Another MPB post
Lucid DB testing

The raw data is a completely denormalized schema (a single table). To demonstrate the power of Shard-Query, it is important to test complex queries involving joins and aggregation. A star schema is the most common OLAP/DW data model, since it typically represents a data mart (see also: “Data mart or data warehouse?”), so it is desirable to benchmark using a star schema, even though doing so involves work to transform the data.

Star schema

Transforming the data was straightforward. I started by loading the raw data from the BTS into a single database table called ontime_stage. I should note that I did this preprocessing with the MyISAM storage engine, then dumped the data to tab-delimited flat files using mysqldump.
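The staging load looked roughly like this (a minimal sketch: the real staging table mirrors every column in the BTS file, and the file name here is hypothetical):

CREATE TABLE ontime_stage (
  FlightDate      DATE,
  UniqueCarrier   CHAR(7),
  FlightNum       SMALLINT UNSIGNED,
  Origin          CHAR(5),
  OriginCityName  VARCHAR(100),
  OriginStateName VARCHAR(100),
  Dest            CHAR(5),
  DestCityName    VARCHAR(100),
  DestStateName   VARCHAR(100),
  DepDelay        SMALLINT,
  ArrDelay        SMALLINT
) ENGINE=MyISAM;

-- One LOAD per downloaded file (file name is an example):
LOAD DATA LOCAL INFILE 'On_Time_On_Time_Performance_1988_1.csv'
INTO TABLE ontime_stage
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;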

Then, the airport information was extracted:
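The extraction ran along these lines (a reconstruction; the real dimension carries more attributes, and all table and column names here are assumed):

CREATE TABLE dim_airport (
  airport_id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  airport_code CHAR(5) NOT NULL,
  city         VARCHAR(100),
  state        VARCHAR(100),
  UNIQUE KEY (airport_code)
) ENGINE=MyISAM;

-- Both origin and destination airports go into the same dimension;
-- INSERT IGNORE keeps one row per airport code.
INSERT IGNORE INTO dim_airport (airport_code, city, state)
SELECT DISTINCT Origin, OriginCityName, OriginStateName FROM ontime_stage
UNION
SELECT DISTINCT Dest, DestCityName, DestStateName FROM ontime_stage;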

After extracting flight/airline and date information in a similar fashion, a final table ontime_fact is created by joining the newly constructed dimension tables to the staging table, omitting the dimension columns from the projection and replacing them with the dimension keys:
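A sketch of that final join, assuming dim_date and dim_flight were built the same way as dim_airport (all names assumed):

CREATE TABLE ontime_fact ENGINE=MyISAM AS
SELECT d.date_id,
       fl.flight_id,
       o.airport_id  AS origin_airport_id,
       da.airport_id AS destination_airport_id,
       s.DepDelay,
       s.ArrDelay
FROM ontime_stage s
JOIN dim_date    d  ON d.FlightDate    = s.FlightDate
JOIN dim_flight  fl ON fl.carrier      = s.UniqueCarrier
                   AND fl.flight_num   = s.FlightNum
JOIN dim_airport o  ON o.airport_code  = s.Origin
JOIN dim_airport da ON da.airport_code = s.Dest;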

The data set contains ontime flight information for 22 years, which can be confirmed by examining the contents of the date dimension:
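A check along these lines (column names assumed) shows 22 distinct years:

SELECT COUNT(DISTINCT `Year`) AS years,
       MIN(FlightDate)        AS first_day,
       MAX(FlightDate)        AS last_day
FROM dim_date;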

The airport dimension is a puppet dimension. It is called a puppet because it serves as both the origin and the destination dimension, referenced by origin_airport_id and destination_airport_id in the fact table, respectively. There are nearly 400 major airports included in the data set.
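In a query, the puppet dimension is simply joined twice under different aliases, once per role. For example (a sketch using the assumed names from above):

SELECT o.airport_code  AS origin,
       da.airport_code AS destination,
       COUNT(*)        AS flights
FROM ontime_fact f
JOIN dim_airport o  ON o.airport_id  = f.origin_airport_id
JOIN dim_airport da ON da.airport_id = f.destination_airport_id
GROUP BY o.airport_code, da.airport_code
ORDER BY flights DESC
LIMIT 10;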

The final dimension is the flight dimension, which contains the flight numbers and air carrier hierarchies. Only the largest air carriers must register with the FAA and report ontime information, so there are only 29 air carriers in the table:
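A count like the following (names assumed) confirms the 29 carriers:

SELECT COUNT(DISTINCT carrier) AS carriers
FROM dim_flight;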

Each year has tens of millions of flights:
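This can be verified with a per-year rollup against the fact table (a sketch, again using the assumed names):

SELECT d.`Year`, COUNT(*) AS flights
FROM ontime_fact f
JOIN dim_date d ON d.date_id = f.date_id
GROUP BY d.`Year`
ORDER BY d.`Year`;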

Test environment

For this benchmark, a test environment consisting of a single commodity database server with six cores (plus six hyper-threaded cores) and 24GB of memory was selected. The operating system was Fedora 14. Oracle VirtualBox OSE was used to create six virtual machines, each running Fedora 14 and granted 4GB of memory. A battery-backed RAID10 array of 7200rpm SATA drives provided the underlying storage for the virtual machines.

Baseline:
The MySQL command line client was used to execute a script file containing the 11 queries; this same SQL script was used in the Shard-Query tests. For the baseline, the results and response times were captured with the mysql client's tee (\T) command. The queries were executed on a single database schema containing all of the data. Before loading, there was approximately 23GB of data; after loading, ICE compressed it to about 2GB. The test virtual machine was assigned 12 cores for this test.

Scale-up:
Shard-Query was given a configuration file listing only one server. A single schema (ontime_one) contained all of the data. The test virtual machine was assigned 12 cores for this test; the same VM was used as in the baseline test, and it was rebooted between tests. The SQL script was fed to the run_query.php script and the output was captured with the ‘tee’ command.

Scale-out:
In addition to adding parallelism via scale-up, Shard-Query can improve the performance of almost all queries by spreading them over more than one physical server. This is called “scaling out”, and it allows Shard-Query to vastly improve the performance of queries which have to examine a large amount of data. Shard-Query includes a loader (loader.php) which can either split the data into multiple files (one set per shard, for later loading) or load files directly, in parallel, into multiple hosts.

Shard-Query executes queries in parallel over all of these machines. With enough machines, massive parallelism is possible and very large data sets may be processed. As each machine examines only a small subset of the data, performance can be improved significantly.

In this configuration, each shard holds about 335MB-350MB of data (the 23GB of raw data compresses to about 2GB in total, which is then spread over six servers). Due to ICE limitations, the data was split before loading. The splitting/loading process will be described in another post.

Complex queries

Shard-Query was tested with the simple single-table version of this data set in a previous blog post. That testing was limited to a subset of Vadim’s test queries (see that post). As this new test schema is normalized, Vadim’s test queries had to be modified to reflect the more complex schema structure. For this benchmark, each of the original queries was rewritten to conform to the new schema, and two new test queries were added. To model real-world complexity, each of the test queries features at least one join, and many of the filter conditions are placed on attributes in the dimension tables. It will be very interesting to test these queries on various engines and databases.

Following is a list of the queries, followed by a table recording the actual response time for each query. The queries should be self-explanatory.
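To give a flavor of the rewritten queries, here is a sketch in the same spirit (not one of the actual benchmark queries, and using the assumed names from above): the filter sits on a dimension attribute, and the fact table is joined to the date dimension:

SELECT d.`Year`,
       COUNT(*) AS flights,
       SUM(f.ArrDelay > 15) / COUNT(*) AS delayed_fraction
FROM ontime_fact f
JOIN dim_date d ON d.date_id = f.date_id
WHERE d.`Year` BETWEEN 2000 AND 2008
GROUP BY d.`Year`;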

Performance, at a glance

You will notice that Shard-Query is faster in nearly every case. The performance of the queries is improved significantly by scaling out, even more so than by scaling up, because additional parallelism is added beyond what scale-up can exploit. Scale-up can improve query performance when there are enough resources to support the added parallelism, and when the query uses one of the following: a BETWEEN or IN clause, a subquery in the FROM clause, or a UNION or UNION ALL clause. If none of those features is used, then parallelism cannot be added; Q9 is an example of such a query.
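To illustrate the mechanism with a hypothetical example (names as assumed earlier), Shard-Query conceptually expands an IN or BETWEEN predicate into one query per value and runs them concurrently:

-- A query such as this:
SELECT COUNT(*)
FROM ontime_fact f
JOIN dim_flight fl ON fl.flight_id = f.flight_id
WHERE fl.carrier IN ('AA', 'UA', 'DL');

-- is conceptually executed as three concurrent queries:
--   ... WHERE fl.carrier = 'AA'
--   ... WHERE fl.carrier = 'UA'
--   ... WHERE fl.carrier = 'DL'
-- whose partial COUNT(*) results are summed by the coordinator.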

Query details:


The next queries show how performance improves when Shard-Query adds parallelism to subqueries in the FROM clause. There are benefits with both scale-up and scale-out, but once again, the scale-out results are the most striking.
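A sketch of the query shape being exploited (assumed names as before): the derived table is computed in parallel, and the outer query runs over the combined partial results.

SELECT t.`Year`, MAX(t.monthly_flights) AS busiest_month_flights
FROM (
  SELECT d.`Year`, d.`Month`, COUNT(*) AS monthly_flights
  FROM ontime_fact f
  JOIN dim_date d ON d.date_id = f.date_id
  GROUP BY d.`Year`, d.`Month`
) t
GROUP BY t.`Year`;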


The performance of the following queries depends on the size of the date range:
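The pattern, roughly (a sketch, not an actual benchmark query): the wider the BETWEEN range, the more pieces Shard-Query can split it into, so both the available parallelism and the amount of data examined grow with the range.

SELECT fl.carrier, AVG(f.DepDelay) AS avg_dep_delay
FROM ontime_fact f
JOIN dim_date   d  ON d.date_id    = f.date_id
JOIN dim_flight fl ON fl.flight_id = f.flight_id
WHERE d.FlightDate BETWEEN '1995-01-01' AND '1995-12-31'
GROUP BY fl.carrier;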


Finally, Shard-Query performance continues to improve when grouping and filtering are used. Again, notice Q9: it uses none of the features which Shard-Query can exploit to add parallelism, so in the scale-up configuration it performs no better than the baseline, and actually performs a little worse. Since scale-out splits the data between servers, it performs about 6x better, as the degree of parallelism is controlled by the number of shards.
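For contrast, here is a Q9-like shape (a sketch, not the actual Q9): a plain join with grouping, with no IN/BETWEEN, no derived table, and no UNION, so there is nothing for Shard-Query to split on a single server.

SELECT a.airport_code, COUNT(*) AS departures
FROM ontime_fact f
JOIN dim_airport a ON a.airport_id = f.origin_airport_id
GROUP BY a.airport_code;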


Conclusion

The divide and conquer approach is very useful when working with large quantities of data. Shard-Query can be used with existing data sets easily, improving the performance of queries significantly if they use common query features like BETWEEN or IN. It is also possible to spread your data over multiple machines, scaling out to improve query response times significantly.

These queries are a great test of Shard-Query features. Shard-Query is currently approaching RC status. If you decide to test it and encounter issues, please file a bug on the bug tracker. You can get Shard-Query (currently in development release form as a checkout from SVN) here: Shard-Query Google code project

Full disclosure

Justin Swanhart, the author of this article, is also the creator and maintainer of Shard-Query. The author has previously worked in cooperation with Infobright, including on benchmarking. These particular tests were performed independently of Infobright, without their knowledge or approval. Infobright was, however, given the chance to review this document before publication, as a courtesy. All findings are represented truthfully, transparently, and without any intended bias.

Comments
VladimirasRussl

I do not understand how you handle count(distinct *) in your shard queries?

James Day

Good to read. Should be interesting.

James Day

Justin, I think you missed my point. I was wondering how a well configured schema and plain InnoDB would do. That might give some indication of when it’s worthwhile considering tools like these.

Jeff Kibler

Thanks, Justin, for posting your findings; I find them very interesting. I’m curious to see what happens if we denormalize the dataset and condense to just one or potentially two tables.

I know you mention Star Schema is the “proper benchmark,” but I would want to understand the boost in performance by just structuring the data in the best format for the datastore. Let me know if you’re interested in trying Shard on a denormalized set.

All the best, and thanks again for getting this info posted!

Jeff
Infobright Community Manager for ICE

David

I had wondered what it would be like to do something like this, but now you have written it. Thanks for your work!

Rick James

Some of those test cases are good examples of why _not_ to normalize dates.

Anyway, bravo to your scaling technique.

Peter Zaitsev

Justin,

Thanks for flagging this. I assumed InfiniDB community edition does not have scale-out features but is available under the GPL. Now, checking out the “Calpont Open Source Licence Agreement”, I see calling it Open Source is a joke. It is a lot closer to Microsoft’s Shared Source, where you can look at the code but not do anything useful with it.

James Day

What I wonder about are these things:

1. Which storage engine was used. Looks like InnoDB from past tests, and using partitioning.
2. What part of the benefit came from compressing the data to one tenth of its original size and how InnoDB’s compression would have compared.
3. Which indexes were present on the table, because the queries look like natural ones for very effective covering indexes. Not always possible but it would be interesting to see how some general tuning work did in comparison because many applications have only a few queries to tune, rather than being truly any column at any time queries.
4. Why virtual servers were used instead of multiple instances of MySQL on one, which generally has lower overhead.
5. What the benefit would have been from splitting the data further into shorter rows to improve cache hit rates. I tend to grimace when I see 100 character varchars in frequently queried tables without covering indexes. StateName VARCHAR(100) in a commonly queried table? A bad joke for anyone who cares about performance of queries. :) Makes me wonder how much of the InfoBright size reduction was effectively easy normalisation.

To some extent these questions are missing the point about generic star schema queries that you’re really looking at but it’s interesting to see what can be done without having to move data around, just by normal tuning.

Maybe someone else at Percona would have some fun using those alternative approaches so it’s a friendly bit of internal competition between different ways of optimising things?

Peter Zaitsev

Justin,

These are great performance gains. I’m wondering how far this can scale: can we get, say, 50 EC2 instances and scale the data against them?

I’m also wondering how it compares in terms of performance with things like InfiniDB and Hive

Jaimie Sirovich

@Justin

This looks very interesting, but one question. How much latency/overhead does Gearman add? In the case of a query that takes 20 seconds (or minutes or hours), I could see how parallelizing it in this way in coordination with Gearman would be awesome for the BI space where something that’s slow would then become almost instant.

But what about a 2 second query you want to speed up to 0.2s? That would be more akin to the retail space where some sort of query just can’t be solved in a single-threaded space and you need to throw hardware at it? So why not just spawn N queries and block until they all return?

Is this reduced form of Shard-Query possible?

jrsmith

Hello Swanhart,
I have successfully written data using write.php, but I cannot read data using read.php.
I run ‘php read.php’ on the command line, and the output shows the program is blocked. Why?

output:
PHP Notice: Undefined property: ShardQuery::$broadcast_query in /home/libo/shard-query-read-only/include/shard-query.php on line 145
PHP Strict Standards: Resource ID#28 used as offset, casting to integer (28) in /usr/local/lib/php/Net/Gearman/Client.php on line 169
PHP Strict Standards: Resource ID#28 used as offset, casting to integer (28) in /usr/local/lib/php/Net/Gearman/Client.php on line 173
PHP Strict Standards: Resource ID#28 used as offset, casting to integer (28) in /usr/local/lib/php/Net/Gearman/Client.php on line 247
(each notice is emitted twice, and the Strict Standards sequence repeats)

write.php script’s content:

<?php
// Include path may vary; shard-query.php provides ShardQuery and HashShardKeyMapper.
require_once('include/shard-query.php');

$shards = array(
  'shard1' => array( // first shard key assumed; the opening of the posted script was lost
    'host'     => '192.168.1.211',
    'user'     => 'root',
    'password' => '',
    'db'       => 'test',
    'port'     => 3306
  ),
  'shard2' => array(
    'host'     => '192.168.1.101',
    'user'     => 'root',
    'password' => '',
    'db'       => 'test',
    'port'     => 3306
  )
);
$params = array('gearman' => 'localhost:4730');

$mapper = new HashShardKeyMapper($shards);
$mapper->add_column('col1');

$shard_query = new ShardQuery($shards, $params);
$shard_query->set_partition_info('col1', array($mapper, 'map'));

$sql = "INSERT INTO t1 VALUES(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(8,9)";
$shard_query->query($sql);
?>

read.php script’s content:

<?php
// Include path may vary; shard-query.php provides ShardQuery and HashShardKeyMapper.
require_once('include/shard-query.php');

$shards = array(
  'shard1' => array( // first shard key assumed; the opening of the posted script was lost
    'host'     => '192.168.1.211',
    'user'     => 'root',
    'password' => '',
    'db'       => 'test',
    'port'     => 3306
  ),
  'shard2' => array(
    'host'     => '192.168.1.101',
    'user'     => 'root',
    'password' => '',
    'db'       => 'test',
    'port'     => 3306
  )
);
$params = array('gearman' => 'localhost:4730');

$mapper = new HashShardKeyMapper($shards);
$mapper->add_column('col1');

$shard_query = new ShardQuery($shards, $params);
$shard_query->set_partition_info('col1', array($mapper, 'map'));

$sql = "select col1,col2 from t1 where col1 in (1,2)";
$stmt = $shard_query->query($sql);

if(!$stmt || $shard_query->error) {
  print_r($shard_query->error);
}

if($stmt) {
  while($row = $shard_query->my_fetch_assoc($stmt)) {
    print_r($row);
  }
}
?>