Apache Hadoop is commonly used for data analysis: it is fast for data loads and it scales well. In a previous post I showed how to integrate MySQL with Hadoop. In this post I will show how to export a table from MySQL to Hadoop, load the data into Cloudera Impala (columnar format) and run reporting on top of it. For the examples below I will use the “ontime flight performance” data from my previous post (Increasing MySQL performance with parallel query execution). I used Cloudera Manager v.4 to install Apache Hadoop and Impala. For this test I intentionally used old hardware (servers from 2006) to show that Hadoop can utilize old hardware and still scale. The test cluster consists of 6 datanodes. Below are the specs:

Purpose                                   | Server specs
Namenode, Hive metastore, etc + Datanodes | 2x PowerEdge 2950, 2x L5335 CPU @ 2.00GHz, 8 cores, 16GB RAM, RAID 10 with 8 SAS drives
Datanodes only                            | 4x PowerEdge SC1425, 2x Xeon CPU @ 3.00GHz, 2 cores, 8GB RAM, single 4TB drive

As you can see, those are pretty old servers; the only thing I changed was adding a 4TB drive to be able to store more data. Hadoop provides redundancy at the server level (it writes 3 copies of each block to different datanodes), so we do not need RAID on the datanodes (we do still need redundancy for the namenode, though).

Data export

There are a couple of ways to export data from MySQL to Hadoop. For the purpose of this test I have simply exported the ontime table into a text file with:
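Something along these lines (a minimal sketch; the table and output file names are assumptions, and SELECT ... INTO OUTFILE writes the file on the MySQL server host, so the account needs the FILE privilege):

-- export the whole table into a comma-delimited text file on the MySQL server
SELECT * FROM ontime
INTO OUTFILE '/tmp/ontime.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';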

(You can use “|” or any other symbol as a delimiter.) Alternatively, you can download the data directly from the www.transtats.bts.gov site using a simple script:
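For example, a minimal sketch assuming the BTS per-month archive naming (this fetches one year; loop over the other years as needed):

# fetch and unpack one year (1988) of the BTS on-time data
for i in {1..12}
do
  wget "http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_1988_${i}.zip"
  unzip -o On_Time_On_Time_Performance_1988_${i}.zip
done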

Load into Hadoop HDFS

The first thing we need to do is load the data into HDFS as a set of files. Hive and Impala work with a directory into which you have imported your data, treating all files inside that directory as one table. In our case it is easiest to simply copy all our files into a directory inside HDFS:
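For example (the HDFS path here is an assumption; it just has to match the LOCATION of the external table we create next):

# create a directory in HDFS and copy the exported file(s) into it
hdfs dfs -mkdir /user/hive/warehouse/ontime_csv
hdfs dfs -put /tmp/ontime*.csv /user/hive/warehouse/ontime_csv/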

Create external table in Impala

Now that we have all the data files loaded, we can create an external table:
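A sketch of the DDL (the table name, the few columns shown and the LOCATION path are assumptions; the real statement lists every column of the exported file, in order):

CREATE EXTERNAL TABLE ontime_csv (
  YearD INT,
  FlightDate STRING,
  Carrier STRING,
  ArrDelayMinutes INT
  -- ... plus the remaining columns of the exported file
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/ontime_csv/';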

Note the “EXTERNAL” keyword and the LOCATION clause (LOCATION points to a directory inside HDFS, not to a file). Impala will only create the metadata for the table (it will not modify or move the data). We can query this table right away; however, Impala will need to scan all files (a full scan) for every query.

Example:
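For instance, a simple aggregation over the hypothetical ontime_csv table defined above:

SELECT yeard, count(*) AS cnt
FROM ontime_csv
GROUP BY yeard;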

(Note that GROUP BY will not sort the rows, unlike MySQL. To sort we will need to add “ORDER BY yeard”.)

Explain plan:
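The plan can be obtained by prefixing the query with EXPLAIN in impala-shell, e.g.:

EXPLAIN SELECT yeard, count(*) AS cnt FROM ontime_csv GROUP BY yeard;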

As we can see, it will scan 45GB of data.

Impala with columnar format and compression

A great benefit of Impala is that it supports columnar formats and compression. I’ve tried the new Parquet format with the Snappy compression codec. As our table is very wide (and denormalized), using a columnar format will help a lot. To take advantage of the Parquet format we will need to load the data into it, which is easy to do when we already have a table inside Impala and the files inside HDFS:
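A sketch of the conversion, run from impala-shell (the table names here are assumptions; ontime_csv is the hypothetical CSV-backed table defined above):

-- compress Parquet data files written by this session with Snappy
SET PARQUET_COMPRESSION_CODEC=snappy;
-- create an empty Parquet table with the same columns as the CSV-backed table
CREATE TABLE ontime_parquet_snappy LIKE ontime_csv STORED AS PARQUET;
-- copy the data; Impala rewrites it in columnar Parquet format
INSERT INTO ontime_parquet_snappy SELECT * FROM ontime_csv;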

Then we can test our query against the new table:
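For example, the same aggregation pointed at the Parquet table:

SELECT yeard, count(*) AS cnt
FROM ontime_parquet_snappy
GROUP BY yeard;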

As we can see, it will scan a much smaller amount of data: 3.95GB (with compression) compared to 45GB.

Results:

And the response time is much better as well.

Impala complex query example

I’ve used the complex query from my previous post. I had to adapt it for use with Impala: it does not support the “sum(ArrDelayMinutes>30)” notation, but “sum(if(ArrDelayMinutes>30, 1, 0))” works fine.
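A hypothetical fragment illustrating the rewrite (the real query carries several more filter conditions, as described below; the Carrier column and the table name are assumptions):

-- MySQL accepted:  sum(ArrDelayMinutes > 30)
-- Impala needs:    sum(if(ArrDelayMinutes > 30, 1, 0))
SELECT Carrier,
       count(*) AS cnt,
       sum(if(ArrDelayMinutes > 30, 1, 0)) AS flights_delayed
FROM ontime_parquet_snappy
GROUP BY Carrier;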

The query is intentionally designed so that it does not take advantage of indexes: most of the conditions filter out less than 30% of the data.

Impala results:

15.28 seconds is significantly faster than the original MySQL results (15 min 56.40 sec without parallel execution and 5 min 47 sec with parallel execution). However, this is not an “apples to apples” comparison:

  • MySQL will scan 45GB of data, while Impala with Parquet will only scan 3.5GB.
  • MySQL will run on a single server, while Hadoop + Impala will run in parallel on 6 servers.

Nevertheless, Hadoop + Impala shows impressive performance and the ability to scale out of the box, which can help a lot with large data volume analysis.

Conclusion

Hadoop + Impala gives us an easy way to analyze large datasets using SQL, with the ability to scale even on old hardware.

In my next posts I plan to explore:

As always, please share your thoughts in the comments.

Comments
Peter Zaitsev

Alexander,

I would wonder what numbers InfiniDB would deliver here. From the looks of it they have quite an interesting offering, being able to speak the MySQL protocol to data which is already in Hadoop (HDFS) as well as having its own parallel column store engine.

Mark Callaghan

How big are the database files (with or without 3X replication)?

Andrew Aksyonoff

Would be interesting to compare Vertica on the same HW/data too. They’re free up to 1 TB.

Mark Callaghan

1TB isn’t big data

Big Data Analytics

Can’t you do the same thing in platforms such as hunk or datameer without coding?

Andrew Aksyonoff

Big Data is any thing which is crash Excel (c) https://twitter.com/DEVOPS_BORAT/

Paul Carlucci

I realize it’s a bit off topic, but since you’re comparing speed versus MySQL, then when you’re done with the run could you try converting the tables to TokuDB and give it another spin please?

david

saw similar post on Stack Overflow but yeah quite impressive data.

mark

Great article. I tried to issue the command to create the parquet tables using Impala (CDH5 VM) and got the following error message

analysisexception: syntax error in line 1: set parquet_compression_codec=snappy ^ encountered: set expected: alter, compute, create, describe, drop, explain, insert, invalidate, load, refresh, select, show, use, values, with caused by: exception: syntax error

mark

I realize that my problem was because I issued the command on the Impala GUI query and not on the Impala shell. But how to start the impala shell?

mark

Ignore my previous 2 comments. The shell is started with the impala-shell command. But the command
create table ontime_parquet_snappy LIKE ontime_parquet_snappy STORED AS PARQUET;
should copy from the csv file.

Michael

Hi Alexander,
Now try comparing queries on Impala on a dataset that does not fit into memory (swap)
and you will see the real performance impact. So if you query the same amount of data on MySQL using indexes, the performance should be better.

Luca Candela

Let me make your download script for the flight info a little less tedious:

for y in {1988..2013}
do
  for i in {1..12}
  do
    u="http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_${y}_${i}.zip"
    wget $u -o ontime.log
    unzip -o On_Time_On_Time_Performance_${y}_${i}.zip
    rm On_Time_On_Time_Performance_${y}_${i}.zip
  done
done

Gerard

A question if I may.

I am doing the Cloudera Live Lesson 2 on Impala.
When I load the data and follow the course notes I get to see the data.
Then if I do a sqoop incremental load with newer data, I cannot see the newer data unless I recreate the external table. Odd.

Any ideas?