July 25, 2014

ScaleArc: Real-world application testing with WordPress (benchmark test)

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by me and ScaleArc co-founder and chief architect, Uday Sawant. The goal of this benchmark was to identify ScaleArc’s overhead using a real-world application – the world’s most popular (according to […]

Sample datasets for benchmarking and testing

Sometimes you just need some data to test and stress things. But randomly generated data is awful — it doesn’t have realistic distributions, and it isn’t easy to understand whether your results are meaningful and correct. Real or quasi-real data is best. Whether you’re looking for a couple of megabytes or many terabytes, the following […]

Benchmark: SimpleHTTPServer vs pyclustercheck (twisted implementation)

GitHub user Adrianlzt provided a Twisted-based alternative Python version of pyclustercheck per the discussion on issue 7. Due to sporadic performance issues noted with the original SimpleHTTPServer implementation, the benchmarks I've included as part of the project on GitHub use the multi-mechanize library: cache time 1 sec, 2 x 100 thread pools, 60s ramp-up time, 600s […]
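For reference, a multi-mechanize run of this shape is driven by a small Python virtual-user script plus a config.cfg. The sketch below is my own minimal reconstruction, not the project's actual benchmark script; the 127.0.0.1:9200 endpoint is an assumption based on how clustercheck-style HTTP health checks are usually exposed.

# v_user.py -- minimal multi-mechanize virtual-user script (Python 2, as
# multi-mechanize was at the time). The matching config.cfg would set
# rampup = 60 and run_time = 600 with two user groups of 100 threads each,
# mirroring the parameters quoted above.
import time
import urllib2

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        start = time.time()
        # Assumed endpoint: pyclustercheck answering an HTTP health check
        resp = urllib2.urlopen('http://127.0.0.1:9200/')
        resp.read()
        self.custom_timers['clustercheck'] = time.time() - start
        assert resp.getcode() == 200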

Tips on benchmarking Go + MySQL

We just open-sourced our new percona-agent (https://github.com/percona/percona-agent), the agent that works with Percona Cloud Tools. This agent is written in Go. I will give a webinar titled “Monitoring All MySQL Metrics with Percona Cloud Tools” on June 25 that will cover the new features in percona-agent and Percona Cloud Tools, […]

ScaleArc: Benchmarking with sysbench

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by Uday Sawant (ScaleArc) and me. You can also download the report directly as a PDF here. The goal of these benchmarks is to identify the potential overhead of the ScaleArc […]

Before every release: A glimpse into Percona XtraDB Cluster CI testing

I spoke last month at linux.conf.au 2014 in Perth, Australia, and one of my sessions focused on the “Continuous Integration (CI) testing of Percona XtraDB Cluster (PXC)” at the Developer, Testing, Release and CI miniconf. Here is the video of the presentation, and here are the slides: Percona XtraDB Cluster before every release: Glimpse into CI […]

TokuDB vs InnoDB in timeseries INSERT benchmark

This post is a continuation of my research into the TokuDB storage engine to understand whether it is suitable for timeseries workloads. While loading an empty table with LOAD DATA INFILE shows great results for TokuDB, what's more interesting is to see some realistic workloads. So this time let's take a look at the INSERT benchmark.
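To make that workload concrete, here is a minimal sketch of a timeseries INSERT loop of the kind being benchmarked; the metrics table, its schema, the batch size, and the MySQLdb driver are all my assumptions, not the actual harness.

import datetime
import random
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='bench', passwd='bench', db='test')
cur = conn.cursor()

# Hypothetical timeseries table; swap ENGINE=InnoDB for the comparison run.
cur.execute("""CREATE TABLE IF NOT EXISTS metrics (
    ts DATETIME NOT NULL,
    device_id INT NOT NULL,
    value DOUBLE NOT NULL,
    PRIMARY KEY (device_id, ts)
) ENGINE=TokuDB""")

# Multi-row batches with monotonically increasing timestamps: the
# append-mostly insert pattern a timeseries workload produces.
base = datetime.datetime.now()
for batch in range(1000):
    ts = base + datetime.timedelta(seconds=batch)
    rows = [(ts, device, random.random()) for device in range(100)]
    cur.executemany(
        "INSERT INTO metrics (ts, device_id, value) VALUES (%s, %s, %s)",
        rows)
    conn.commit()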

Testing Intel, Samsung & SanDisk SATA SSD

While working on the service architecture for one of our projects, I considered several SATA SSD options as the possible main storage for the data. The system will be quite write-intensive, so the main interest is write performance when the drives are filled close to their full capacity. After some research I picked several candidates (I show […]

MySQL and Percona Server in LinkBench benchmark

About a month ago Facebook announced LinkBench, a benchmark that models the social graph OLTP workload. The source code, along with a very nice description of how to set up and run the benchmark, can be found here. We decided to run this benchmark for MySQL Server 5.5.30, 5.6.11 and Percona Server 5.5.30 and check how these servers […]

Benchmarking Percona Server TokuDB vs InnoDB

After compiling Percona Server with TokuDB, of course I wanted to compare TokuDB performance against InnoDB. I have a particular workload I'm interested in testing: an insert-intensive workload (which is TokuDB's strong suit) with some roll-up aggregation, which should produce in-place updates (I will use INSERT .. ON DUPLICATE KEY UPDATE statements […]
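As an illustration of that pattern, here is a minimal sketch with hypothetical table and column names; only the INSERT .. ON DUPLICATE KEY UPDATE statement itself comes from the post, the rest is assumed.

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='bench', passwd='bench', db='test')
cur = conn.cursor()

# Hypothetical hourly roll-up table; ENGINE=TokuDB vs ENGINE=InnoDB is the
# comparison under test.
cur.execute("""CREATE TABLE IF NOT EXISTS hits_hourly (
    hour DATETIME NOT NULL,
    page_id INT NOT NULL,
    hits BIGINT NOT NULL,
    PRIMARY KEY (hour, page_id)
) ENGINE=TokuDB""")

# The first hit in an hour inserts the row; every later hit for the same
# (hour, page_id) key is an in-place update rather than a new row.
cur.execute(
    """INSERT INTO hits_hourly (hour, page_id, hits)
       VALUES (DATE_FORMAT(NOW(), '%%Y-%%m-%%d %%H:00:00'), %s, 1)
       ON DUPLICATE KEY UPDATE hits = hits + 1""",
    (42,))
conn.commit()

Because the aggregation never needs a separate SELECT-then-UPDATE round trip, the roll-up stays a single-statement, update-in-place operation.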