July 23, 2014

Sysbench Benchmarking of Tesora’s Database Virtualization Engine

Tesora, previously called ParElastic, asked Percona to do a sysbench benchmark evaluation of its Database Virtualization Engine on specific architectures on Amazon EC2. Tesora's focus is to provide a scalable Database-as-a-Service platform for OpenStack. The Database Virtualization Engine (DVE) plays a part in this, as it aims to allow databases […]

ScaleArc: Benchmarking with sysbench

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by Uday Sawant (ScaleArc) and me. You can also download the report directly as a PDF here. The goal of these benchmarks is to identify the potential overhead of the ScaleArc […]

Quality Assurance: Percona Server Development Now Monitored by Automated Sysbench Performance Regression Checks!

Continuous integration of new features and bug fixes is great – but what if a small change in seemingly insignificant code causes a major regression in overall server performance? We need to ensure this does not happen. That said, performance regressions can be hard to detect. They may hide for some time (or be […]

Intel 520 SSD in MySQL sysbench oltp benchmark

In my raw IO benchmark of the Intel 520 SSD we saw that the drive does not provide uniform throughput and response time, so it is interesting to see how this affects a workload coming from MySQL. I have prepared benchmark results for a sysbench OLTP workload with MySQL running on the Intel 520. You can download the results there.

New distribution of random generator for sysbench – Zipf

Sysbench has three distributions for random numbers: uniform, special and gaussian. I mostly use uniform and special, and I feel that neither fully reflects my needs when I run benchmarks. Uniform is stupidly simple: for a table with 1 million rows, each row gets an equal number of hits. This barely reflects a real system, […]
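
For context, here is a minimal sketch of how the row-access distribution is selected on the sysbench command line. The uniform, gaussian and special values exist in stock sysbench 0.4 via --oltp-dist-type; a zipf value is assumed to be available only in a build carrying the patch this post describes, and the table size, credentials and thread count below are illustrative.

    # prepare the test table, then run the OLTP test with the "special"
    # (skewed) distribution; a zipf value would require the patched branch
    sysbench --test=oltp --oltp-table-size=1000000 --mysql-user=sbtest prepare
    sysbench --test=oltp --oltp-table-size=1000000 --mysql-user=sbtest \
             --oltp-dist-type=special --num-threads=8 run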

Sysbench with support of multi-tables workload

We just pushed support for workloads against multiple tables to sysbench (traditionally it used only a single table). It is available from the Launchpad source tree lp:sysbench. It is a set of LUA scripts for sysbench 0.5 (which supports scripting), and it works the following way: you should use --test=tests/db/oltp.lua to run the OLTP test, i.e. […]
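
A minimal sketch of what such a multi-table run might look like, assuming the LUA scripts live under tests/db/ as in the lp:sysbench tree; the table count, table size, thread count and connection options are illustrative values, not the ones used in the post.

    # create 16 test tables of 1M rows each, then run the multi-table OLTP test
    sysbench --test=tests/db/oltp.lua --oltp-tables-count=16 \
             --oltp-table-size=1000000 \
             --mysql-user=sbtest --mysql-password=sbtest prepare
    sysbench --test=tests/db/oltp.lua --oltp-tables-count=16 \
             --oltp-table-size=1000000 \
             --mysql-user=sbtest --mysql-password=sbtest \
             --num-threads=16 --max-time=300 --max-requests=0 run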

Intel Nehalem vs AMD Opteron shootout in sysbench workload

Having two big boxes in our lab, one based on Intel Nehalem (Cisco UCS C250) and the second on AMD Opteron (Dell PowerEdge R815), I decided to run some simple sysbench benchmarks to compare how the two CPUs perform and what kind of scalability we can expect.

SysBench – benchmark tool

Sysbench is a benchmark tool developed by Alexey Kopytov (software engineer @ MySQL AB) – http://sysbench.sourceforge.net/ – and I want to write a short intro about it, as sysbench is one of the pieces of software I use every day. For example, Sun published their Solaris vs. Red Hat comparison based on sysbench results (Peter and I provided performance consulting for […]
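
As a taste of what using the tool looks like, here is a minimal sketch of one of its built-in tests, the file I/O test; the file size, test mode and time limit are illustrative, not values from the post.

    # prepare a set of test files, run a random read/write I/O test, clean up
    sysbench --test=fileio --file-total-size=4G prepare
    sysbench --test=fileio --file-total-size=4G \
             --file-test-mode=rndrw --max-time=300 --max-requests=0 run
    sysbench --test=fileio --file-total-size=4G cleanup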

Sysbench evaluation of iSCSI performance

Partha Dutta posted a pretty interesting article about iSCSI vs. SCSI performance using SysBench. It is nice to finally see some iSCSI benchmarks done with MySQL – something we had been planning to do for a while but never got around to, mainly due to the lack of hardware available for tests. It is also good to see […]

Why %util number from iostat is meaningless for MySQL capacity planning

Earlier this month I wrote about vmstat iowait cpu numbers, and some of the comments I got advocated using the %util number reported by the iostat tool instead. I find this number even more useless for MySQL performance tuning and capacity planning. Now let me start by saying this is a really tricky and deceptive number. Many […]
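
For reference, a minimal sketch of where this number comes from; the one-second interval is illustrative. %util only tells you what fraction of the time the device had at least one request in flight, so for devices that serve many requests in parallel (RAID arrays, SSDs) a reading near 100% does not by itself mean the device is saturated.

    # extended per-device statistics, refreshed every second;
    # %util is the last column of the output
    iostat -x 1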