April 24, 2014

Percona Toolkit 2.2.2 released; bug fixes include pt-heartbeat & pt-archiver

During the Percona Live MySQL Conference & Expo 2013 the week before last, we quietly released Percona Toolkit 2.2.2 with a few bug fixes: pt-archiver --bulk-insert may corrupt data; pt-heartbeat --utc --check always returns 0; pt-query-digest 2.2 prints unwanted debug info on tcpdump parsing errors; pt-query-digest 2.2 prints too many string values; some tools don’t […]

Serious build and testing automation

Here at Percona we’ve spent a lot of time improving our development and testing practices. Why? Because constant innovation keeps us ahead and more productive. We want to work smarter, not harder. One of the tools we use is the Jenkins Continuous Integration server. We use Jenkins pretty heavily to help with our development processes […]

Quality Assurance: Percona Server Development Now Monitored by Automated Sysbench Performance Regression Checks!

Continuous integration of new features and bug fixes is great – but what if a small change in seemingly insignificant code causes a major regression in overall server performance? We need to ensure this does not happen. That said, performance regressions can be hard to detect. They may hide for some time (or be […]

Automation: A case for synchronous replication

Just yesterday I wrote about the math of automated failover; today I’ll share my thoughts on what makes MySQL failover different from many other components, and why the asynchronous nature of standard replication causes problems with it. Let’s first think about the properties of the simple components we fail over: web servers, application servers, etc. We […]

The Math of Automated Failover

A number of people have recently been blogging about MySQL automated failover, based on a production incident which GitHub disclosed. Here is my take on it. When we look at systems providing high availability, we can identify two cases of the system breaking down. The first is when the system itself has a bug or limitation which does not […]

The perils of uniform hardware and RAID auto-learn cycles

Last night a customer had an emergency on selected machines in a large cluster of quite uniform database servers. Some of the servers were slowing down in a very puzzling way over a short time span (a couple of hours). Queries were taking multiple seconds to execute instead of being practically instantaneous. But nothing seemed […]

The ARCHIVE Storage Engine – does it do what you expect?

Sometimes there is a need to keep large amounts of old, rarely used data without investing too much in expensive storage. Very often such data doesn’t need to be updated anymore, or the intent is to leave it untouched. I sometimes wonder what I should really suggest to our Support customers. For this purpose, the […]
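As a minimal sketch of what the ARCHIVE engine offers and where it tends to surprise people (the table audit_old and its columns are hypothetical, not from the post):

CREATE TABLE audit_old (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  message VARCHAR(255),
  KEY (id)                    -- ARCHIVE only allows an index on an AUTO_INCREMENT column
) ENGINE=ARCHIVE;             -- rows are compressed with zlib as they are inserted

INSERT INTO audit_old (logged_at, message) VALUES (NOW(), 'old row');   -- INSERT and SELECT are supported
SELECT COUNT(*) FROM audit_old;
UPDATE audit_old SET message = 'edited' WHERE id = 1;                    -- fails: ARCHIVE supports neither UPDATE nor DELETE

That write-once behavior is exactly why it fits data that will never be updated again, and also why it may not do what you expect.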

Adventures in archiving

One of our Remote DBA service clients recently had an issue with the on-disk size of a particular table; in short, this table held some 25 million rows of application audit data, with an on-disk size of 345GB, recorded solely for the purposes of debugging that may or may not occur. Faced with the task of […]

MySQL Indexing Best Practices: Webinar Questions Followup

I had a lot of questions during my MySQL Indexing: Best Practices webinar (both the recording and slides are available now). I did not have time to answer some of them, and others are better answered in writing anyway. Q: One developer on our team wants to replace longish (25-30) indexed varchars with […]
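Whatever the asker had in mind, one technique that often comes up for longish indexed varchars is a prefix index; a hedged sketch, with hypothetical table, column, and prefix length:

CREATE TABLE users (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email VARCHAR(30) NOT NULL,
  KEY (email)
) ENGINE=InnoDB;

-- Index only the first 10 characters, trading some selectivity for a much smaller index
ALTER TABLE users DROP KEY email, ADD KEY (email(10));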

Innodb Table Locks

InnoDB uses row-level locks, right? So if you see locked tables reported in SHOW ENGINE INNODB STATUS you might be confused, and rightfully so, as InnoDB table locking is a bit more complicated than traditional MyISAM table locks. Let me start with some examples. First, let’s run a SELECT query:
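A minimal sketch of the kind of statements involved (the table test.t and the locking-read form are assumptions, not the exact queries from the post):

-- Session 1: a locking read inside an explicit transaction
BEGIN;
SELECT * FROM test.t WHERE id = 1 LOCK IN SHARE MODE;

-- Session 2: inspect InnoDB's view of the locks being held
SHOW ENGINE INNODB STATUS\G
-- With lock details enabled (the InnoDB lock monitor), the TRANSACTIONS section
-- lists a table-level intention lock alongside the row lock, e.g.:
--   TABLE LOCK table `test`.`t` trx id ... lock mode IS
--   RECORD LOCKS ... index `PRIMARY` of table `test`.`t` ... lock mode S

The table-level entry is an intention lock, which is why an engine that locks rows still reports locked tables.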

As you can […]