I have run into a number of cases recently that all had a similar look and feel. The symptoms were often very complicated, but they boiled down to just a few problems that can be prevented in very simple ways.

If you are not already doing the following three simple things, you should be. They are essential practices for building successful applications, and the best part is how easy they are to do. Read on for the details.

1. Archive or Purge Your Database

Data archiving is crucial. There is a difference between data that your application needs now and data that your application used to need. Do not commingle the two. Plan from the outset to archive or purge data that is no longer needed. Everybody wants to know how to deal with an application that is growing; the simplest way is to prevent it from growing. Archiving and purging unnecessary data is probably one of the highest-value activities you could ever do.
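As an illustration, here is a minimal sketch of the batch-archiving pattern in Python, assuming a hypothetical orders table with an indexed created_at column and a matching orders_archive table; the connection details are placeholders. The idea is to move old rows in small batches so the job never holds long locks or builds one huge transaction:

```python
# Batch archiving sketch (hypothetical schema: orders / orders_archive,
# both with an indexed created_at column). Placeholder credentials.
import mysql.connector

BATCH_SIZE = 1000
CUTOFF = "NOW() - INTERVAL 90 DAY"  # keep 90 days of live data

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="appdb"
)
cur = conn.cursor()

while True:
    # Copy one batch of old rows, oldest first...
    cur.execute(
        f"INSERT INTO orders_archive "
        f"SELECT * FROM orders WHERE created_at < {CUTOFF} "
        f"ORDER BY created_at LIMIT {BATCH_SIZE}"
    )
    moved = cur.rowcount
    if moved:
        # ...then delete the same ordered, limited range. (Assumes no
        # concurrent writer is inserting rows older than the cutoff.)
        cur.execute(
            f"DELETE FROM orders WHERE created_at < {CUTOFF} "
            f"ORDER BY created_at LIMIT {BATCH_SIZE}"
        )
    conn.commit()  # small transactions, short locks
    if moved < BATCH_SIZE:
        break  # nothing left older than the cutoff

cur.close()
conn.close()
```

Tools such as Percona Toolkit's pt-archiver automate this same pattern, with niceties like replication-lag throttling and a purge-only mode.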

At a more technical level, archiving reduces the size of your workload’s “working set”: the data that the server really needs to keep in memory to function well. Unused rows bloat and dilute your indexes and tables. Storage engines such as InnoDB have also historically offered little user control over cache policies such as midpoint insertion, so a single table scan can flush genuinely hot data out of the LRU list and replace it with pages that are needed only once, for the scan itself. Keeping your indexes small enough to fit in memory is a big step towards better performance, and archiving helps you do exactly that.
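If you are not sure where you stand, one quick sanity check is to compare your table and index sizes against the buffer pool. Here is a sketch in Python with placeholder credentials; the information_schema columns it reads are real:

```python
# Compare per-table data/index sizes with the InnoDB buffer pool size.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="appdb"
)
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
_, pool_size = cur.fetchone()
print(f"buffer pool: {int(pool_size) / 2**30:.1f} GiB")

cur.execute(
    "SELECT table_name, COALESCE(data_length, 0), COALESCE(index_length, 0) "
    "FROM information_schema.tables "
    "WHERE table_schema = DATABASE() AND table_type = 'BASE TABLE' "
    "ORDER BY data_length + index_length DESC"
)
for name, data, idx in cur.fetchall():
    print(f"{name}: data {data / 2**20:.0f} MiB, indexes {idx / 2**20:.0f} MiB")

cur.close()
conn.close()
```

If the indexes you touch constantly add up to more than the buffer pool, archiving is one of the cheapest ways to close that gap.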

2. Monitor Your Servers

Everybody should have an automated monitoring system watching the key elements of their systems. This can be very simple, even just an external service such as Pingdom or PagerDuty, or it can be your own in-house Nagios installation. The point is, your users should not be the ones telling you when the service is down. Be the first to know, and have enough information to help you find the root of the problem.

Setting up monitoring can seem a daunting task, in part because of the steep learning curve that comes with most tools worth using. Another problem is the plethora of well-meaning but misguided advice and plugins for such tools, which will lead you to believe you should monitor every ratio and counter in your systems. That is a sure recipe for an inbox flooded with useless noise, a monitoring screen that looks like the scene of a crime, and ultimately people who are numb to the monitoring system and ignore it when something really does go wrong. Keep it simple and monitor the things that matter to your business: does the website load in the expected time and contain the expected text? Does the database server respond to a sample query that your application actually uses? Is the disk getting full?
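To show how little code “monitor what matters” requires, here is a hypothetical health check in Python. It assumes the requests and mysql-connector-python packages, and the URL, sample query, and threshold are placeholders for whatever your business actually depends on:

```python
# Minimal health check sketch: website, database, disk. All names and
# thresholds below are placeholders, not recommendations.
import shutil
import requests
import mysql.connector

def check_website():
    # Does the page load in time and contain the expected text?
    resp = requests.get("https://www.example.com/", timeout=10)
    return resp.status_code == 200 and "Welcome" in resp.text

def check_database():
    # Use a query your application actually runs, not just SELECT 1.
    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="appdb"
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT COUNT(*) FROM orders "
        "WHERE created_at > NOW() - INTERVAL 1 HOUR"
    )
    cur.fetchone()
    cur.close()
    conn.close()
    return True

def check_disk(path="/", max_used=0.90):
    # Is the disk getting full?
    usage = shutil.disk_usage(path)
    return usage.used / usage.total < max_used

for name, check in (("web", check_website),
                    ("db", check_database),
                    ("disk", check_disk)):
    try:
        ok = check()
    except Exception as exc:
        ok = False
        print(f"{name}: error ({exc})")
    if not ok:
        # Hook this up to whatever actually wakes someone: email, SMS,
        # Pingdom/PagerDuty, a Nagios passive check, etc.
        print(f"ALERT: {name} check failed")
```

Run something like this from cron or from your monitoring tool every minute or two, and you have covered the failures your users would otherwise be the first to report.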

3. Capture and Store Historical Data

Likewise, it is vital to have something recording metrics over time. A lot of people use an RRDTool-based system such as Cacti. Again, it doesn’t really matter which one you use. At some point in the future, you are going to be very grateful to have historical metrics, either to understand changes in your application or to understand what happened just prior to a crash. Here’s an example from a customer: the server crashed, and we could see in the log when that happened, but not why. The Cacti graph for disk space had a sharp curve: it showed the disk suddenly filling up at a rate of gigabytes per minute. We looked at the disk and found it had filled up with files written by an infinite loop in some code. How long would it have taken us to notice this otherwise, while checking dozens of possible causes?

Unlike with monitoring, you should measure and store everything you can, for as long as you can. You will never stay awake at night kicking yourself for measuring too much about your application.
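To show how small the barrier to entry is, here is a sketch of a trivial collector in Python. It samples a few MySQL status counters (real status variable names) and disk usage once a minute and appends them to a CSV file; the credentials and file path are placeholders. In practice you would feed Cacti or another RRDTool-based system, but the principle is the same: record it now so you can look back later.

```python
# Trivial metrics collector sketch: append a timestamped row of MySQL
# counters and disk usage to a CSV file once a minute.
import csv
import time
import shutil
import mysql.connector

COUNTERS = ("Questions", "Threads_connected", "Innodb_buffer_pool_reads")

while True:
    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="appdb"
    )
    cur = conn.cursor()
    cur.execute("SHOW GLOBAL STATUS")
    status = dict(cur.fetchall())  # {variable_name: value}
    cur.close()
    conn.close()

    disk = shutil.disk_usage("/")
    row = [int(time.time())] + [status[c] for c in COUNTERS] + [disk.used]

    with open("metrics.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)

    time.sleep(60)
```

With the disk-space example above, a file like this (or the Cacti graph built from it) is exactly what turns “the server crashed” into “the disk filled up at gigabytes per minute just before the crash.”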

All of the above are quite simple and fast to do once you’ve gone through the process a few times. I’d say that a practiced consultant can do a pretty comprehensive job on most applications in 8 hours or less. If it’s your first time through, it will take longer, and some good books might help. Either way, it is an investment with a very high ROI.

5 Comments
Lee Mallabone

Nice summary article, like it.

Archiving strategies could fill a huge article in themselves, though. In particular, if a dataset that needs archiving spans a set of tables with foreign keys, the set of queries needed to archive the data successfully (while maintaining foreign key constraints) can be challenging at best.
I’d love to see some follow-up on what (if any) tips and tricks are available to MySQL users for quickly and efficiently archiving complex datasets.

Other tools I’ve found useful for monitoring the simple question of “is the server up?” include:
http://montastic.com
http://awaremonitoring.com/

xianyuezhihua

Yeah. Prevention is much better than firefighting.

Dave Muller

I think there needs to be a note of caution on the archiving plan. We have a client with only a few hundred thousand records in some tables and terrible performance, all the fault of a dreadful design plus MyISAM. My predecessor suggested the client archive data rather than fix the app, which is in even more strife now.

Reanne

Very informative article, thanks! I completely agree on the importance of monitoring – there is a great new and free monitoring tool on the market, called Monitance, that is working like a charm for me. You can download it from http://www.monitance.com if you are interested!