When running InnoDB you are able to dig into the engine internals, look at various gauges and counters, see past deadlocks and the list of all open transactions. All of this is within your reach with one simple command: SHOW ENGINE INNODB STATUS. On most occasions it works beautifully. The problems appear when you have a large spike in the number of connections to MySQL, which often happens when several transactions kill the database performance, resulting in very long execution times for even the simplest queries, or when there is a huge deadlock.
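
For the record, this is how it is typically invoked from the shell (adjust credentials to your setup; the trailing \G simply formats the output vertically):

    # Dump the full InnoDB status report
    mysql -e "SHOW ENGINE INNODB STATUS\G"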

In such rare cases SHOW ENGINE INNODB STATUS often fails to provide the necessary information. The reason is that its output is limited to 64000 bytes, so a long list of transactions or a large deadlock dump can easily exhaust the limit. In such a situation MySQL truncates the output so that it fits within the required size, and obviously this is not good, since some valuable information may disappear from your sight.

You can actually deal with large deadlocks by intentionally creating a new, tiny deadlock which will replace the previous one in the output, thus reducing the space occupied by that section of the InnoDB status. Baron once wrote an article on how to do this; the sketch below shows the general idea.
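
This is not taken from Baron's article, just a minimal sketch of the technique: two sessions each update one row of a throwaway table (tiny_deadlock is a made-up name, and a test database is assumed to exist), then go for each other's row, so InnoDB records a new, small deadlock:

    # Throwaway InnoDB table with two rows
    mysql test -e "CREATE TABLE IF NOT EXISTS tiny_deadlock (id INT PRIMARY KEY, v INT) ENGINE=InnoDB;
                   INSERT IGNORE INTO tiny_deadlock VALUES (1, 0), (2, 0)"

    # Session A: lock row 1, wait a moment, then ask for row 2
    mysql test -e "BEGIN; UPDATE tiny_deadlock SET v = v + 1 WHERE id = 1;
                   SELECT SLEEP(2);
                   UPDATE tiny_deadlock SET v = v + 1 WHERE id = 2; COMMIT" &

    # Session B: lock row 2, wait, then ask for row 1; one of the two sessions
    # is rolled back with ERROR 1213 and this tiny deadlock becomes the
    # LATEST DETECTED DEADLOCK in the status output
    mysql test -e "BEGIN; UPDATE tiny_deadlock SET v = v + 1 WHERE id = 2;
                   SELECT SLEEP(2);
                   UPDATE tiny_deadlock SET v = v + 1 WHERE id = 1; COMMIT" &

    wait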

There is no such easy way to deal with the long transaction list, but fortunately there are some alternatives to the limited output of the MySQL command.

The first one is that you can have the innodb-status-file option set in your my.cnf. This will make InnoDB write the full status output into an innodb_status.<pid> file located in the MySQL data directory. Unfortunately this is a startup-time parameter, so unless you set it early, it will not be available in an emergency situation.
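
A minimal sketch of enabling it, assuming /etc/my.cnf is the configuration file your server actually reads (it only takes effect after a mysqld restart):

    # Repeated [mysqld] groups are merged, so appending a new one is enough;
    # run as root and restart mysqld afterwards
    printf '[mysqld]\ninnodb-status-file = 1\n' >> /etc/my.cnf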

Another possibility is to create a special InnoDB table called innodb_monitor.
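
For example, a minimal sketch (the column definition is arbitrary, only the table name matters to InnoDB; a test database is assumed to exist to hold it):

    # Start periodic InnoDB status dumps to the error log
    mysql test -e "CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB"

    # Stop them again later by dropping the table
    mysql test -e "DROP TABLE innodb_monitor"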

Creating it causes the full status to be periodically printed into the MySQL error log. You can later disable the logging by simply dropping the table. The problem I have faced many times is that people do not configure their error log at all, so the messages disappear into nothingness. So what then?

I discovered that InnoDB will still create the status file on disk even if you do not specify the innodb-status-file option. The file is actually used for every SHOW ENGINE INNODB STATUS call, so whenever someone runs the command, InnoDB writes the output to that file first and only then is the stored information read back to the user. To make things more difficult, MySQL keeps the file deleted, so it is not possible to access it with cat or any other regular command through the file system. However, on many systems such as Linux or Solaris, and possibly others, there is a relatively simple way to access deleted but not-yet-closed files (a file is physically removed only after it is no longer open by any process).

First, be sure to run SHOW ENGINE INNODB STATUS at least once. Then see what the MySQL process ID is:
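
For example on Linux (just a sketch; pgrep mysqld or pidof mysqld would work just as well):

    # Print the process ID of the running mysqld
    ps -C mysqld -o pid,cmd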

In my case the process ID is 11886, so I will be using it in the examples, but you should of course use whatever value the command returned to you.

Now you can use the /proc file system to see all the file descriptors that are being kept open by the process. So go to /proc/11886/fd and list all the files that were deleted.
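
A sketch of what that can look like; the output line shown in the comment is made up for illustration, so yours will differ:

    cd /proc/11886/fd
    # Keep only the descriptors pointing at deleted files; expect lines roughly like:
    #   lrwx------ 1 mysql mysql 64 ... 5 -> /tmp/ibXj4f2a (deleted)
    ls -l | grep '(deleted)'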

The entries are presented as symbolic links from a file descriptor number to the real path, with "(deleted)" appended when the file has been removed.

One of these entries is what you are looking for. It is often (always?) the file with the lowest file descriptor number, so in my case it should be 5, but of course you can try reading the first few bytes from every such file to discover which one holds the InnoDB status. You can also help yourself with the lsof tool available for many platforms:
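
For example, again assuming the process ID is 11886:

    # List the files mysqld keeps open; deleted ones are marked in the NAME column
    lsof -p 11886 | grep -i deleted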

The 4th column contains the file descriptor numbers and the 7th column the file sizes. This makes it obvious that the InnoDB status has to be under file descriptor 5, since it is the only file with a non-zero length.

So while you are still in the /proc/11886/fd directory, you can try looking into that file:
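
For example, with the descriptor number identified above:

    # The descriptor entry can be read like a regular file
    # (run as root or as the user mysqld runs as)
    cat 5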

Keep in mind that the file is only refreshed when you run the SHOW ENGINE INNODB STATUS command.

4 Comments
Arjen Lentz

I should check again, but I believe that it’s the mysql cmdline client that truncates the output, not the server. So it is possible to get it.

We could tweak so that the innodb-status-file option is enabled by default… doesn’t make much sense not to, if it writes to a file first anyway…. but I prefer a solution that can be accessed from the client side (although slow query log can’t be either in 5.0).

Baron Schwartz

I think it’s the server. The code is in InnoDB. There are a couple places it happens; in one place it actually checks how big the output is and stops outputting any more txns.

The best tweak is a) my patches, which reduce the verbosity, b) Google’s patches, which increase the allowed size and move the deadlock and other less-useful stuff to the end of the output, so if it gets truncated at least you get the important stuff.

Arjen Lentz

Ok, so why don’t we just take out the limiter. If people ask for it, they want it. Indeed we can do this inside Baron’s patch so people can still control what they want to see.