InnoDB/XtraDB tables do benefit from being reorganized periodically. You can get the data physically laid out in primary key order, as well as a better fill factor for the primary key and index pages, and so use less space. It's just that MySQL's OPTIMIZE TABLE might not be the best way to do it.
Why you shouldn’t necessarily optimize tables with MySQL OPTIMIZE
If you're running the InnoDB Plugin, or Percona Server with XtraDB, you get the benefit of a great new feature: the ability to build indexes by sort instead of by insertion. This process can be a lot faster, especially for large indexes whose inserts arrive in essentially random order, such as an index on a UUID column or something similar. It also produces a much better fill factor. The problem is that OPTIMIZE TABLE for InnoDB tables does not take advantage of it, for whatever reason.
Let's take a look at a little benchmark I did by running OPTIMIZE a second time on a table some 10 times larger than the amount of memory I allocated for the buffer pool:
CREATE TABLE `a` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `c` char(64) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `c` (`c`)
) ENGINE=InnoDB AUTO_INCREMENT=12582913 DEFAULT CHARSET=latin1

mysql> select * from a order by id limit 10;
+----+------------------------------------------+
| id | c                                        |
+----+------------------------------------------+
|  1 | 813cf02d7d65de2639014dd1fb574d4c481ecac7 |
|  2 | 62960f5d5d50651e5a5983dacaedfa9a73a9ee87 |
|  3 | cea33998792ffe28b16b9272b950102a9633439f |
|  4 | 8346a7afa0a0791693338d96a07a944874340a1c |
|  5 | b00faaa432f507a0d16d2940ca8ec36699f141c8 |
|  6 | 8e00926cf6c9b13dc8e0664a744b7116c5c61036 |
|  7 | f151fe34b66fd4d28521d5e7ccb68b0d5d81f21b |
|  8 | 7fceb5afa200a27b81cab45f94903ce04d6f24db |
|  9 | 0397562dc35b5242842d68de424aa9f0b409d60f |
| 10 | af8efbaef7010a1a3bfdff6609e5c233c897e1d5 |
+----+------------------------------------------+
10 rows in set (0.04 sec)

# These are just random SHA(1) hashes

mysql> optimize table a;
+--------+----------+----------+-------------------------------------------------------------------+
| Table  | Op       | Msg_type | Msg_text                                                          |
+--------+----------+----------+-------------------------------------------------------------------+
| test.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.a | optimize | status   | OK                                                                |
+--------+----------+----------+-------------------------------------------------------------------+
2 rows in set (3 hours 3 min 35.15 sec)

mysql> alter table a drop key c;
Query OK, 0 rows affected (0.46 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> optimize table a;
+--------+----------+----------+-------------------------------------------------------------------+
| Table  | Op       | Msg_type | Msg_text                                                          |
+--------+----------+----------+-------------------------------------------------------------------+
| test.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.a | optimize | status   | OK                                                                |
+--------+----------+----------+-------------------------------------------------------------------+
2 rows in set (4 min 5.52 sec)

mysql> alter table a add key(c);
Query OK, 0 rows affected (5 min 51.83 sec)
Records: 0  Duplicates: 0  Warnings: 0
That's right! Optimizing the table straight away takes over 3 hours, while dropping the indexes besides the primary key, optimizing the table, and adding them back takes about 10 minutes: close to a 20x speed difference, and a more compact index in the end.
So if you're considering running OPTIMIZE on your tables, consider using this trick. It is especially handy on a slave, where it may be acceptable for the table to be exposed without indexes for some time. Note, though, that nothing stops you from using LOCK TABLES on an InnoDB table to ensure a flood of queries does not start reading the table while it has no indexes and bring the box down.
You can also use this trick for an ALTER TABLE which requires a table rebuild. Dropping all indexes, doing the ALTER, and then adding them back can be a lot faster than a straight ALTER TABLE.
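As a minimal sketch, the whole trick on the benchmark table `a` above is just three statements (only secondary indexes are dropped; the PRIMARY KEY stays, since InnoDB tables are clustered on it):

```sql
ALTER TABLE a DROP KEY c;   -- drop secondary indexes, keep the PRIMARY KEY
OPTIMIZE TABLE a;           -- rebuilds only the clustered index, so it is fast
ALTER TABLE a ADD KEY (c);  -- re-created by sort where fast index creation is available
```

On a busy server you may want to wrap this in LOCK TABLES, as noted above, so queries do not hit the table while it is missing its secondary indexes.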
P.S. I do not know why this was not done when support for creating indexes by sort was implemented. It looks very strange to me to have the feature implemented while the majority of high-level commands and tools (such as mysqldump) do not take advantage of it and still use the old, slow method of building indexes by insertion.
More resources on performance:
Posts
- Is 80% of RAM how you should tune your innodb_buffer_pool_size?
- MySQL server memory usage troubleshooting tips
- MySQL 5.7 performance tuning immediately after installation
- Ten MySQL performance tuning settings after installation
Webinars
- Top most overlooked MySQL performance optimizations
- Percona Server for MySQL 5.7: key performance algorithms
Free eBooks
- InnoDB Performance Optimization
- Practical MySQL Performance Tuning: section 1 – understanding query parameter tuning and MySQL optimization
- Practical MySQL Performance Tuning: section 2 – troubleshooting performance issues and optimizing MySQL
- Practical MySQL Performance Tuning: section 3 – query optimization
Note: In MySQL 5.5, OPTIMIZE TABLE does not take advantage of the InnoDB Fast Index Creation feature. This limitation is documented in the official MySQL 5.5 documentation.
From: https://dev.mysql.com/
OPTIMIZE TABLE for an InnoDB table is mapped to an ALTER TABLE operation to rebuild the table and update index statistics and free unused space in the clustered index. This operation does not use fast index creation. Secondary indexes are not created as efficiently because keys are inserted in the order they appeared in the primary key.
Percona Server 5.5.11 and higher allows utilizing the fast index creation feature for ALTER TABLE and OPTIMIZE TABLE operations, which can potentially speed them up greatly. The feature is controlled by the expand_fast_index_creation system variable, which is OFF by default. This variable was implemented in Percona Server 5.5.16-22.0.
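Under those assumptions (Percona Server 5.5.16-22.0 or later), enabling the feature is a one-variable change; a sketch:

```sql
-- expand_fast_index_creation is OFF by default in Percona Server:
SET GLOBAL expand_fast_index_creation = ON;
OPTIMIZE TABLE test.a;  -- secondary indexes are now rebuilt by sort
```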
More Information:
https://www.percona.com/blog/
https://www.percona.com/doc/
MySQL 5.6 introduced the online DDL feature, which provides support for in-place table alterations. As of MySQL 5.6.17, OPTIMIZE TABLE can be performed in place when rebuilding regular and partitioned InnoDB tables, which makes the OPTIMIZE TABLE operation much faster.
Table 14.13 Online DDL Support for Table Operations
Operation | In Place | Rebuilds Table | Permits Concurrent DML | Only Modifies Metadata
---|---|---|---|---
Optimizing a table | Yes* | Yes | Yes | No
Optimizing a table
OPTIMIZE TABLE tbl_name;
Performed in place as of MySQL 5.6.17. In-place operation is not supported for tables with FULLTEXT indexes. The operation uses the INPLACE algorithm, but ALGORITHM and LOCK syntax is not permitted.
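If I read the 5.6 manual correctly, the explicit-clause equivalent of this rebuild is ALTER TABLE ... FORCE; a sketch:

```sql
-- MySQL 5.6.17+: roughly equivalent in-place rebuilds; only the
-- ALTER form accepts explicit ALGORITHM and LOCK clauses:
OPTIMIZE TABLE tbl_name;
ALTER TABLE tbl_name FORCE, ALGORITHM=INPLACE, LOCK=NONE;
```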
What does mysqldump have to do with creating indexes? It doesn’t create indexes at all….do you mean when importing an export made by mysqldump? And is that different regardless of
Also, how big was this table on disk? I’m in the middle of doing some XtraDB table optimizations, they take 3 hours for a 57G .ibd file; after they’re done I’ll try this method.
According to my calculations, your test table is 528Mb….(charset is latin1, sha(1) is 40 chars + 1 byte to indicate length of the varchar, int is 4 bytes, so that’s 45 bytes per row. Autoincrement is 12582913, so you have max 12582912 rows:
12582912*45/1024/1024=540.000
Is that accurate? (even if it’s more than 1 byte for varchar length, it doesn’t change the result that much…)
Also did you test a 2nd time by dropping the indexes first and optimizing and re-adding, and then running OPTIMIZE TABLE, to make sure it wasn’t influenced somehow by the first OPTIMIZE TABLE’s defragmentation/rebuild?
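The commenter's arithmetic can be reproduced in the server itself (the 45-bytes-per-row figure is the commenter's assumption, ignoring InnoDB page and row overhead):

```sql
-- 12582912 rows x 45 bytes, converted to megabytes:
SELECT 12582912 * 45 / 1024 / 1024 AS size_mb;  -- 540 MB of row data
```

Real .ibd files are larger than this raw-data estimate because of page fill factor and per-row overhead.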
er, sorry, my sentence was cut off — “And is that different regardless of whether or not ALTER TABLE…DISABLE KEYS is used?”
Sheeri, this bug: http://bugs.mysql.com/bug.php?id=49120
There’s also:
http://bugs.mysql.com/bug.php?id=57583
Morgan — thanx, that makes more sense re: mysqldump. Something like --indexes-after-for-innodb or something, so that in the mysqldump output the indexes are added after the table data is inserted. Gotcha.
Sheeri,
Yes, the point is that mysqldump could be fixed to support creating indexes after the data is loaded, which would make it a lot faster for InnoDB tables. Or InnoDB could be fixed to support ENABLE KEYS / DISABLE KEYS, which mysqldump already includes in the dump.
The table is not small, but the buffer pool in this case is also just 128M; this is my test box.
I mentioned this is the second OPTIMIZE TABLE run just to make sure it is a "null" operation: it should simply rebuild the same table.
Besides the UTF-8 bug in Fast Index Creation, it is a good idea to drop and re-create indexes.
The InnoDB plugin documentation says that altering foreign keys on tables will cause a table copy instead of fast index creation. Is there an alternate way to optimize the creation of these indexes?
I believe this is what JS was getting at…what if I have a table that has Foreign Keys referencing other tables? Can I just drop those, do the optimize, and then re-add?
I tried this trick when importing InnoDB tables with about 200,000 rows: I first created the tables without indexes, ran the import, ran OPTIMIZE TABLE, and then added the indexes afterwards (followed by ANALYZE TABLE). Unfortunately I could not notice any difference. Is there anything more to consider here?
Actually I wonder if a table should even be optimized after importing at all.
Sorry, I think I missed that this trick only applies to Percona Server.
Yeah? You going to do this “drop key” thing manually on every key in every table you want to “optimize” when you’ve got 10 databases, each with several hundred tables and multiple keys?? Let me know next month when you’re finally done.
Cool! This is a very good tip. I was actually looking for a better way to optimize my database since it is taking almost 6 hours to optimize a single table. Your trick is way faster thanks!
Kenny – sure, that's one reason why even a half-competent DBA will write a script.
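As a sketch of such a script, the DROP KEY / ADD KEY statement pairs for one table's secondary indexes can be generated from information_schema (the schema/table values and CONCAT formatting here are my own; prefix lengths, UNIQUE keys, and FULLTEXT indexes would need extra handling):

```sql
-- Generate one DROP/ADD statement pair per secondary index of test.a:
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` DROP KEY `', index_name, '`;') AS drop_stmt,
       CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` ADD KEY `', index_name, '` (',
              GROUP_CONCAT('`', column_name, '`' ORDER BY seq_in_index),
              ');') AS add_stmt
FROM information_schema.statistics
WHERE table_schema = 'test' AND table_name = 'a'
  AND index_name <> 'PRIMARY'
GROUP BY table_schema, table_name, index_name;
```

A wrapper can then run all drop statements, OPTIMIZE TABLE, and all add statements, looping over every table of interest.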
hi peter,
for this trick to work, do I need to enable fast_index_creation?
thanking you.
regards,
ch Vishnu
My InnoDB file size is 5.9GB, and I can't optimize it using mysqlcheck -o -A…
fuk innodb….
Peter, I have an InnoDB table that I want to optimize, but another table (a child table) has a foreign key constraint pointing into this table. I tried this:
SET FOREIGN_KEY_CHECKS = OFF;
ALTER TABLE schema.my_parent_table DROP all foreign keys …
ALTER TABLE schema.my_parent_table DROP all indexes …
OPTIMIZE TABLE schema.my_parent_table;
ALTER TABLE schema.my_parent_table ADD back all indexes and foreign keys …
SET FOREIGN_KEY_CHECKS = ON;
All of the above works except for the optimize itself (I verified). I get “Error 1025 … Error on rename of … (errno: 150)” because of the FK in the child table. I thought that “SET FOREIGN_KEY_CHECKS = OFF” would disable that relationship.
I even tried dropping the parent table (which works), then recreating it exactly as it was before the drop (with only the PK). I get the same error.
Is there a way I get optimize to work on parent tables?
Addition to the above entry: I guess I should have said:
Is there a way I can get optimize to work on a parent table *without* having to drop the FK constraints in all child tables and then re-create them after I finish with the parent table?
YOU SAVE MY LIFE!!! THAAAAAAAAAANK YOU!!!!!!
Is this still true for newer MySQL 5.6.x?
Article is outdated (at least for MariaDB).
MariaDB [uni_db]> CALL dorepeat(1000000);
Query OK, 0 rows affected (1 min 46.11 sec)
MariaDB [uni_db]> optimize table a;
+----------+----------+----------+-------------------------------------------------------------------+
| Table    | Op       | Msg_type | Msg_text                                                          |
+----------+----------+----------+-------------------------------------------------------------------+
| uni_db.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| uni_db.a | optimize | status   | OK                                                                |
+----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (9.51 sec)
MariaDB [uni_db]> alter table a drop key c;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0
MariaDB [uni_db]> optimize table a;
+----------+----------+----------+-------------------------------------------------------------------+
| Table    | Op       | Msg_type | Msg_text                                                          |
+----------+----------+----------+-------------------------------------------------------------------+
| uni_db.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| uni_db.a | optimize | status   | OK                                                                |
+----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (4.83 sec)
MariaDB [uni_db]> alter table a add key(c);
Query OK, 0 rows affected (4.85 sec)
Records: 0 Duplicates: 0 Warnings: 0
I had a table with 6GB of data; optimizing it was taking more than 16 hours. After dropping the two existing indexes, it took 30 minutes to optimize and 50 min + 9 min to add the indexes back, which was a pretty good result.
But in production I still have a table with 150GB of data; can anyone tell me how long all of the above activities might take?
I have an InnoDB table 650 GB in size whose actual data should be about 100 GB, on MySQL 5.5 on Solaris 10. OPTIMIZE TABLE was taking a huge amount of time; I had to cancel it partway through, as its speed suggested it would take more than a day. I will try dropping all indexes and see.
I liked these notes; however, on MySQL 5.6, dropping a primary key takes a long time as well… Is there a way to drop it faster?
I received the message "Tables using the InnoDB engine (10) will not be optimised. Other tables will be optimised (25)" and don't understand what it means. Will you please explain what it is and how the tables get optimised?
Could you do the same thing using pt-online-schema-change --alter 'engine=innodb'?
For me it did not help: a table with 1887681 rows and 500 columns takes longer without indexes (only the primary key) than with all indexes.