Finding the largest tables on MySQL

Finding the largest tables on a MySQL instance is a no-brainer in MySQL 5.0+ thanks to the Information Schema, but I still wanted to post the little query I use for the purpose so I can easily find it later. Plus, it is quite handy in the way it presents the information:
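
SELECT CONCAT(table_schema, '.', table_name),
       CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
       CONCAT(ROUND(data_length / (1024*1024*1024), 2), 'G') DATA,
       CONCAT(ROUND(index_length / (1024*1024*1024), 2), 'G') idx,
       CONCAT(ROUND((data_length + index_length) / (1024*1024*1024), 2), 'G') total_size,
       ROUND(index_length / data_length, 2) idxfrac
FROM   information_schema.TABLES
ORDER  BY data_length + index_length DESC
LIMIT  10;

(This is the version of the query readers quote back in the comments below, with the INFORMATION_SCHEMA qualification from the update applied.)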

I do some converting and rounding to show the number of rows in millions and the data and index sizes in GB, which saves me from counting zeros.
The last column shows how large the index is compared to the data. This is mainly for informational purposes, but for MyISAM it can also help you size your key buffer relative to the operating system cache.
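
For example, to see the total MyISAM index size as a rough upper bound for the key buffer (a sketch along the same lines, not part of the original query):

SELECT CONCAT(ROUND(SUM(index_length) / (1024*1024*1024), 2), 'G') myisam_index_total
FROM   information_schema.TABLES
WHERE  engine = 'MyISAM';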

I also use it to see which tables may be worth reviewing in terms of indexes. A large index size compared to the data size often indicates there are a lot of indexes (so it is quite possible some of them are duplicate, redundant, or simply unused), or that InnoDB tables have a long primary key. Of course, it could also be a perfectly fine table, but it is worth a look.
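
One way to follow up on a table with a suspiciously high index-to-data ratio is to count its indexes via information_schema.STATISTICS, for example (a sketch):

SELECT table_schema, table_name,
       COUNT(DISTINCT index_name) index_count
FROM   information_schema.STATISTICS
GROUP  BY table_schema, table_name
ORDER  BY index_count DESC
LIMIT  10;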

By changing the query a bit, using a different sort order or adding extra data such as average row length, you can learn quite a lot about your schema this way.
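
For example, a variant sorted by average row length instead of total size (a sketch, not the exact query from the post):

SELECT CONCAT(table_schema, '.', table_name) tbl,
       avg_row_length,
       CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows
FROM   information_schema.TABLES
ORDER  BY avg_row_length DESC
LIMIT  10;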

It is also worth noting that queries on information_schema can be rather slow if you have a lot of large tables. On this instance, it took 2.5 minutes to run for 450 tables.

UPDATE: To make things easier I've added INFORMATION_SCHEMA to the query, so it works regardless of which database is active. It still does not work with MySQL before 5.0, of course 🙂

28 Comments
Unomi

I'm sorry, but where does 'TABLES' come from? Am I missing something here? Is it something I'm supposed to know that is not mentioned in the article?

I ran this query against the 'mysql' database on 5.0.51, but mysql.TABLES cannot be found. Where should I look for the missing link?

– Unomi –

Matthew Montgomery

Unomi,

This is a query against information_schema.TABLES. The information_schema database was added in v5.0.

Matt,

Frank Mash

Unomi, you need to use information_schema:

mysql> use information_schema;

Jay Pipes

Great stuff, Peter! I encourage you to add this to the Forge snippets repository! 🙂

Cheers,

Jay

Gabriel Menini

Same error here…

David Linsin

Awesome Peter, thanks!

Gabriel Menini

Frank, thanks for the tip 🙂 It’s working now.

Unomi

Thanks all for the tip. It's new to me, but I wanted to get this query running… Every day is a new day to learn something!

– Unomi –

Radek

Example works perfectly.

Great website and great ideas, keep working dude.

gigiduru

Peter,

Why exactly is it taking 2 min 29.19 sec to extract that data? I thought it was readily available in the dictionary, and I don't believe it takes a lot of processing power to transform, concatenate, and order the results.
Also, this is something MySQL AB should fix (among the several thousand other not-yet-fixed bugs).

Chris

Thanks Peter, this was a great help.

Yafei Qin

I have the same question: as gigiduru mentioned, it takes more than half a minute to check all my databases.
Thanks Peter. 🙂

btw, a tip:
If you want to show tables only in a certain database, add a WHERE clause to the SQL:
WHERE table_schema = 'db_name'
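
For example, the full query restricted to one database ('db_name' being a placeholder):

SELECT CONCAT(table_schema, '.', table_name) tbl,
       CONCAT(ROUND((data_length + index_length) / (1024*1024*1024), 2), 'G') total_size
FROM   information_schema.TABLES
WHERE  table_schema = 'db_name'
ORDER  BY data_length + index_length DESC;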

vladislav

Nice. Thanks for your time and effort. Open Source is cool.

lvermilion

Nice post!! The only issue I have is that it does not give the same result as "select count(*) from ". For that matter, the show table status output is not the same either. The true size of the table keeps increasing and I cannot find records being deleted anywhere. Can you explain why this is?

Here is an example.

mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
| 14828558 |
+----------+
1 row in set (39.94 sec)

#########
Now to make it interesting, I will do the show table status and your SQL statement and get different results from the count(*). (Note: the "show table status" results would match the output of your SQL statement if I could execute them at identical seconds.)
#########

mysql> SELECT concat(table_schema,'.',table_name),concat(round(table_rows/1000000,2),'M') rows,concat(round(data_length/(1024*1024*1024),2),'G') DATA,concat(round(index_length/(1024*1024*1024),2),'G') idx,concat(round((data_length+index_length)/(1024*1024*1024),2),'G') total_size,round(index_length/data_length,2) idxfrac FROM information_schema.TABLES ORDER BY data_length+index_length DESC LIMIT 10;
+--------------------------------------+--------+-------+-------+------------+---------+
| concat(table_schema,'.',table_name)  | rows   | DATA  | idx   | total_size | idxfrac |
+--------------------------------------+--------+-------+-------+------------+---------+
| database.table                       | 14.73M | 2.08G | 0.00G | 2.08G      | 0.00    |
+--------------------------------------+--------+-------+-------+------------+---------+
10 rows in set (0.15 sec)

mysql> show table status like 'table' \G
*************************** 1. row ***************************
Name: table
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 14898977
Avg_row_length: 150
Data_length: 2237661184
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: NULL
Create_time: 2008-10-01 09:38:00
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 4021248 kB
1 row in set (0.06 sec)

########
I have run your query and the show table status query yet again and get new values that are not consistent with "select count(*) from ".
########

+--------------------------------------+--------+-------+-------+------------+---------+
| concat(table_schema,'.',table_name)  | rows   | DATA  | idx   | total_size | idxfrac |
+--------------------------------------+--------+-------+-------+------------+---------+
| database.table                       | 15.00M | 2.08G | 0.00G | 2.08G      | 0.00    |
+--------------------------------------+--------+-------+-------+------------+---------+

mysql> show table status like 'table' \G
*************************** 1. row ***************************
Name: table
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 15120698
Avg_row_length: 147
Data_length: 2237661184
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: NULL
Create_time: 2008-10-01 09:38:00
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 4021248 kB
1 row in set (0.02 sec)

mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
| 14828945 |
+----------+
1 row in set (44.33 sec)

lvermilion

If I check the MySQL reference, show table status seems to be very inaccurate if we take it for the number of rows, but the data_length seems to be accurate. Is your SQL statement keying off the same values that show table status is?

Rows

The number of rows. Some storage engines, such as MyISAM, store the exact count. For other storage engines, such as InnoDB, this value is an approximation, and may vary from the actual value by as much as 40 to 50%. In such cases, use SELECT COUNT(*) to obtain an accurate count.

Stefan

Quick question.
I have a large MyISAM table with 4 fields (id = primary key, categID = int index, subcategID = int index, content = blob [this field holds a serialized array, avg 9KB]); this table has 5,000,000 records and its size is 9GB. A select like SELECT content FROM table WHERE categID = 'categID' AND subcategID = 'subcategID' takes ~4-5 seconds. Any idea how to speed this up?

Jonathan Bayer

Stefan,

Why not make a combined key of categID and subcategID?
CREATE INDEX index_2 ON table (categID, subcategID);

Right now, if I'm reading it properly, it uses the categID index and then scans everything within that index. With a combined key, it can filter on both at the same time.
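
A quick way to verify the combined index is used once it is created (a sketch; table and column names follow Stefan's description, the constants are made up):

EXPLAIN SELECT content FROM `table`
WHERE  categID = 42 AND subcategID = 7;
-- the "key" column of the EXPLAIN output should now show index_2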

JDS

How do the results of these queries compare to the size of the tables on disk? For example, using file_per_table, if the total_size column equals 1GB, how will this relate to the actual size of the table.ibd file on disk?

I understand that the size of the table.ibd file will not shrink if data is deleted. What I'm wondering is: if there is a large discrepancy between the sizes, is this an indicator that it may be a good opportunity to run OPTIMIZE and thus reclaim some hard disk space?
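
One rough check along these lines (a sketch): information_schema.TABLES also exposes a data_free column, which for InnoDB reports free space inside the tablespace, so sorting by it hints at what OPTIMIZE might reclaim ('databasename' is a placeholder):

SELECT table_name,
       ROUND(data_free / (1024*1024*1024), 2) free_gb
FROM   information_schema.TABLES
WHERE  table_schema = 'databasename'
ORDER  BY data_free DESC;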

Taking real-world data, I have for example a table that looks like this from the query in this article:

+---------------------------------------+---------+--------+--------+------------+---------+
| CONCAT(table_schema, '.', table_name) | rows    | DATA   | idx    | total_size | idxfrac |
+---------------------------------------+---------+--------+--------+------------+---------+
| databasename.tablename                | 147.23M | 24.47G | 25.38G | 49.85G     | 1.04    |

Key point: total_size is 49.85GB

On disk, that table’s .ibd file looks like this:

# cd /var/lib/mysql/databasename
# ls -lh tablename*
-rw-rw—- 1 mysql mysql 8.7K 2010-08-05 16:31 tablename.frm
-rw-rw—- 1 mysql mysql 58G 2012-04-24 13:41 tablename.ibd

Comparing the on-disk size with the information_schema query, there is around a 9GB discrepancy. Does this have any real-world meaning?

Thanks

BH

Thank you very much for this handy query! I adjusted it down from gigabytes to megabytes to suit my needs and it’s working great!

Simon

Thanks, very helpful!

kavitha

MySQL server version 5 and above has all these possibilities to fetch data from information_schema, but lower versions like 4.x.x don't have information_schema. Any idea how to get the memory used by each schema and the largest tables under each schema?
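
For pre-5.0 servers, one fallback (a sketch, not from the original post) is SHOW TABLE STATUS, which reports per-table sizes one database at a time; the per-schema totals then have to be summed up client-side:

SHOW TABLE STATUS FROM db_name;
-- the Data_length and Index_length columns give each table's data and index size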

Pavel

Thank you boy!

Rahul Kadukar

Hi,

This does not give exact results if you are running InnoDB. Any suggestions?

Jason

Awesome man I was looking for exactly this to find out which tables were being bad little piggies! Thanks!!!

Vedavrat

Thank you!

I rewrote your query this way:

SELECT CONCAT(table_schema, '.', table_name),
CONCAT(ROUND(table_rows / 1, 0), ' r.') rows,
CONCAT(ROUND(data_length / (1024*1024), 1), ' MB') data,
CONCAT(ROUND(index_length / (1024*1024), 1), ' MB') idx,
CONCAT(ROUND((data_length+index_length) / (1024*1024), 1), ' MB') total,
ROUND(index_length / data_length, 2) idxfrac
FROM information_schema.TABLES
ORDER BY data_length + index_length ASC;

It seems more useful to me.

Vikas Arya

Hi,
I am using MySQL server version 5.1 (free version). I have a table whose size is 3.93G and which contains 7,240,704 rows, and I want to insert 9,000,000 more rows into it, but at insertion time the server gives an error like: Error Code: 1114. The table 'form_data_archive' is full.
What should I do? I am facing this problem on a production server, please help ASAP.

Eric Ruiz

10 years and still helping!
Thanks!