- A lot of tables, indexes, and columns
- A lot of records
- If a table has > ~90M records, you have to create the table with
MAX_ROWS=<number of records in the table, anticipating growth>:

CREATE TABLE t1 (...) ENGINE=ndbcluster MAX_ROWS=200000000;

This way the data nodes will allocate more partitions for the table, since there is a limit on how many records can be stored in one partition (the limit is around 90M records).
- Many tables / table objects --> Make sure you increase MaxNoOfTables (the kernel limit is 20320 tables). This creates a table object pool of size MaxNoOfTables.
- Each table you create will use one table object.
- Each unique index you create will use one table object
- Each ordered index you create will use one table object
- Each BLOB/TEXT attribute will use one table object.
For example, if you create the following table:

CREATE TABLE `t1` (
`a` int(11) NOT NULL DEFAULT '0',
`b` char(32) DEFAULT NULL,
`c` blob,
`d` bigint(20) DEFAULT NULL,
PRIMARY KEY (`a`),
UNIQUE KEY `b` (`b`),
KEY `d` (`d`)
) ENGINE=ndbcluster;
The table will require table objects as follows:
- PRIMARY KEY (there is an ordered index on the PK): 1 table object
- UNIQUE KEY (unique hash index + ordered index): 1 + 1 table objects
- KEY `d` (ordered index): 1 table object
- BLOB: 1 table object
- Actual table: 1 table object
- IN TOTAL = 6 table objects
If you had specified PRIMARY KEY USING HASH and UNIQUE KEY USING HASH, then you could save two table objects, one for each ordered index.
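To illustrate, a hash-only variant of the table above would be declared like this (only do this if you do not need range scans or ordered retrieval on these keys):

```sql
-- Hash-only variant: no ordered indexes are created for the primary
-- and unique keys, saving two table objects compared to the table above.
-- Trade-off: USING HASH keys only support equality lookups, not range scans.
CREATE TABLE `t1` (
  `a` int(11) NOT NULL DEFAULT '0',
  `b` char(32) DEFAULT NULL,
  `c` blob,
  `d` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`a`) USING HASH,
  UNIQUE KEY `b` (`b`) USING HASH,
  KEY `d` (`d`)
) ENGINE=ndbcluster;
```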
- Many UNIQUE indexes --> Increase MaxNoOfUniqueHashIndexes
- Many ordered indexes --> Increase MaxNoOfOrderedIndexes
- Many attributes --> Increase MaxNoOfAttributes (should be the sum of the number of columns over all tables, plus 3-10% spare).
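As a hypothetical sizing sketch (the table and column counts below are assumptions for illustration, not from this post): suppose your schema has 500 NDB tables averaging 20 columns each. The sum of columns is 500 * 20 = 10000, and adding ~10% spare gives a setting like:

```
# 500 tables * 20 columns = 10000 attributes, plus ~10% spare
MaxNoOfAttributes=11000
```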
You may also have to check and increase the following parameters in config.ini:
- DataMemory
- IndexMemory
- SendBufferMemory=8M
- ReceiveBufferMemory=8M
- StringMemory (e.g., increase from StringMemory=25 to StringMemory=50)
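Put together, the relevant part of the [ndbd default] section of config.ini might look like the following sketch. The values here are illustrative assumptions only; size the object pools from your own schema (sum of tables, indexes, BLOB attributes, and columns) and the memory parameters from your data volume and available RAM:

```
[ndbd default]
# Object pools -- size these from your schema, plus some spare
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=1024
MaxNoOfAttributes=32768
# Memory -- size from data volume and available RAM
DataMemory=4096M
IndexMemory=512M
SendBufferMemory=8M
ReceiveBufferMemory=8M
StringMemory=50
```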
This page will be updated if I find other things.
If you still have problems, let us/me know, e.g., on the cluster mailing list!
The Configurator has these parameters already specified in config.ini, so it is a small thing to change (just run the ./rolling-restart.sh script).
7 comments:
What would you think of the idea:
Making MaxNoOfUniqueHashIndexes and MaxNoOfOrderedIndexes default to unlimited?
You would still retain the ability to limit these values, but DBAs new to MySQL Cluster would have one less thing to find/change when testing large InnoDB databases on cluster.
Hi Johan,
I need to configure many databases which will have approx. 12 tables each, and I am facing the limitation of MaxNoOfTables, which is 20320. But I want to create more tables. Is there any way I can do this? Any alternative?
Manu,
No, the hard limit is 20320 tables.
Denormalize?
Out of curiosity, why do you need so many tables?
BR
johan
Simon,
It will occupy RAM, that is why.
BR
johan
Does this limit of 20320 database objects include the tables and indexes of MySQL's default databases like information_schema, mysql, test, etc.?
Vineet,
yes, it is the sum of:
ordered indexes, unique indexes, blob attributes, and normal tables!
BR
johan
And it counts only tables with storage engine=ndb;
so not the tables in the 'mysql' database, nor information_schema.* (those are not real tables anyway).