Wednesday, April 07, 2010

MySQL Cluster - BLOB performance and other things

At the UC 2010 I will have a session on MySQL Cluster Performance Tuning. This session will address a lot of the most common performance problems I see in my day to day job with Cluster. A small excerpt of the talk is below here and many other things will be addressed in the session (JOINs, schema optimization, batching, indexes, parameter tuning etc). I hope to see you there!

First a bit on ndb_autoincrement_prefetch_sz and then blobs!


One common issue is contention on auto_increment values, which can really slow down INSERT performance.
By default, ndb_autoincrement_prefetch_sz=1. This means that the mysqld will cache only one auto_increment number and then go down to the data nodes to fetch the next one. It is better to let the mysqld cache more numbers, to avoid these unnecessary round-trips.

From MySQL Cluster 7.0.13 you can set it as high as 65536 (previously the max was 256).

With ndb_autoincrement_prefetch_sz=1024 the mysqld will cache 1024 numbers before fetching the next range of numbers from the data nodes.
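As a sketch, the variable can be set in my.cnf or changed at runtime (pick a value suited to your insert rate; 1024 is just the value used in the test below):

```sql
-- In my.cnf under [mysqld] (takes effect on restart):
--   ndb_autoincrement_prefetch_sz = 1024

-- Or at runtime, for all new connections:
SET GLOBAL ndb_autoincrement_prefetch_sz = 1024;

-- Verify the current value:
SHOW VARIABLES LIKE 'ndb_autoincrement_prefetch_sz';
```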

Some numbers for inserting batches of 16 records from 8 concurrent threads on one mysqld:
ndb_autoincrement_prefetch_sz=1:    1211.91 TPS
ndb_autoincrement_prefetch_sz=256:  3471.71 TPS
ndb_autoincrement_prefetch_sz=1024: 3659.52 TPS
Here we got a 3x improvement in INSERT performance. Nice!

This test is by no means trying to max out the cluster, far from it, just to illustrate how important it is to set the ndb_autoincrement_prefetch_sz.


Another issue concerns BLOB/TEXT attributes, which are overused in many applications (e.g., there is no reason to store an 'email' address as TEXT, or to use a BLOB when the data is less than about 8000B).

If you can change them to VARBINARY/VARCHAR (as LinuxJedi suggests), do it:

BLOBs/TEXTs are significantly slower than VARBINARY/VARCHAR, because the BLOB data is stored in a separate table and must be locked with at least a shared lock when accessed.

To illustrate, I created a table:

CREATE TABLE `t1_blob` (
  `id` int(11) NOT NULL,
  `data1` blob,
  `data2` blob,
  PRIMARY KEY (`id`)
) ENGINE=ndbcluster;

and inserted 200K records, with sizeof(data1) = 1024B and sizeof(data2) = 1024B.

SELECT data1, data2 FROM t1_blob WHERE id=<rand>

Executed from one application with 8 threads, against one mysqld and two data nodes, this resulted in:

data1 and data2 represented as BLOBs:      5844 TPS
data1 and data2 represented as VARBINARYs: 19206 TPS

Again, this is not maxing out the data nodes, just to show you the idea.

If you can replace TEXT/BLOB with VARCHAR/VARBINARY (remember that the max record size in Cluster is 8052B), do it.
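As a sketch, the VARBINARY equivalent of the test table above would look like this (the table name `t1_varbinary` is just for illustration; 1024-byte values fit comfortably within the 8052B record limit):

```sql
CREATE TABLE `t1_varbinary` (
  `id` int(11) NOT NULL,
  `data1` varbinary(1024),   -- stored in-row, no separate BLOB table
  `data2` varbinary(1024),
  PRIMARY KEY (`id`)
) ENGINE=ndbcluster;
```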
