Both of which were worked around, and really we never did need to have a BINARY or a VARBINARY as the PK; we used a
Performance is great: we have two data nodes (Nehalem, 32GB RAM, 146GB 10K SAS disk, 2x quad-core 2.4GHz E5620) and two application hosts (same spec, but with less RAM than the data nodes), and we easily push through 70-80K transactions per second (reads and writes). We could easily do more if we added more applications on the application hosts; the data nodes are only using about 50-60% of their CPU capacity.
The graph (MySQL Cluster statistics) is quite spiky at the start: as you can see, transactions/operations go up and down. This is because we first tested with 10M records, then reloaded with 20M, 40M, and finally 80M records and let it run overnight.
The requests are primary-key based: the table has a PK followed by a VARBINARY column (and some other meta fields). The VARBINARY stores a tree structure representing a parent-child relationship, encoded as JSON, so we are effectively using a document-oriented model. This way we avoid the complex queries needed to navigate a parent-child structure, which would mean many round trips between the application and the data nodes; instead we fetch one big packet in a single network round trip. Writes in this model are more costly, since we write back the whole parent-child structure instead of just one node in it.
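To make this concrete, here is a minimal Cluster/J sketch of that model (not the actual test code): one primary-key find() returns the whole JSON-encoded tree in a single round trip, and a write saves the whole tree back. The table, column and property names, the connect string and the JSON payload are all made up for illustration.

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.Column;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class DocumentStoreExample {

    // Maps to an NDB table with a BIGINT PK and a VARBINARY document column
    // (see the CREATE TABLE sketch further down).
    @PersistenceCapable(table = "tree_doc")
    public interface TreeDoc {
        @PrimaryKey
        long getId();
        void setId(long id);

        @Column(name = "doc")
        byte[] getDoc();          // the whole parent-child tree, JSON-encoded
        void setDoc(byte[] doc);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("com.mysql.clusterj.connectstring", "mgmhost:1186"); // hypothetical management host
        props.put("com.mysql.clusterj.database", "test");

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Write: serialize and store the whole tree in one VARBINARY value.
        TreeDoc doc = session.newInstance(TreeDoc.class);
        doc.setId(42L);
        doc.setDoc("{\"id\":42,\"children\":[{\"id\":43},{\"id\":44}]}"
                .getBytes(StandardCharsets.UTF_8));
        session.savePersistent(doc);   // insert-or-update

        // Read: one primary-key lookup returns the whole tree in one round trip.
        TreeDoc fetched = session.find(TreeDoc.class, 42L);
        System.out.println(new String(fetched.getDoc(), StandardCharsets.UTF_8));

        session.close();
    }
}

Since the whole document travels as one value, updating a single child still rewrites the full tree, which is exactly the write-cost trade-off described above.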
In the Java applications we also have a REST interface to the data.
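As a rough illustration of how such a REST read path could look (an assumption, not our actual implementation), the JDK's built-in HttpServer can map a GET on /doc/<id> straight to the primary-key lookup. The TreeDoc interface and URL layout come from the sketch above.

import com.mysql.clusterj.Session;
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class DocRestServer {

    // Starts a tiny HTTP endpoint that serves the stored JSON documents.
    // Note: a Cluster/J Session is not thread-safe; a real server would use
    // one session per worker thread or per request.
    public static void start(Session session) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /doc/<id> -> one primary-key read, response body is the JSON tree.
        server.createContext("/doc/", exchange -> {
            String path = exchange.getRequestURI().getPath();   // e.g. /doc/42
            long id = Long.parseLong(path.substring("/doc/".length()));

            DocumentStoreExample.TreeDoc doc =
                    session.find(DocumentStoreExample.TreeDoc.class, id);

            byte[] body = (doc == null)
                    ? "{}".getBytes(StandardCharsets.UTF_8)
                    : doc.getDoc();
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(doc == null ? 404 : 200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}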
SQL is used for one thing only: creating the tables.
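For completeness, a sketch of that one SQL step, run over JDBC: a CREATE TABLE with the primary key, the VARBINARY document column, and an example meta field. The names match the hypothetical TreeDoc mapping above, not the original schema, and the SQL node endpoint and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        // Hypothetical SQL node endpoint and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://sqlhost:3306/test", "user", "password");
             Statement stmt = conn.createStatement()) {

            stmt.executeUpdate(
                "CREATE TABLE IF NOT EXISTS tree_doc (" +
                "  id  BIGINT NOT NULL PRIMARY KEY," +      // primary-key based access
                "  doc VARBINARY(8000) NOT NULL," +         // JSON-encoded parent-child tree
                "  updated_at TIMESTAMP NULL" +             // example of a meta field
                ") ENGINE=NDBCLUSTER");
        }
    }
}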
CMON Enterprise is used for monitoring and management, and the Cluster configuration comes from Severalnines Core Scripts.
Finally, we did some fine-tuning of MySQL Cluster and got another 20% - that is not shown in the graph, and is another story.
The only thing we are missing (or rather waiting for) now is the multi-connection functionality (ndb_cluster_connection pooling) in Cluster/J.