Friday, May 08, 2009

Cluster fails to start - self diagnosis (sanity check mysql cluster)

If the MySQL Cluster fails to start, what can you do and what should you check?
Here are some sanity checks.

Initial system start
If it is the first time you start up the cluster and it fails, then check the following:
  • Network - Check /etc/hosts
    You should have (on the line for localhost)
    127.0.0.1 localhost
    and nothing else! Then usually, the normal IP address host mapping follows:
    10.0.1.10 hostA
    10.0.1.11 hostB
    ...
    Red Hat and other distributions can add a lot of other things to the "localhost" line(s), which results in the nodes not being able to connect to each other (they will be stuck in start phase 0 or phase 1).
  • Network - Check if you can ping the machines
  • Network - Check if you have any firewalls enabled (e.g. check with /sbin/iptables -L)
    Disable the firewall in that case. Exactly how depends on the OS and Linux distribution.
    On Red Hat systems, SELinux might also be enabled. Googling "disable firewall <your distro>" should give answers. A firewall is the most common culprit preventing the nodes in the cluster from talking to each other (see the check sequence after this list).
  • RAM - Check if you have enough RAM to start the data nodes
    Check using 'top' on the computers where the data nodes are running, while you start the data nodes. So always have 'top -d1' running on the data node hosts while they are starting up.
  • RAM - If you are allocating a lot of DataMemory, then you may also need to increase the parameter TimeBetweenWatchdogCheckInitial in the [NDBD DEFAULT] section of your config.ini. Set it to 60000 if you have >16GB of RAM (see the config.ini excerpt after this list).
  • Disk space - Check using 'df -h' that you have enough free space where the data nodes have their data directories.
  • CPU - If you use 7.0, enable multi-threading (8 cores) and only have a 4-core system or less, then there is a chance that the Cluster won't come up, due to competition for resources. I have seen this happen but have no conclusive evidence yet.
  • OS - If you have a mix of OS versions where the data nodes run, it can be a problem. E.g., I have seen problems even when Fedora was used on all machines, but one of the machines had a slightly older Linux kernel. Also, it won't work if one of the nodes is on RH4 and another is on RH5 (at least mixing RH3 and RH4 doesn't work).
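
A quick way to run these environmental checks on each data node host could look like the sketch below (hostB, the data directory path and the Red Hat-style service commands are only assumptions; adapt them to your hosts and distribution):

  # Can this host reach the other cluster hosts?
  ping -c 3 hostB

  # Any firewall rules that could block the cluster ports? (empty chains = no filtering)
  /sbin/iptables -L -n

  # On Red Hat/Fedora: is SELinux enforcing?
  getenforce

  # Temporarily stop the firewall and set SELinux to permissive (Red Hat style)
  service iptables stop
  setenforce 0

  # Enough free RAM and disk space for the data nodes?
  free -m
  df -h /var/lib/mysql-cluster

  # Keep this running while the data nodes start
  top -d1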
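
The watchdog setting mentioned in the RAM point above goes into config.ini roughly like this (the DataMemory and NoOfReplicas values are only illustrative, keep your own):

  [NDBD DEFAULT]
  NoOfReplicas=2
  DataMemory=20G
  # give the data nodes more time to allocate and initialize memory during the initial start
  TimeBetweenWatchdogCheckInitial=60000
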
So for the "initial start" it is mainly environmental factors preventing the cluster to start.
If you still have problems, ask on the Cluster Forum or MySQL Support if you have Support for advice.

Also, disable NUMA (Cluster is not NUMA aware) and make sure you don't swap!
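
A rough way to check both on Linux (the sysctl and the numactl interleaving shown here are standard tools; whether you instead disable NUMA in the BIOS or with the numa=off boot parameter is up to you):

  # Is anything swapped out? Ideally nothing, or no swap configured at all
  swapon -s
  free -m

  # Discourage the kernel from swapping
  sysctl -w vm.swappiness=0

  # One way to avoid NUMA imbalance: interleave memory when starting the data node
  numactl --interleave=all ndbd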

System start
If you can't restart the cluster, and you haven't changed the configuration and haven't been filling up the disks with other things (i.e., check disk, RAM, and network as above), then you have probably hit a bug. Ask for advice on the Cluster Forum, or contact MySQL Support if you have a support contract.

In many cases it is recoverable by restarting one node in each node group (instead of all data nodes) and trying out different combinations. When the "half" cluster has started, you can restart the rest of the data nodes with --initial and they will sync up from the already started nodes.
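
As a sketch, with two node groups (hypothetical node ids 2,3 in group 0 and 4,5 in group 1), such a "half" system start followed by initial restarts of the rest could look like this:

  # On the management host: which nodes are up and which node groups do they belong to?
  ndb_mgm -e show

  # Start one data node from each node group (run on the respective hosts)
  ndbd             # e.g. node 2 on hostA
  ndbd             # e.g. node 4 on hostC

  # Once those reach 'started', wipe and resync the remaining nodes from them
  ndbd --initial   # e.g. node 3 on hostB
  ndbd --initial   # e.g. node 5 on hostD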

Node (re)start
If you can't restart a failed data node, and you haven't changed the configuration and haven't been filling up the disks with other things (i.e., check disk, RAM, and network as above), then you have probably hit a bug, but there might also have been corruption of the data files (this depends on how the computer/data node crashed).

You can try to do an initial node restart (see below).

Ask for advice on the Cluster Forum, or contact MySQL Support if you have a support contract.

Initial Node (re)start
If you can't restart a failed data node with --initial, and you haven't changed the configuration and haven't been filling up the disks with other things (i.e., check disk, RAM, and network as above), then you have probably hit a bug. Ask for advice on the Cluster Forum, or contact MySQL Support if you have a support contract.

Collecting error data
The program 'ndb_error_reporter' is great for collecting the log files from the data nodes and management servers; it puts them all into a single bzip2-compressed archive. Send this file to either the Cluster Forum or MySQL Support (if you have Support), together with detailed steps describing what you have done.
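
Usage is simple; run it on a host that can ssh to all nodes (the config.ini path and the ssh user below are just examples):

  # collect logs from all nodes listed in config.ini into one ndb_error_report_*.tar.bz2
  ndb_error_reporter /var/lib/mysql-cluster/config.ini

  # optionally give the ssh user, and include the data nodes' filesystem directories as well
  ndb_error_reporter /var/lib/mysql-cluster/config.ini root --fs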

10 comments:

Matthew Montgomery said...

Also, for the initial system start I found that you can have problems if you use hostnames in the config.ini and your /etc/hosts file points the hostname of the system to localhost. The cluster nodes will not accept connections from multiple interfaces (ethN and lo).

Johan Andersson said...

Thanks, that is right.
Have this in /etc/hosts:
127.0.0.1 localhost
a.b.c.d hostname

TonyBee said...

Hi Matthew,

Many thanks for the suggestion. Your comment has helped a lot.

tonyb

Tri said...

hi Johan, we have a cluster set up that has 4 data nodes and one management node. When we tried to restart with or without the --initial option, we got the following error on one of the data nodes:

jbalock waiting for lock, contentions: 1 spins: 1

Before this happened, we were able to load some data, but were forced to shut down because the undo log was full. After that, even after cleaning up the data node dir and the database dir, this problem still comes up during restart. Any ideas what happened? Thanks a lot

Johan Andersson said...

Hi,

based on the information it is hard to say. It would be great if you could give more details in a mail to the cluster mailing list ( http://lists.mysql.com/cluster ), and we can pick it up there.

However, "jbalock waiting for lock, contentions: 1 spins: 1" is not a error message, but just a basically a debug/information message for the multi-threading.


BR
johan

Knockin_Heavens_door said...

Hi Johan

I have just been through your live webinar on MySQL Cluster Deployment Best Practices. It has been a very informative session, with your tips and suggestions on various aspects like hardware selection, administration, primary keys, etc. Thanks.

I also have a question about a response you have provided in a query above here.
I was trying to do some tests in my lab using MySQL Cluster. I am using 2 servers each with 2 mgmt nodes, 2 storage nodes, and 2 sql nodes. When checked using SHOW in ndb_mgm, it shows all 6 nodes connected and running.
But in the file ndb_*_out.log, I see errors like these below consistently appearing:
jbalock thr: 2 waiting for lock, contentions: 1800 spins: 21175
send lock node 6 waiting for lock, contentions: 6 spins: 2883
sendbufferpool waiting for lock, contentions: 15 spins: 1226

In an earlier post here, you have mentioned that errors like "jbalock waiting for lock" are not actually errors, but debug information. So, are the other errors I have shown above also just informational? Should terms like "contentions", "waiting for lock" and "spins" not be considered serious problems?
In my test currently, I have hardly any data.
Could you please suggest.
Interestingly, these errors do not come up if I use ndbd instead of ndbmtd.
Thanks very much !

Johan Andersson said...

Hi,
I am writing a new blog post about the contentions. It should be finished pretty soon.

BR
johan

Johan Andersson said...

Here is the blog post:

http://johanandersson.blogspot.com/2010/09/cluster-spinscontentions-and-thread.html

Mariatti, Ariatti, etc... said...

Many thanks for the help. Your comment has helped a lot!!!

Marcelo Ariatti said...

Thanks a lot man! great tips! Your post has helped a lot.