[gmx-users] parallel run - more than one node / ci range checking error even applying the patch / parallel runs

Claus Valka lastexile7gr at yahoo.de
Wed Oct 1 15:50:12 CEST 2008


Hello,

In reply to my previous question: even with the mpi_d options at compile time I was not able to solve my problem. Every time I run GROMACS on more than one node, I get the error below after some number of steps. This is very annoying, because I cannot increase the speed of my runs beyond two processors (at least I can run in parallel to that extent). To make sure the compilation was successful I even restarted the whole cluster, to no avail. Runs on a single node work fine, though. Any help would be greatly appreciated!

Regards,
Nikos
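Since the single-node runs succeed, it may be worth double-checking that the binary really was linked against MPI before suspecting the cluster itself. One quick check (a sketch, assuming a dynamically linked build; the grep pattern is illustrative):

    # An MPI library (e.g. libmpi) should appear here if the build picked up MPI.
    ldd `which mdrun_mpi` | grep -i mpi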
--- Claus Valka <lastexile7gr at yahoo.de> wrote on Fri, 19.9.2008:
From: Claus Valka <lastexile7gr at yahoo.de>
Subject: [gmx-users] ci range checking error even applying the patch / parallel runs
To: gmx-users at gromacs.org
Date: Friday, 19 September 2008, 16:24

Hello,

After searching the mailing lists extensively I wasn't able to solve my problem. It has to do with running GROMACS in parallel (on more than one node) on a Rocks cluster. I am able to run a simulation on one or two processors within a dual-core node, yet every time I try to use more than one node, this error appears:

Program mdrun_mpi, VERSION 3.3.2
Source code file: nsgrid.c, line: 226
    
    Range checking error:
    Explanation: During neighborsearching, we assign each particle to a grid
    based on its coordinates. If your system contains collisions or parameter
    errors that give particles very high velocities you might end up with some
    coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
    put these on a grid, so this is usually where we detect those errors.
    Make sure your system is properly energy-minimized and that the potential
    energy seems reasonable before trying again.
    
   Variable ci has value 741. It should have been within [ 0 .. 740 ]
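Since the message suggests verifying that the potential energy looks reasonable, one quick way to inspect it is g_energy. A minimal sketch, assuming the run produced an energy file named ener.edr:

    # Extract the potential-energy time series; g_energy reads the term
    # selection from stdin. The output filename is illustrative.
    echo Potential | g_energy -f ener.edr -o potential.xvg

A blow-up usually shows as the potential diverging in the last frames before the crash.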

I am not the only one facing this problem, and searching the forums I came across a patch: [gmx-users] fix for range checking errors in parallel double precision mdrun
Compiling GROMACS with this version of xtcio.c doesn't seem to solve the problem. Is there anything else I'm missing? Here are my configure options in case they are helpful:

    ./configure --prefix=/export/local/gromacs-3.3.2 --program-suffix= --enable-double --disable-float --disable-fortran --with-x --with-fft=fftw3
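For comparison, the MPI double-precision mdrun for 3.3.x is normally built in a second configure pass. A sketch, reusing the prefix from the line above (and assuming FFTW3 was itself compiled in double precision):

    # Reconfigure with MPI enabled and rebuild only mdrun.
    ./configure --prefix=/export/local/gromacs-3.3.2 --enable-double \
                --enable-mpi --program-suffix=_mpi --with-fft=fftw3
    make mdrun
    make install-mdrun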

My commands to run a simulation, for example on six nodes, are:

    grompp -f *.mdp -c *.gro -p *.top -np 6

and in my submission script the arguments for running are:

    -pe mpi 6
    mpirun -np 6 --hostfile $TMPDIR/machines mdrun_mpi

Thank you in advance,
Nikos
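For reference, the whole job assembled into one submission script might look roughly like this (a sketch; the parallel-environment name and the input file names are illustrative and cluster-specific):

    #!/bin/sh
    #$ -cwd
    #$ -pe mpi 6
    # Preprocess for 6 nodes; in 3.3.x the node count is fixed at grompp time.
    grompp -f run.mdp -c conf.gro -p topol.top -np 6 -o topol.tpr
    # mdrun's -np must match the value given to grompp.
    mpirun -np 6 --hostfile $TMPDIR/machines mdrun_mpi -np 6 -s topol.tpr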


      