[gmx-users] MD PME in parallel

David spoel at xray.bmc.uu.se
Tue Mar 11 19:50:16 CET 2003


On Tue, 2003-03-11 at 20:26, e.akhmatskaya at fle.fujitsu.com wrote:
> Hi David,
> 
> >I have run it using LAM and SCALI networks, in both cases it crashes at
> >the end when writing the coordinates (confout.gro). It writes roughly
> >6000 lines out of 23000.
> Sounds familiar to me! I had this scenario on the Linux cluster too.
> 
> >You implied somehow that the problem only occurs when you have no water
> >on node 0.
> This is what I thought. Now I think it is more complicated: it depends on
> the distribution of water molecules, but in a more complicated way than that.
> 
> >There is a workaround for that: the -load option of grompp
> >allows you to modify the division over nodes, e.g.:
> >grompp -load "1.1 1.0 1.0 1.0 1.0"
> Thanks for the idea! Yes, by playing with this option I can make those
> benchmarks run on all machines. However, performance becomes very
> disappointing. This is not surprising, as I am changing the load blindly.
> Perhaps I can play further and adjust the load on a few more processors
> to improve the load balance, but I still believe there should be a proper
> fix in the code. I haven't found one so far ...
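To make the workaround a bit more concrete, this is roughly how I would
apply it by hand (the file names, the node count and the weights below are
only placeholders, and the exact grompp options may differ between versions;
as I understand it, a larger relative weight gives that node a bigger share
of the charge groups):

  # sketch only: preprocess for 5 nodes, giving node 0 about 10% extra load
  # so that it is less likely to end up without any water molecules
  grompp -np 5 -load "1.1 1.0 1.0 1.0 1.0" -f md.mdp -c conf.gro -p topol.top -o topol.tpr
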
It probably is a bug; however, these seem to be hard to debug, since they
turn up at different times. If you can find a situation where it crashes
with a water molecule on processor 0, that would be interesting too.

On the other hand, you will probably get better performance by running this
on a smaller number of processors. I routinely run systems of roughly 200000
atoms with PME on 16 processors; the parallel performance is, however,
rather poor.
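
For completeness, the kind of invocation I have in mind looks roughly like
this (assuming an MPI-enabled mdrun running under LAM/MPI; the host file and
file names are placeholders, and the exact flags depend on your version and
setup):

  lamboot hostfile                      # start LAM on the nodes listed in 'hostfile'
  grompp -np 16 -f md.mdp -c conf.gro -p topol.top -o topol.tpr
  mpirun -np 16 mdrun -s topol.tpr -v   # node count must match the one given to grompp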

Just out of curiosity, are you involved in software development for
Fujitsu?

-- 
Groeten, David.
________________________________________________________________________
Dr. David van der Spoel, 	Dept. of Cell and Molecular Biology
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


