[gmx-users] Question about parallelizing Gromacs

Florian Haberl Florian.Haberl at chemie.uni-erlangen.de
Wed Sep 13 14:14:39 CEST 2006


hi,

On Wednesday 13 September 2006 13:10, Milan Melichercik wrote:
> On Wednesday 13 September 2006 12:42, Qiao Baofu wrote:
> > Hi all,
> >
> > I have a question about parallelizing GROMACS: I run the same system on a
> > cluster at my institute and on my local computer.
> >      Cluster: dual-processor boards, AMD Opteron 270 (dual-core), 2.0 GHz
> >      Local computer: AMD x86-64 CPU, double precision
> >
> > 1. The cluster (nodes=3:ppn=4) runs 87950 MD steps in one hour.
> > 2. The cluster (nodes=5:ppn=4) runs 42749 MD steps in one hour.
> > 3. The cluster (nodes=11:ppn=4) runs 5962 MD steps in one hour.
> > 4. My local computer runs 179090 MD steps in 1 hour 51 minutes.
> >
> > It is very strange that the more CPUs I use, the slower GROMACS
> > runs!
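
A quick sanity check on the numbers quoted above already shows the problem:
your single local machine gets through more steps per minute than 12 cluster
cores do. Plain arithmetic on the quoted figures (1 h 51 min = 111 min, and
assuming nodes=3:ppn=4 means 12 processes):

  echo "scale=1; 179090 / 111" | bc   # local machine:            ~1613 steps/min
  echo "scale=1; 87950 / 60" | bc     # nodes=3:ppn=4 (12 cores): ~1466 steps/min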

Try a power of two (2, 4 or 8 CPUs/cores) rather than something like 10
nodes. You also need a fast interconnect such as InfiniBand to reach good
scaling at higher node counts; an example invocation is sketched below.
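
For example, with an MPI-enabled GROMACS 3.3.x build, an 8-process run could
look roughly like this (binary names, file names and the MPI launcher vary
from site to site, so treat it as a sketch; note that in 3.3.x the -np given
to grompp and to mdrun must match):

  grompp -np 8 -f md.mdp -c conf.gro -p topol.top -o topol.tpr
  mpirun -np 8 mdrun_mpi -np 8 -s topol.tpr -g md.log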

The upcoming release of GROMACS will scale better thanks to new algorithms.
You can try the CVS version, which has separate PME nodes and domain
decomposition.
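
(In that development line the number of dedicated PME nodes eventually became
an mdrun option; in the later 4.0 release it is called -npme. Whether a given
CVS snapshot already accepts it may differ, so take this as a sketch:

  mpirun -np 8 mdrun_mpi -s topol.tpr -npme 2

Here 2 of the 8 MPI processes do only PME, and the other 6 handle the
real-space domain decomposition.)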

GROMACS 3.3.1 scales well with InfiniBand up to 32 cores.


> >
> > Who knows what's wrong with my job? And for parallel GROMACS, how
> > many CPUs are preferred?
>
> As far as I know, the problem isn't your job but the interconnect between
> nodes, because MD (like many other parallel computations) is very sensitive
> to the interconnect (network) bandwidth and even more to its latency: the
> processes need to transfer large amounts of data to the other nodes, and
> until the other nodes have that data, they cannot compute. Another problem
> can be congestion of the network, where the switch (or the network in
> general) cannot handle such a large amount of data and drops some of it...
> So (in the extreme case of a very slow network) you may get the fastest
> system by using only one node (and all of the available CPU cores on it). I
> can't give you a more specific answer because I don't know your cluster,
> but I think you will get the best result simply by trying the job on 1, 2,
> 3, etc. nodes...
>
> Milan Melichercik
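
Milan's benchmarking suggestion is the right approach. A small loop like the
one below makes the scaling curve easy to measure (a sketch only: adjust the
input files and MPI launcher to your cluster, and submit it through your
queueing system rather than running it interactively):

  for np in 1 2 4 8; do
      grompp -np $np -f md.mdp -c conf.gro -p topol.top -o bench_$np.tpr
      mpirun -np $np mdrun_mpi -np $np -s bench_$np.tpr -g bench_$np.log
  done
  grep -H "Performance" bench_*.log   # compare the summary at the end of each log

Steps per wall-clock hour, as in your numbers above, works just as well; the
point is to stop adding nodes as soon as the throughput stops improving.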

Greetings,

Florian

-- 
-------------------------------------------------------------------------------
 Florian Haberl                        
 Computer-Chemie-Centrum   
 Universitaet Erlangen/ Nuernberg
 Naegelsbachstr 25
 D-91052 Erlangen
 Telephone:   +49 (0) 9131 - 85 26581
 Mailto: florian.haberl AT chemie.uni-erlangen.de
-------------------------------------------------------------------------------


