[gmx-users] Strange statistics when running parallel Gromacs

Paulo S. L. de Oliveira Paulo.Oliveira at incor.usp.br
Wed Sep 25 18:21:55 CEST 2002


Hi GMXers

         I've started using Gromacs and I'm trying to get the program running 
on my PC cluster using MPI. I first tested it on a dual Athlon XP 1800+ with a 
protein+water system (4180 atoms) run for 1000 ps. For that I used:

grompp -np 2 -shuffle -sort -f complex -o full -c after_pr -p MUT_R167Q
mpirun -np 2 -s n0 mdrun_mpi -v -s full

         It worked fine: both CPUs on the dual Athlon reached about 99% load 
and performance was 8.75 NODE hours/ns. After that, I lambooted the system 
with nine CPUs (two dual Athlons and five single Athlons) and repeated the 
commands above, changing -np 2 to -np 9 (see the sketch below). To my 
surprise, performance got worse: 9.85 NODE hours/ns. CPU load on the first 
dual machine showed 80% and 60%, on the other dual 70% and 55%, and on all 
the single-CPU machines only about 20%. All machines are dedicated to running 
molecular dynamics applications, and there was no other application consuming 
CPU that would explain the low performance.
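
         For reference, the nine-process run was along these lines (the boot 
schema file name below is just a placeholder for the file listing my seven 
machines):

lamboot -v lamhosts
grompp -np 9 -shuffle -sort -f complex -o full -c after_pr -p MUT_R167Q
mpirun -np 9 -s n0 mdrun_mpi -v -s full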

         I would like to know if anyone has had a similar problem, and I 
would appreciate any suggestions.


         Thanks in advance!

                                 Paulo


