[gmx-users] Attempting to scale gromacs mdrun_mpi

Roland Schulz roland at utk.edu
Mon Aug 23 18:10:47 CEST 2010


Hi,

you don't say what network you have or how many cores per node. If it is
Ethernet, it will be difficult to scale.
You might want to try the 4.5 beta version, because we have improved the
scaling in it. There is also a tool, g_tune_pme, which might help you.
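For 4.5, a minimal g_tune_pme invocation might look roughly like the sketch
below; the process count and the topol.tpr file name are placeholders, and the
exact options may differ between versions, so check g_tune_pme -h in your
installation:

  # Point g_tune_pme at your launcher and MPI-enabled mdrun (assumption:
  # it picks these up from the MPIRUN and MDRUN environment variables).
  export MPIRUN=mpirun
  export MDRUN=mdrun_mpi
  # Benchmark different PP:PME splits for a 10-process run of topol.tpr.
  g_tune_pme -np 10 -s topol.tpr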

Roland

On Mon, Aug 23, 2010 at 11:15 AM, NG HUI WEN <HuiWen.Ng at nottingham.edu.my> wrote:

>  Hi,
>
>
>
> I have been playing with the "mdrun_mpi" command in GROMACS 4.0.7 to try
> out parallel processing. Unfortunately, the results I got did not show
> any significant improvement in simulation time.
>
>
>
> Below is the command I issued:
>
>
>
> mpirun -np x mdrun_mpi -deffnm
>
>
>
> where x is the number of processors being used.
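>
> For example, with a run prefix of "md" (the prefix here is just a
> placeholder; -deffnm makes mdrun read md.tpr as input and write its output
> files with the same prefix), the fully written-out command for 10 processors
> would be something like:
>
> mpirun -np 10 mdrun_mpi -deffnm md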
>
>
>
> From the machine output, it seemed that the work had indeed been
> distributed across multiple processors, e.g. with -np 10:
>
> NNODES=10, MYRANK=5, HOSTNAME=beowulf
> NODEID=4 argc=3
> NNODES=10, MYRANK=1, HOSTNAME=beowulf
> NNODES=10, MYRANK=2, HOSTNAME=beowulf
> NODEID=1 argc=3
> NODEID=9 argc=3
> NODEID=5 argc=3
> NNODES=10, MYRANK=3, HOSTNAME=beowulf
> NNODES=10, MYRANK=7, HOSTNAME=beowulf
> NNODES=10, MYRANK=8, HOSTNAME=beowulf
> NODEID=8 argc=3
> NODEID=2 argc=3
> NODEID=6 argc=3
> NODEID=3 argc=3
> NODEID=7 argc=3
> Making 2D domain decomposition 5 x 1 x 2
> starting mdrun 'PROTEIN'
> 1000 steps,      2.0 ps.
>
>
>
> The simulation system consists of 100,581 atoms and the run length is 2 ps
> (1000 steps). The results obtained are as follows:
>
>
>
> Number of CPUs    Simulation time
>  1                13m28s
>  2                 6m31s
>  3                 7m33s
>  4                 6m47s
>  5                 7m48s
>  6                 6m55s
>  7                 7m36s
>  8                 6m58s
>  9                 7m15s
> 10                 7m01s
> 15                 7m27s
> 20                 7m15s
> 30                 7m42s
>
>
>
> A significant improvement in simulation time was only observed going from
> -np 1 to -np 2. As almost all runs (except -np 1) complained about load
> imbalance and PP:PME imbalance (the latter seen especially at larger -np
> values), I tried to increase the number of PME nodes by adding a -npme flag
> with a bigger number, but the results either showed no improvement or got
> worse.
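>
> For illustration, adding the flag looks like this (the run prefix "md" and
> the particular split are placeholders, not a recommendation):
>
> mpirun -np 10 mdrun_mpi -deffnm md -npme 2
>
> i.e. 2 of the 10 processes become dedicated PME nodes and the remaining 8
> do the particle-particle work.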
>
>
>
> As I am new to GROMACS, there may be things I have missed or done
> incorrectly. I would really appreciate some input on this. Many thanks in
> advance!
>
>
>
> HW
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309