Hi,

You don't say what network you have or how many cores per node. If it is Ethernet, it will be difficult to scale.
You might want to try the 4.5-beta version, because we have improved the scaling in it. There is also a tool, g_tune_pme, which might help you.
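With the 4.5 beta, a minimal invocation looks something like the following (a sketch only; topol.tpr and the process count are placeholders for your own run input and hardware):

    g_tune_pme -np 8 -s topol.tpr

It launches a series of short mdrun benchmarks with different numbers of PME-only nodes and reports which split performs best.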

Roland

On Mon, Aug 23, 2010 at 11:15 AM, NG HUI WEN <HuiWen.Ng@nottingham.edu.my> wrote:

<p class="MsoNormal">Hi,</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">I have been playing with the “<span>mdrun_mpi</span>” command in <span>gromacs</span> 4.0.7 to try out <span> </span>parallel processing. Unfortunately, the results I got did not show any significant improvement in simulation time.</p>


<p class="MsoNormal"> </p>
<p class="MsoNormal">Below is the command I issued:</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal"><span>mpirun</span> –<span>np</span> x <span>mdrun_mpi</span> <span> </span>-<span>deffnm</span></p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">where x is the number of processors being used.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">From the machine output, it seemed that the work had indeed been distributed to multiple processors e.g. -np 10:</p>
<p><span style="font-family:&#39;Courier New&#39;;font-size:9pt">NNODES=10, MYRANK=5, HOSTNAME=<span>beowulf</span><br>NODEID=4 <span>argc</span>=3<br>NNODES=10, MYRANK=1, HOSTNAME=<span>beowulf</span><br>NNODES=10, MYRANK=2, HOSTNAME=<span>beowulf</span><br>

NODEID=1 <span>argc</span>=3<br>NODEID=9 <span>argc</span>=3<br>NODEID=5 <span>argc</span>=3<br>NNODES=10, MYRANK=3, HOSTNAME=<span>beowulf</span><br>NNODES=10, MYRANK=7, HOSTNAME=<span>beowulf</span><br>NNODES=10, MYRANK=8, HOSTNAME=<span>beowulf</span><br>

NODEID=8 <span>argc</span>=3<br>NODEID=2 <span>argc</span>=3<br>NODEID=6 <span>argc</span>=3<br>NODEID=3 <span>argc</span>=3<br>NODEID=7 <span>argc</span>=3<br>Making 2D domain decomposition 5 x 1 x 2<br>starting <span>mdrun</span> &#39;PROTEIN&#39;<br>

1000 steps,<span>      </span>2.0 ps.</span></p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">The simulation system consists of 100581 atoms, the duration is 2ps (1000 steps). results obtained are as followed:</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal"><u>number of CPUs       Simulation time</u></p>
<p class="MsoNormal">1                                      13m28s</p>
<p class="MsoNormal">2                                         6m31s</p>
<p class="MsoNormal">3                                         7m33s</p>
<p class="MsoNormal">4                                          6m47s</p>
<p class="MsoNormal">5                                          7m48s</p>
<p class="MsoNormal">6                                          6m55s</p>
<p class="MsoNormal">7                                          7m36s</p>
<p class="MsoNormal">8                                          6m58s</p>
<p class="MsoNormal">9                                          7m15s</p>
<p class="MsoNormal">10                                        7m01s</p>
<p class="MsoNormal">15                                        7m27s</p>
<p class="MsoNormal">20                                        7m15s</p>
<p class="MsoNormal">30                                        7m42s</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Significant improvement in simulation time was only observed from -np 1 to 2.  As almost all (except -np = 1) complaint about load imbalance and PP:PME imbalance (the latter was  seen especially in those with larger -np value), I tried to increase the pme nodes by adding a -npme flag and entered a bigger number but the results either showed no improvement or worsened.</p>


<p class="MsoNormal"> </p>
<p class="MsoNormal">As I am new to gromacs, there might be some things that I&#39;d missed out/done incorrectly. Would really appreciate some input to this. Many thanks in advance!!</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">HW</p></div> &lt;&lt;
<p><font face="Arial" size="2"><a href="http://www.nottingham.edu.my" target="_blank">Email has been scanned for viruses by UNMC email management service</a></font></p>
 &gt;&gt;
</div><br>--<br>
gmx-users mailing list    <a href="mailto:gmx-users@gromacs.org">gmx-users@gromacs.org</a><br>
<a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-users</a><br>
Please search the archive at <a href="http://www.gromacs.org/search" target="_blank">http://www.gromacs.org/search</a> before posting!<br>
Please don&#39;t post (un)subscribe requests to the list. Use the<br>
www interface or send it to <a href="mailto:gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>.<br>
--
ORNL/UT Center for Molecular Biophysics  cmb.ornl.gov

865-241-1537, ORNL PO BOX 2008 MS6309