<br><br><div class="gmail_quote">2008/11/11 Justin A. Lemkul <span dir="ltr"><<a href="mailto:jalemkul@vt.edu">jalemkul@vt.edu</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d"><br>
<br>
vivek sharma wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi Martin,<br>
I am using InfiniBand here, with a speed of more than 10 Gbps. Can you suggest some options to scale better in this case?<br>
<br>
</blockquote>
<br></div>
What % imbalance is being reported in the log file? What fraction of the load is being assigned to PME, from grompp? How many processors are you assigning to the PME calculation? Are you using dynamic load balancing?</blockquote>
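<div>For readers following the thread: the imbalance figure Justin asks about is printed by mdrun in md.log, and the PME load fraction is printed by grompp. A minimal, hedged sketch of pulling the imbalance out of a log dump — the exact line format ("Average load imbalance: … %") is an assumption based on GROMACS 4-era logs, so check your own md.log:</div>

```python
import re

def load_imbalance(log_text):
    # Look for the GROMACS 4-style summary line, e.g.
    # "Average load imbalance: 12.5 %". Returns None if the line is absent.
    m = re.search(r"Average load imbalance:\s*([\d.]+)\s*%", log_text)
    return float(m.group(1)) if m else None

# Fabricated log excerpt, for illustration only:
sample = "D O M A I N   D E C O M P O S I T I O N\nAverage load imbalance: 12.5 %\n"
print(load_imbalance(sample))  # -> 12.5
```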
<div><br>Thanks, everybody, for your useful suggestions.<br>What do you mean by the % imbalance reported in the log file? I don't know how to assign a specific load to PME, but I can see that around 37% of the computation is being used by PME.<br>
I am not assigning PME nodes separately. I have no idea what dynamic load balancing is, or how to use it.<br><br>Looking forward to your answers.<br><br>With thanks,<br>Vivek<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
<br>
All of these factors affect performance.<br>
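<div>For context: the PME load fraction that grompp estimates follows from the electrostatics settings in the .mdp file, while the dedicated-PME-node and dynamic-load-balancing switches Justin mentions are mdrun command-line options (-npme and -dlb in GROMACS 4-era builds; check `mdrun -h` for your version). A hedged example .mdp fragment — values are illustrative, not tuned recommendations:</div>

```
; Electrostatics settings that drive grompp's PME mesh load estimate.
; Illustrative values only -- tune for your system.
coulombtype      = PME
rcoulomb         = 1.0    ; real-space cut-off (nm)
fourierspacing   = 0.12   ; PME grid spacing (nm); a coarser grid means less PME work
pme_order        = 4      ; interpolation order
```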
<br>
-Justin<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
With Thanks,<br>
Vivek<br>
<br>
2008/11/11 Martin Höfling <<a href="mailto:martin.hoefling@gmx.de" target="_blank">martin.hoefling@gmx.de</a>><div><div>
</div><div class="Wj3C7c"><br>
<br>
On Tuesday, 11 November 2008 at 12:06:06, vivek sharma wrote:<br>
<br>
<br>
> I have also tried scaling gromacs for a number of nodes... but was not<br>
> able to optimize it beyond 20 processors... on 20 nodes, i.e. 1 processor per<br>
<br>
As mentioned before, performance strongly depends on the type of<br>
interconnect you're using between your processes: shared memory,<br>
Ethernet, InfiniBand, NUMAlink, whatever...<br>
<br>
I assume you're using Ethernet (100/1000 MBit?); you can tune here to<br>
some extent, as described in:<br>
<br>
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.;<br>
de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS on<br>
high-latency networks. Journal of Computational Chemistry, 2007<br>
<br>
...but be aware that the principal limitations of Ethernet remain. To get<br>
around them, you might consider investing in the interconnect. If you<br>
can get by with &lt;16 cores, shared-memory nodes will give you the<br>
"biggest bang for the buck".<br>
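<div>Martin's point about high-latency networks can be made concrete with a back-of-envelope model: per-step communication cost ≈ (messages × latency) + (bytes ÷ bandwidth). The message count, traffic volume, and latency figures below are rough assumptions for illustration, not GROMACS measurements:</div>

```python
def step_comm_time(n_messages, total_bytes, latency_s, bandwidth_bytes_per_s):
    # Crude per-MD-step cost model: a latency term plus a bandwidth term.
    return n_messages * latency_s + total_bytes / bandwidth_bytes_per_s

# Assumed figures: ~40 messages and ~2 MB of traffic per step.
gbe = step_comm_time(40, 2e6, 50e-6, 125e6)   # GbE: ~50 us MPI latency, 1 Gbit/s
ib  = step_comm_time(40, 2e6, 2e-6, 1.25e9)   # IB:  ~2 us latency, 10 Gbit/s
print(f"GbE: {gbe*1e3:.2f} ms/step, IB: {ib*1e3:.2f} ms/step")
```

<div>On these assumed numbers, a gigabit-Ethernet step spends roughly ten times longer communicating than an InfiniBand one, which is why scaling stalls much earlier on Ethernet.</div>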
<br>
Best<br>
Martin<br>
_______________________________________________<br>
gmx-users mailing list <a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a><br></div></div>
<div class="Ih2E3d"><br>
<a href="http://www.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://www.gromacs.org/mailman/listinfo/gmx-users</a><br>
Please search the archive at <a href="http://www.gromacs.org/search" target="_blank">http://www.gromacs.org/search</a> before<br>
posting!<br>
Please don't post (un)subscribe requests to the list. Use the<br>
www interface or send it to <a href="mailto:gmx-users-request@gromacs.org" target="_blank">gmx-users-request@gromacs.org</a>.<br></div>
<div class="Ih2E3d"><br>
Can't post? Read <a href="http://www.gromacs.org/mailing_lists/users.php" target="_blank">http://www.gromacs.org/mailing_lists/users.php</a><br>
<br>
<br>
<br></div>
------------------------------------------------------------------------<div class="Ih2E3d"><br>
<br>
</div></blockquote>
<br><div class="Ih2E3d">
-- <br>
========================================<br>
<br>
Justin A. Lemkul<br>
Graduate Research Assistant<br>
Department of Biochemistry<br>
Virginia Tech<br>
Blacksburg, VA<br>
jalemkul[at]<a href="http://vt.edu" target="_blank">vt.edu</a> | (540) 231-9080<br>
<a href="http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin" target="_blank">http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin</a><br>
<br>
========================================<br>
</div><div><div></div><div class="Wj3C7c">
</div></div></blockquote></div><br>