<div dir="ltr"><p>Hi.</p><p>I've checked with the person who did the test and the system is 36000 atoms large, it seems small, doesn't it? Maybe that's the problem, coupled with the high latency of GbE connection. </p>
What I find strange, though, is that for many computers in the Top500
this kind of interconnect is apparently sufficient.

Best regards,

Tiago Marques


On Tue, Sep 23, 2008 at 5:04 PM, Tiago Marques <a28427@ua.pt> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div dir="ltr">I don't know how large the system is. I'm the cluster's system administrator and don't understand much of what's going on. The test was given to me by a person who works with it. I can ask him or look at it, if you can point me how to do it.<br>
> Thanks, I will look at some of his posts.
>
> Best regards,
>
> Tiago Marques
>
> On Tue, Sep 23, 2008 at 4:03 PM, Jochen Hub <jhub@gwdg.de> wrote:
> Tiago Marques wrote:
> > Hi!
> >
> > I've been using GROMACS on dual-socket quad-core Xeons with 8 GiB of
> > RAM, connected with Gigabit Ethernet, and I always seem to have
> > problems scaling beyond a single node.
> >
> > When I run a test on 16 cores it does run, but the result is often
> > slower than when running on only 8 cores on the same machine. The
> > best result I've managed is for 16 cores to be no slower than 8.
> >
> > What am I missing here, or are the tests inappropriate to run across
> > more than one machine?
>
> How large is your system? Which GROMACS version are you using?
>
> And have a look at the messages by Carsten Kutzner on this list; he
> wrote a lot about GROMACS scaling.
>
> Jochen
>
> > Best regards,
> >
> > Tiago Marques
> --
> Dr. Jochen Hub
> Max Planck Institute for Biophysical Chemistry
> Computational biomolecular dynamics group
> Am Fassberg 11
> D-37077 Goettingen, Germany
> Email: jhub[at]gwdg.de
> Tel.: +49 (0)551 201-2312
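(To put numbers on the 8-core vs 16-core comparison discussed above, the
quickest check is the performance summary mdrun writes at the end of
each run's md.log. A minimal Python sketch that just surfaces that
summary line from each log; the run directories are hypothetical
placeholders, and the exact columns of the "Performance:" line vary
between GROMACS versions:)

    # Print the final "Performance:" summary line from each run's md.log
    # so throughput on 8 and 16 cores can be compared side by side.
    # The paths below are hypothetical placeholders for your own runs.
    for log in ("run_8core/md.log", "run_16core/md.log"):
        with open(log) as f:
            perf = [line.strip() for line in f
                    if line.startswith("Performance:")]
        print(log, "->", perf[-1] if perf else "no Performance line found")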