<div dir="ltr">Hi Carsten and Justin,<br>Let me interject here, as I have tried the options you suggested. Using Cut-off instead of PME as the coulombtype option, the run works fine on 24 processors; I then tried 60 processors, with the following results:<br>
<br>Result 1: For a 50 ps run on 24 processors, PME took 12:29 compared to 7:54 with cut-off.<br><br>Result 2: For a 500 ps run on 60 processors, PME gives the same segmentation fault as before, while cut-off produces a LINCS warning and exits after writing the intermediate step.pdb files.<br>
<br>Can you suggest any further options I could try for the scaling experiment?<br>I also tried the shuffle and sort options, but they did not help, as my system is simply one protein molecule in a water box (around 45,000 atoms).<br>
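For reference, the cut-off comparison only changes the electrostatics section of the .mdp file, roughly like this sketch (the cut-off radii below are placeholders, not the values actually used):

```
; electrostatics for the cut-off comparison run (placeholder values)
coulombtype  = Cut-off   ; was: PME
rcoulomb     = 1.4       ; a plain cut-off generally needs a longer radius than PME
rvdw         = 1.4
```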
The GROMACS version I am using is 3.3.3, and the hardware consists of nodes with quad-core 3.0 GHz Intel Xeon processors connected via InfiniBand.<br><br>With thanks,<br>Vivek<br><br><div class="gmail_quote">2008/9/26 Carsten Kutzner <span dir="ltr"><<a href="mailto:ckutzne@gwdg.de">ckutzne@gwdg.de</a>></span><br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hi Tiago,<br>
<br>
If you switch off PME and your system suddenly scales, then the<br>
problems are likely to result from bad MPI_Alltoall performance, so<br>
this is worth a check. If that is the case, there is a lot more information<br>
in the paper "Speeding up parallel GROMACS on high-latency<br>
networks" from 2007, a link to which you will also find on the<br>
GROMACS webpage.<br>
<br>
To track down the problem further, you can also compile GROMACS with<br>
MPE logging, for which you have to enable the #define USE_MPE macro at the<br>
beginning of mpelogging.h (you will need GROMACS version 4 for this, though).<br>
You will get a logfile that you can then view with Jumpshot. The MPE tools<br>
come with the MPICH MPI distribution.<br>
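As a rough recipe (the configure flags, binary names, and logfile name here are assumptions based on a typical GROMACS 4 / MPICH installation, not verified commands):

```
# 1. In the GROMACS 4 sources, enable the macro near the top of
#    include/mpelogging.h:     #define USE_MPE
# 2. Rebuild with MPI support:
#      ./configure --enable-mpi CC=mpicc
#      make && make install
# 3. Run as usual; MPI calls are now traced to a logfile:
#      mpirun -np 16 mdrun_mpi -s topol.tpr
# 4. View the MPI timeline with Jumpshot (bundled with MPICH's MPE tools):
#      jumpshot mdrun.clog2
```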
<br>
Carsten<br>
<br>
<br>
Tiago Marques wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d">
We currently have no funds available to migrate to InfiniBand, but we will in the future.<br>
<br>
I thought about doing interface bonding, but I don't believe that is really the problem here; there must be something I'm missing, since most applications scale well to 32 cores on GbE. I can't scale any application beyond 8 cores, though.<br>
<br>
Best regards, <br>
Tiago Marques<br>
<br>
<br></div><div class="Ih2E3d">
On Tue, Sep 23, 2008 at 6:30 PM, Diego Enry <<a href="mailto:diego.enry@gmail.com" target="_blank">diego.enry@gmail.com</a> <mailto:<a href="mailto:diego.enry@gmail.com" target="_blank">diego.enry@gmail.com</a>>> wrote:<br>
<br>
Tiago, you can try merging two network interfaces with "channel<br>
bonding"; it is native on all new (2.6.x) Linux kernels. You only need<br>
two network adapters (most dual-socket boards come with them) and two<br>
network switches (or two VLANs on the same switch).<br>
<br>
To tell you the truth, you will not see much improvement even with the<br>
latest GROMACS version (4beta). However, other software that may be<br>
used by your group, like NAMD or GAMESS, will benefit a lot from this<br>
approach (it almost doubles the network bandwidth).<br>
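For completeness, a minimal balance-rr bonding setup on a 2.6 kernel looks roughly like the sketch below (the interface names, address, and file location are assumptions and vary by distribution):

```
# /etc/modprobe.conf (sketch)
alias bond0 bonding
options bonding mode=balance-rr miimon=100

# Then bring up the bonded interface and enslave both NICs:
#   ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
#   ifenslave bond0 eth0 eth1
```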
<br>
The best solution for GROMACS is to migrate to InfiniBand. Go for it;<br>
it is not super expensive anymore.<br>
<br>
<br>
On Tue, Sep 23, 2008 at 1:48 PM, Jochen Hub <<a href="mailto:jhub@gwdg.de" target="_blank">jhub@gwdg.de</a><br></div><div class="Ih2E3d">
<mailto:<a href="mailto:jhub@gwdg.de" target="_blank">jhub@gwdg.de</a>>> wrote:<br>
> Tiago Marques wrote:<br>
>> I don't know how large the system is. I'm the cluster's system<br>
>> administrator and don't understand much of what's going on. The test<br>
>> was given to me by a person who works with it. I can ask him or look<br>
>> at it, if you can point me how to do it.<br>
><br>
> Hi,<br>
><br>
> you can count the number of atoms in the structure:<br>
><br>
> grep -c ATOM protein.pdb<br>
><br>
> Jochen<br>
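One caveat about the grep above: a bare "grep -c ATOM" also counts HETATM records (e.g. crystal waters) and any REMARK line containing the word, so anchoring the pattern at the start of the line is safer. A quick demonstration:

```shell
# Build a tiny demo PDB: one ATOM, one HETATM, one REMARK mentioning "ATOM"
printf '%s\n' \
  'ATOM      1  N   MET A   1      11.104  13.207   2.100  1.00  0.00' \
  'HETATM    2  O   HOH B   1       4.512   8.301   1.045  1.00  0.00' \
  'REMARK    ATOM records follow' > demo.pdb

grep -c ATOM demo.pdb                 # matches all three lines: 3
grep -c -E '^(ATOM|HETATM)' demo.pdb  # coordinate records only: 2
grep -c '^ATOM' demo.pdb              # protein atoms only: 1
```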
><br>
>><br>
>> Thanks, I will look at some of his posts.<br>
>><br>
>> Best regards,<br>
>><br>
>> Tiago Marques<br>
>><br>
>><br>
>> On Tue, Sep 23, 2008 at 4:03 PM, Jochen Hub <<a href="mailto:jhub@gwdg.de" target="_blank">jhub@gwdg.de</a><br></div><div><div></div><div class="Wj3C7c">
<mailto:<a href="mailto:jhub@gwdg.de" target="_blank">jhub@gwdg.de</a>>> wrote:<br>
>> Tiago Marques wrote:<br>
>>> Hi!<br>
>>><br>
>>> I've been using GROMACS on dual-socket quad-core Xeons with 8 GiB<br>
>>> of RAM, connected with Gigabit Ethernet, and I always seem to have<br>
>>> problems scaling beyond a single node.<br>
>>><br>
>>> When I run a test on 16 cores, it does run, but the result is often<br>
>>> slower than when running on only 8 cores on the same machine. The<br>
>>> best result I've managed is 16 cores being no slower than 8.<br>
>>><br>
>>> What am I missing here, or are the tests inappropriate to run over<br>
>>> more than one machine?<br>
>><br>
>> How large is your system? Which GROMACS version are you using?<br>
>><br>
>> And have a look at the messages by Carsten Kutzner on this list; he<br>
>> has written a lot about GROMACS scaling.<br>
>><br>
>> Jochen<br>
>><br>
>>> Best regards,<br>
>>><br>
>>> Tiago Marques<br>
>>><br>
>>><br>
>>><br>
>>><br>
------------------------------------------------------------------------<br>
>>><br>
>>> _______________________________________________<br>
>>> gmx-users mailing list <a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a><br></div></div>
<mailto:<a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a>><div class="Ih2E3d"><br>
>>> <a href="http://www.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://www.gromacs.org/mailman/listinfo/gmx-users</a><br>
>>> Please search the archive at <a href="http://www.gromacs.org/search" target="_blank">http://www.gromacs.org/search</a><br>
before posting!<br>
>>> Please don't post (un)subscribe requests to the list. Use the<br>
>>> www interface or send it to <a href="mailto:gmx-users-request@gromacs.org" target="_blank">gmx-users-request@gromacs.org</a><br></div>
<mailto:<a href="mailto:gmx-users-request@gromacs.org" target="_blank">gmx-users-request@gromacs.org</a>>.<div class="Ih2E3d"><br>
>>> Can't post? Read <a href="http://www.gromacs.org/mailing_lists/users.php" target="_blank">http://www.gromacs.org/mailing_lists/users.php</a><br>
>><br>
>><br>
>> --<br>
>> ************************************************<br>
>> Dr. Jochen Hub<br>
>> Max Planck Institute for Biophysical Chemistry<br>
>> Computational biomolecular dynamics group<br>
>> Am Fassberg 11<br>
>> D-37077 Goettingen, Germany<br></div>
>> Email: jhub[at]<a href="http://gwdg.de" target="_blank">gwdg.de</a> <<a href="http://gwdg.de" target="_blank">http://gwdg.de</a>><div class="Ih2E3d"><br>
>> Tel.: +49 (0)551 201-2312<br>
>> ************************************************<br>
</div><div class="Ih2E3d"></div><div class="Ih2E3d">
>><br>
>><br>
>><br>
>><br>
------------------------------------------------------------------------<br>
>><br>
</div><div class="Ih2E3d"></div><div class="Ih2E3d">
><br>
><br>
> --<br>
> ************************************************<br>
> Dr. Jochen Hub<br>
> Max Planck Institute for Biophysical Chemistry<br>
> Computational biomolecular dynamics group<br>
> Am Fassberg 11<br>
> D-37077 Goettingen, Germany<br></div>
> Email: jhub[at]<a href="http://gwdg.de" target="_blank">gwdg.de</a> <<a href="http://gwdg.de" target="_blank">http://gwdg.de</a>><div class="Ih2E3d"><br>
> Tel.: +49 (0)551 201-2312<br>
> ************************************************<br>
</div><div class="Ih2E3d"></div><div class="Ih2E3d">
><br>
<br>
<br>
<br>
--<br>
Diego Enry B. Gomes<br>
Laboratório de Modelagem e Dinamica Molecular<br>
Universidade Federal do Rio de Janeiro - Brasil.<br>
</div><div class="Ih2E3d"></div><div class="Ih2E3d">
<br>
<br>
<br>
<br>
------------------------------------------------------------------------<br>
<br>
</div></blockquote>
<br>
-- <br>
Dr. Carsten Kutzner<div class="Ih2E3d"><br>
Max Planck Institute for Biophysical Chemistry<br></div>
Theoretical and Computational Biophysics Department<br>
Am Fassberg 11<br>
37077 Goettingen, Germany<br>
Tel. +49-551-2012313, Fax: +49-551-2012302<br>
<a href="http://www.mpibpc.mpg.de/home/grubmueller/" target="_blank">www.mpibpc.mpg.de/home/grubmueller/</a><br>
<a href="http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne/" target="_blank">www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne/</a><div><div></div><div class="Wj3C7c"><br>
</div></div></blockquote></div><br></div>