<DIV><SPAN>On 29/12/11, <B class=name>"Peter C. Lai" </B><pcl@uab.edu> wrote:</SPAN> </DIV><BLOCKQUOTE style="BORDER-LEFT: #00f 1px solid; PADDING-LEFT: 13px; MARGIN-LEFT: 0px" class=iwcQuote cite=mid:20111228215856.GT11254@cesium.hyperfine.info type="cite">What performance are you getting that you want to improve more?<br />Here's a datapoint from the last simulation I ran:<br /><br />Currently running gromacs 4.5.4 built with icc+fftw+openmpi on infiniband <br />qdr and I get about 9.7ns/day on 64 PP nodes with 4 PME nodes (68 total <br />2.66ghz X5650) on my 99113 atom system in single precision....</BLOCKQUOTE>
<DIV> </DIV><DIV>It is very likely you can do better by following grompp's advice about having one-third to one-quarter of your nodes doing PME. See manual 3.17.5.</DIV><DIV> </DIV><BLOCKQUOTE style="BORDER-LEFT: #00f 1px solid; PADDING-LEFT: 13px; MARGIN-LEFT: 0px" class=iwcQuote cite=mid:20111228215856.GT11254@cesium.hyperfine.info type="cite"><br /><br />I find that it is more important to optimize your PP/PME allocation than <br />microoptimizing the code...</BLOCKQUOTE>
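<DIV> </DIV><DIV>For instance, on a run like the one above, that split can be requested explicitly with mdrun's -npme option (the rank counts here are illustrative, not a recommendation; use the ratio grompp suggests for your own system):</DIV>

```shell
# Sketch: out of 68 MPI ranks, dedicate roughly 1/4 (here 17) to PME.
# 17 is illustrative; grompp's estimate for your .tpr is the better guide.
mpirun -np 68 mdrun -npme 17 -deffnm topol
```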
<DIV> </DIV><DIV>Yes, hence the existence of g_tune_pme and other tools. </DIV><DIV> </DIV><BLOCKQUOTE style="BORDER-LEFT: #00f 1px solid; PADDING-LEFT: 13px; MARGIN-LEFT: 0px" class=iwcQuote cite=mid:20111228215856.GT11254@cesium.hyperfine.info type="cite">I also find that at some point above 232 nodes (I don't remember what the exact<br />number is), mdrun will complain about the overhead it takes to communicate<br />energies if I am having it communicate energies every 5 steps; which is more a<br />reflection of a limitation of the infrastructure than of the code, too.</BLOCKQUOTE>
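<DIV> </DIV><DIV>For completeness, a g_tune_pme sketch that benchmarks a range of PME rank counts and then launches the fastest split it finds (topol.tpr is a placeholder for your own run input):</DIV>

```shell
# Sketch: g_tune_pme runs short benchmarks at several -npme values,
# reports the timings, and with -launch starts mdrun using the best one.
g_tune_pme -np 68 -s topol.tpr -launch
```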
<DIV> </DIV><DIV>I'd say this is more a reflection of the limitations of the model you've asked it to use. Per manual 7.3.8 you can control this cost with suitable choices for the nst* variables. You can judge best whether you want faster performance or higher accuracy in the implementation of your approximate model...</DIV><DIV> </DIV><DIV>Mark </DIV><DIV> </DIV><BLOCKQUOTE style="BORDER-LEFT: #00f 1px solid; PADDING-LEFT: 13px; MARGIN-LEFT: 0px" class=iwcQuote cite=mid:20111228215856.GT11254@cesium.hyperfine.info type="cite">
<DIV class="mimepart text plain"><br /><br />On 2011-12-27 06:48:23AM -0600, Mark Abraham wrote:<br />> On 12/27/2011 11:18 PM, Sudip Roy wrote:<br />> > Gromacs users,<br />> ><br />> > Please let me know what is the best option for gromacs compilation<br />> > (looking for better performance in INFINIBAND QDR systems)<br />> ><br />> > 1. Intel composer XE i.e. Intel compilers, mkl but open MPI library<br />> ><br />> > 2. Intel studio i.e. Intel compilers, mkl, and Intel MPI library<br />> <br />> GROMACS is strongly CPU-bound in a way that is rather insensitive to <br />> compilers and libraries. I would expect no strong difference between the <br />> above two - and icc+MKL+OpenMPI was only a few percent faster than <br />> gcc+FFTW+OpenMPI when I tested them on such a machine about two years ago.<br />> <br />> Mark<br />> <br />> -- <br />> gmx-users mailing list gmx-users@gromacs.org<br />> <a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target=l >http://lists.gromacs.org/mailman/listinfo/gmx-users</A><br />> Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/Search" target=l >http://www.gromacs.org/Support/Mailing_Lists/Search</A> before posting!<br />> Please don't post (un)subscribe requests to the list. Use the <br />> www interface or send it to gmx-users-request@gromacs.org.<br />> Can't post? Read <a href="http://www.gromacs.org/Support/Mailing_Lists" target=l >http://www.gromacs.org/Support/Mailing_Lists</A><br /><br />-- <br />==================================================================<br />Peter C. Lai | University of Alabama-Birmingham<br />Programmer/Analyst | KAUL 752A<br />Genetics, Div. 
of Research | 705 South 20th Street<br />pcl@uab.edu | Birmingham AL 35294-4461<br />(205) 690-0808 |<br />==================================================================<br /></DIV></BLOCKQUOTE>
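<DIV> </DIV><DIV>Regarding the nst* settings mentioned above, an .mdp fragment reducing how often energies are computed and communicated might look like this (values are purely illustrative; see manual 7.3.8 for the full list and defaults):</DIV>

```
; Illustrative only: compute/communicate energies less often
nstcalcenergy = 100    ; calculate energies every 100 steps, not every few
nstenergy     = 1000   ; write energies to the .edr file every 1000 steps
nstlog        = 1000   ; update the log file every 1000 steps
```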
<DIV> </DIV>