At the same time, I would emphasize that from a scaling point of view fewer cores per node are better than many, because all the processors on a node share the same network link. Also, the clock frequency of CPUs in 2-socket nodes tends to be higher than in 4+-socket configurations.
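
To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch in Python; the core counts, clock rates, and link bandwidth below are assumed values for illustration only, not specs of any particular PowerEdge model:

# Back-of-the-envelope only: all numbers below are assumptions for
# illustration, not measurements or vendor specs.
node_types = {
    "2-socket, 8 cores/node":  {"cores": 8,  "clock_ghz": 3.0, "link_gbps": 40.0},
    "4-socket, 32 cores/node": {"cores": 32, "clock_ghz": 2.0, "link_gbps": 40.0},
}

for name, node in node_types.items():
    # Every core on the node competes for the same network link.
    link_share = node["link_gbps"] / node["cores"]
    print(f"{name}: {link_share:.2f} Gbit/s of link per core at {node['clock_ghz']} GHz")
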
--
Szilárd

On Wed, Jan 19, 2011 at 10:30 PM, Mark Abraham <Mark.Abraham@anu.edu.au> wrote:

On 20/01/2011 3:33 AM, Maryam Hamzehee wrote:
Dear Szilárd,

Many thanks for your reply. I've got the following reply to my question from the Linux-PowerEdge mailing list. I was wondering which of these applies to GROMACS parallel computation (I mean CPU-bound, disk-bound, etc.).

In serial, GROMACS is very much CPU-bound, and a lot of work has gone into making the most of the CPU. In parallel, that CPU-optimization work is so effective that smallish packets of information have to be transferred regularly, without much possibility of effectively overlapping communication and computation, and so a low-latency communication network is essential in order to continue making effective use of all the CPUs. Something like InfiniBand or NUMAlink is definitely required.

Mark
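
To illustrate why latency, rather than bandwidth, tends to be the limiting factor for those frequent small transfers, here is a toy per-step cost model in Python; the message counts, sizes, latencies, and compute time are assumptions chosen only to make the argument concrete:

# Toy model: per-step communication cost = messages * latency + bytes / bandwidth.
# All numbers are illustrative assumptions, not measurements of GROMACS or any network.
step_compute_s = 2e-3           # assumed compute time per MD step
messages_per_step = 40          # assumed number of small messages per step
bytes_per_message = 8e3         # assumed message size (8 kB)

interconnects = {
    "Gigabit Ethernet": {"latency_s": 50e-6,  "bandwidth_Bps": 125e6},
    "InfiniBand QDR":   {"latency_s": 1.5e-6, "bandwidth_Bps": 4e9},
}

for name, net in interconnects.items():
    comm_s = messages_per_step * (net["latency_s"] + bytes_per_message / net["bandwidth_Bps"])
    print(f"{name}: {comm_s * 1e6:.0f} us of communication per step, "
          f"i.e. {comm_s / step_compute_s:.0%} of the assumed {step_compute_s * 1e3:.0f} ms of compute")

With the assumed numbers, the high-latency network spends more time communicating than computing, while the low-latency one adds only a few percent of overhead.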

> There shouldn't be any Linux compatibility issues with any PowerEdge
> system. At Duke we have a large compute cluster using a variety of
> PowerEdge blades (including M710s), all running Linux.
>
> What interconnect are you using? And are your jobs memory-bound,
> CPU-bound, disk-bound, or network-bound?
>
> If your computation is more dependent on the interconnect and
> communication between the nodes, it's more important to worry about
> your interconnect.
>
> If inter-node communication is highly important, you may also want to
> consider something like the M910. The M910 can be configured with
> 4 8-core CPUs, thus giving you 32 NUMA-connected cores, or 64 logical
> processors if your job is one that can benefit from HT. Note that when
> going with more cores per chip, your max clock rate tends to be lower.
> As such, it's really important to know how your jobs are bound so that
> you can order a cluster configuration that'll be best for that job.
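
For what it's worth, a small Linux-only Python sketch like the one below (it just parses /proc/cpuinfo) can show how many physical cores versus logical (HT) processors a node exposes, and which CPU model they report; it is an illustration, not tied to any particular blade:

# Linux-only sketch: count physical cores vs. logical processors from /proc/cpuinfo.
from collections import defaultdict

logical_cpus = 0
physical_cores = set()          # distinct (physical id, core id) pairs
models = defaultdict(int)
phys_id = core_id = None

with open("/proc/cpuinfo") as f:
    for line in f:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":
            logical_cpus += 1
        elif key == "physical id":
            phys_id = value
        elif key == "core id":
            core_id = value
        elif key == "model name":
            models[value] += 1
        elif key == "":             # blank line ends one processor entry
            if phys_id is not None and core_id is not None:
                physical_cores.add((phys_id, core_id))
            phys_id = core_id = None

if phys_id is not None and core_id is not None:
    physical_cores.add((phys_id, core_id))

print(f"logical processors: {logical_cpus}")
print(f"physical cores:     {len(physical_cores)}")
for model, count in models.items():
    print(f"{count} logical CPUs report: {model}")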

Cheers, Maryam

--- On Tue, 18/1/11, Szilárd Páll <szilard.pall@cbr.su.se> wrote:

From: Szilárd Páll <szilard.pall@cbr.su.se>
Subject: Re: [gmx-users] Dell PowerEdge M710 with Intel Xeon 5667 processor
To: "Discussion list for GROMACS users" <gmx-users@gromacs.org>
Received: Tuesday, 18 January, 2011, 10:31 PM

Hi,

Although the question is a bit fuzzy, I might be able to give you a useful answer.

From what I see in the whitepaper of the PowerEdge M710 blades, among other (not so interesting :) OSes, Dell provides the option of Red Hat or SUSE Linux as a factory-installed OS. If you have either of these, you can rest assured that GROMACS will run just fine -- on a single node.

Parallel runs are a somewhat different story and depend on the interconnect. If you have InfiniBand, then you'll have very good scaling over multiple nodes. This is especially true if the I/O cards are Mellanox QDR ones.

Cheers,
--
Szilárd
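
Before committing to a cluster layout, it can be worth measuring what the interconnect actually delivers with a simple MPI ping-pong between two nodes. The sketch below uses mpi4py and assumes your MPI launcher places ranks 0 and 1 on different nodes; the message size and iteration count are arbitrary illustrative choices:

# Minimal MPI ping-pong between ranks 0 and 1, e.g. launched with:
#   mpirun -np 2 python pingpong.py
# (getting the two ranks onto *different* nodes is up to your launcher/hostfile)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_iter = 1000
msg = bytearray(8 * 1024)       # 8 kB: roughly the "smallish packet" regime

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(n_iter):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=1)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    # Each iteration is one round trip, i.e. two one-way messages.
    print(f"one-way time per 8 kB message: {(t1 - t0) / (2 * n_iter) * 1e6:.1f} us")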

On Tue, Jan 18, 2011 at 4:48 PM, Maryam Hamzehee <maryam_h_7860@yahoo.com> wrote:
>
> Dear list,
>
> I would appreciate your expert opinion on doing parallel computation
> (I will use the GROMACS and AMBER molecular mechanics packages and
> some other programs like CYANA, ARIA and CNS to do structure
> calculations based on NMR experimental data) using a cluster based on
> Dell PowerEdge M710 blades with the Intel Xeon 5667 processor
> architecture, where apparently each blade has two quad-core CPUs. I
> was wondering if I can get some information about Linux compatibility
> and parallel computation on this system.
> Cheers,
> Maryam

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-request@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists