    On 20/01/2011 3:33 AM, Maryam Hamzehee wrote:
    <blockquote cite="mid:128995.4423.qm@web76912.mail.sg1.yahoo.com"
      type="cite">
      <table border="0" cellpadding="0" cellspacing="0">
        <tbody>
          <tr>
            <td style="font: inherit;" valign="top">Dear&nbsp;Szil&aacute;rd,
              <div><br>
              </div>
              <div>Many thanks for your reply. I've got following reply
                from my question from&nbsp;<span class="Apple-style-span"
                  style="font-family: arial,helvetica,sans-serif;
                  line-height: 16px;">Linux-PowerEdge mailing list. I
                  was wondering which one applies to GROMACS parallel
                  computation (I mean CPU bound, disk bound, etc).<span
                    class="Apple-style-span" style="font-family: arial;
                    line-height: normal;"> <br>
                  </span></span></div>
            </td>
          </tr>
        </tbody>
      </table>
    </blockquote>
    <br>
In serial, GROMACS is very much CPU-bound, and a lot of work has gone
into making the most of the CPU. In parallel, that CPU-optimization
work is so effective that smallish packets of information have to be
transferred regularly, with little opportunity to overlap communication
and computation, so a low-latency communication network is essential
to keep making effective use of all the CPUs. Something like InfiniBand
or NUMAlink is definitely required.
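
If you want a feel for what "low latency" means on your own hardware
before buying, a tiny MPI ping-pong test is the usual way to measure
it; gigabit Ethernet typically shows tens of microseconds per message,
while InfiniBand is down in the low single digits. The sketch below is
only an illustration (it assumes a working MPI installation with
mpicc/mpirun; the file name pingpong.c is just a placeholder), not
anything GROMACS-specific.

/*
 * Minimal MPI ping-pong latency sketch: estimates the small-message
 * latency between rank 0 and rank 1, which is the quantity a
 * low-latency interconnect improves over plain Ethernet.
 *
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int reps = 10000;  /* round trips to average over */
    char byte = 0;           /* one-byte payload: latency, not bandwidth */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f microseconds\n",
               (t1 - t0) / (2.0 * reps) * 1e6);

    MPI_Finalize();
    return 0;
}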

Mark

> > There shouldn't be any Linux compatibility issues with any PowerEdge
> > system.  At Duke we have a large compute cluster using a variety of
> > PowerEdge blades (including M710s), all running on Linux.
> >
> > What interconnect are you using?  And are your jobs memory bound,
> > CPU bound, disk bound, or network bound?
> >
> > If your computation is more dependent on the interlink and
> > communication between the nodes, it's more important to worry about
> > your interconnect.
> >
> > If inter-node communication is highly important, you may also want
> > to consider something like the M910.  The M910 can be configured
> > with four 8-core CPUs, giving you 32 NUMA-connected cores, or 64
> > logical processors if your job is one that can benefit from HT.
> > Note that when going with more cores per chip, your max clock rate
> > tends to be lower.  As such, it's really important to know how your
> > jobs are bound so that you can order a cluster configuration that'll
> > be best for that job.
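
(An aside on the core-count arithmetic above: on Linux you can confirm
what the kernel actually exposes -- the online logical processors, i.e.
64 with HT enabled on a four-socket 8-core M910, and the number of NUMA
nodes -- with a few lines of C reading standard sysfs paths. This is
just a sketch; tools such as numactl --hardware or hwloc's lstopo report
the same thing.)

/*
 * Linux-only sketch: print the number of online logical processors and
 * the number of NUMA nodes the kernel reports, by counting the
 * node0, node1, ... directories under /sys/devices/system/node.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs online */
    int nodes = 0;

    DIR *d = opendir("/sys/devices/system/node");
    if (d) {
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            /* count entries named node0, node1, ... */
            if (strncmp(e->d_name, "node", 4) == 0 &&
                isdigit((unsigned char)e->d_name[4]))
                nodes++;
        }
        closedir(d);
    }

    printf("logical processors online: %ld\n", cpus);
    printf("NUMA nodes reported:       %d\n", nodes > 0 ? nodes : 1);
    return 0;
}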
              <div><font class="Apple-style-span" face="arial,
                  helvetica, sans-serif"><span class="Apple-style-span"
                    style="line-height: 16px;"><br>
                  </span></font></div>
              <div><font class="Apple-style-span" face="arial,
                  helvetica, sans-serif"><span class="Apple-style-span"
                    style="line-height: 16px;"><br>
                  </span></font></div>
              <div><font class="Apple-style-span" face="arial,
                  helvetica, sans-serif"><span class="Apple-style-span"
                    style="line-height: 16px;">Cheers, Maryam<br>
                  </span></font><br>
                --- On <b>Tue, 18/1/11, Szil&aacute;rd P&aacute;ll <i><a class="moz-txt-link-rfc2396E" href="mailto:szilard.pall@cbr.su.se">&lt;szilard.pall@cbr.su.se&gt;</a></i></b>
                wrote:<br>
                <blockquote style="border-left: 2px solid rgb(16, 16,
                  255); margin-left: 5px; padding-left: 5px;"><br>
                  From: Szil&aacute;rd P&aacute;ll <a class="moz-txt-link-rfc2396E" href="mailto:szilard.pall@cbr.su.se">&lt;szilard.pall@cbr.su.se&gt;</a><br>
                  Subject: Re: [gmx-users] Dell PowerEdge M710 with
                  Intel Xeon 5667 processor<br>
                  To: "Discussion list for GROMACS users"
                  <a class="moz-txt-link-rfc2396E" href="mailto:gmx-users@gromacs.org">&lt;gmx-users@gromacs.org&gt;</a><br>
                  Received: Tuesday, 18 January, 2011, 10:31 PM<br>
                  <br>
                  <div class="plainMail">Hi,<br>
                    <br>
                    Although the question is a bit fuzzy, I might be
                    able to give you a<br>
                    useful answer.<br>
                    <br>
                    &gt;From what I see on the whitepaper of the
                    Poweredge m710 baldes, among<br>
                    other (not so interesting :) OS-es, Dell provides
                    the options of Red<br>
                    Had or SUSE Linux as factory installed OS-es. If you
                    have any of<br>
                    these, you can rest assured that Gromacs will run
                    just fine -- on a<br>
                    single node.<br>
                    <br>
                    Parallel runs are little bit different story and
                    depends on the<br>
                    interconnect. If you have Infiniband, than you'll
                    have a very good<br>
                    scaling over multiple nodes. This is true especially
                    if it's the I/O<br>
                    cards are the Mellanox QDR-s.<br>
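
(A quick sanity check worth doing before judging multi-node scaling:
have every MPI rank report the host it landed on, so you know the job
really spans the blades you think it does. The sketch below assumes a
working MPI installation; the file name whereami.c is only a
placeholder.)

/*
 * Each MPI rank prints the node it is running on, so a multi-node
 * launch can be verified before interpreting scaling results.
 *
 *   mpicc whereami.c -o whereami
 *   mpirun -np 16 ./whereami
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Get_processor_name(host, &namelen);

    printf("rank %d of %d running on %s\n", rank, nranks, host);

    MPI_Finalize();
    return 0;
}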

> > Cheers,
> > --
> > Szilárd
> >
> > On Tue, Jan 18, 2011 at 4:48 PM, Maryam Hamzehee
> > <maryam_h_7860@yahoo.com> wrote:
> > >
> > > Dear list,
> > >
> > > I would appreciate your expert opinion on doing parallel
> > > computation (I will use the GROMACS and AMBER molecular mechanics
> > > packages, and some other programs like CYANA, ARIA and CNS, to do
> > > structure calculations based on NMR experimental data) using a
> > > cluster based on the Dell PowerEdge M710 with the Intel Xeon 5667
> > > processor architecture, in which apparently each blade has two
> > > quad-core CPUs. I was wondering if I can get some information
> > > about Linux compatibility and parallel computation on this system.
> > > Cheers,
> > > Maryam