<table cellspacing="0" cellpadding="0" border="0" ><tr><td valign="top" style="font: inherit;">Dear Szilárd,<div><br></div><div>Many thanks for your reply. I've got following reply from my question from <span class="Apple-style-span" style="font-family: arial, helvetica, sans-serif; line-height: 16px; ">Linux-PowerEdge mailing list. I was wondering which one applies to GROMACS parallel computation (I mean CPU bound, disk bound, etc).<span class="Apple-style-span" style="font-family: arial; line-height: normal; "> </span></span></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><br></font></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><br></font></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><br></font></div><div><span class="Apple-style-span" style="font-family: arial, helvetica, sans-serif; line-height: 16px; ">>There shouldn't be any linux
compatibility issues with any PowerEdge <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>system. At Duke we have a large compute cluster using a variety of <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>PowerEdge blades (including M710's) all running on linux.<br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; "><br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>What interconnect are you using? And are your jobs memory bound, cpu <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>bound, disk bound, or network bound?<br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; "><br style="line-height: 1.2em;
outline-style: none; outline-width: initial; outline-color: initial; ">>If your computation is more dependent on the interlink and communication <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>between the nodes, its more important to worry about your interconnect.<br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; "><br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>If Inter-node communication is highly important, you may also want to <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>consider something like the M910. The M910 can be configured with 4 <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>8-core CPUs, thus giving you 32 NUMA-connected cores. Or
64 logical <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>processors if your job is one that can benefit from HT. Note that when <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>going with more cores-per chip, your max clockrate tends to be lower. <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>As such, its really important to know how your jobs are bound so that <br style="line-height: 1.2em; outline-style: none; outline-width: initial; outline-color: initial; ">>you can order a cluster configuration that'll be best for that job.</span></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><span class="Apple-style-span" style="line-height: 16px;"><br></span></font></div><div><font class="Apple-style-span" face="arial, helvetica,
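For context, one quick way to see how a GROMACS job is bound is to benchmark the same system on an increasing number of blades and compare the ns/day figures that mdrun reports in md.log. Below is a minimal Python sketch (the core counts and ns/day numbers are made-up placeholders, not real benchmarks) that turns such timings into speedup and parallel-efficiency estimates; efficiency dropping off sharply as nodes are added usually points to a network-bound (interconnect-limited) job.

# Minimal sketch: estimate GROMACS parallel scaling from benchmark timings.
# The (cores, ns/day) pairs below are placeholders -- replace them with the
# performance figures reported in md.log for your own short test runs.

benchmarks = [
    (8, 4.0),    # one M710 blade: two quad-core CPUs
    (16, 7.6),   # two blades
    (32, 13.0),  # four blades
]

base_cores, base_perf = benchmarks[0]

for cores, perf in benchmarks:
    speedup = perf / base_perf     # measured speedup vs. one blade
    ideal = cores / base_cores     # perfect linear scaling
    efficiency = speedup / ideal   # below 1.0 means lost performance
    print(f"{cores:3d} cores: {perf:5.1f} ns/day, "
          f"speedup {speedup:.2f}x (ideal {ideal:.0f}x), "
          f"efficiency {efficiency:.0%}")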
sans-serif"><span class="Apple-style-span" style="line-height: 16px;"><br></span></font></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><span class="Apple-style-span" style="line-height: 16px;">Cheers, Maryam<br></span></font><br>--- On <b>Tue, 18/1/11, Szilárd Páll <i><szilard.pall@cbr.su.se></i></b> wrote:<br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px;"><br>From: Szilárd Páll <szilard.pall@cbr.su.se><br>Subject: Re: [gmx-users] Dell PowerEdge M710 with Intel Xeon 5667 processor<br>To: "Discussion list for GROMACS users" <gmx-users@gromacs.org><br>Received: Tuesday, 18 January, 2011, 10:31 PM<br><br><div class="plainMail">Hi,<br><br>Although the question is a bit fuzzy, I might be able to give you a<br>useful answer.<br><br>>From what I see on the whitepaper of the Poweredge m710 baldes, among<br>other (not so interesting :) OS-es, Dell provides
the options of Red<br>Had or SUSE Linux as factory installed OS-es. If you have any of<br>these, you can rest assured that Gromacs will run just fine -- on a<br>single node.<br><br>Parallel runs are little bit different story and depends on the<br>interconnect. If you have Infiniband, than you'll have a very good<br>scaling over multiple nodes. This is true especially if it's the I/O<br>cards are the Mellanox QDR-s.<br><br>Cheers,<br>--<br>Szilárd<br><br><br>On Tue, Jan 18, 2011 at 4:48 PM, Maryam Hamzehee<br><<a ymailto="mailto:maryam_h_7860@yahoo.com" href="/mc/compose?to=maryam_h_7860@yahoo.com">maryam_h_7860@yahoo.com</a>> wrote:<br>><br>> Dear list,<br>><br>> I will appreciate it if I can get your expert opinion on doing parallel computation (I will use GROMACS and AMBER molecular mechanics packages and some other programs like CYANA, ARIA and CNS to do structure calculations based on NMR experimental data) using a
cluster based on Dell PowerEdge M710 with Intel Xeon 5667 processor architecture which<br>> apparently each blade has two quad-core cpus. I was wondering if I can get some information about LINUX compatibility and parallel computation on this system.<br>> Cheers,<br>> Maryam<br>><br>> --<br>> gmx-users mailing list <a ymailto="mailto:gmx-users@gromacs.org" href="/mc/compose?to=gmx-users@gromacs.org">gmx-users@gromacs.org</a><br>> <a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-users</a><br>> Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/Search" target="_blank">http://www.gromacs.org/Support/Mailing_Lists/Search</a> before posting!<br>> Please don't post (un)subscribe requests to the list. Use the<br>> www interface or send it to <a ymailto="mailto:gmx-users-request@gromacs.org"
href="/mc/compose?to=gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>.<br>> Can't post? Read <a href="http://www.gromacs.org/Support/Mailing_Lists" target="_blank">http://www.gromacs.org/Support/Mailing_Lists</a><br>--<br>gmx-users mailing list <a ymailto="mailto:gmx-users@gromacs.org" href="/mc/compose?to=gmx-users@gromacs.org">gmx-users@gromacs.org</a><br><a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-users</a><br>Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/Search" target="_blank">http://www.gromacs.org/Support/Mailing_Lists/Search</a> before posting!<br>Please don't post (un)subscribe requests to the list. Use the<br>www interface or send it to <a ymailto="mailto:gmx-users-request@gromacs.org" href="/mc/compose?to=gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>.<br>Can't post? Read <a
href="http://www.gromacs.org/Support/Mailing_Lists" target="_blank">http://www.gromacs.org/Support/Mailing_Lists</a><br></div></blockquote></div></td></tr></table><br>