<html><head></head><body><div style="font-family: Verdana;font-size: 12.0px;"><div>
<div>
<div>Is there any progress on the OpenCL version of GROMACS that is listed on the developer site? Just asking. One thing I ran across: you can get boards with integrated GPU arrays (for example, Russian board designs manufactured in China) for about the same price and roughly 10x the computational speed, but those boards would be largely OpenCL-dependent.</div>
<div> </div>
<div>Stephan Watkins</div>
<div name="quote" style="margin:10px 5px 5px 10px; padding: 10px 0 10px 10px; border-left:2px solid #C3D9E5; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">
<div style="margin:0 0 10px 0;"><b>Gesendet:</b> Donnerstag, 16. Oktober 2014 um 20:21 Uhr<br/>
<b>Von:</b> "Szilárd Páll" <pall.szilard@gmail.com><br/>
<b>An:</b> "Discussion list for GROMACS users" <gmx-users@gromacs.org><br/>
<b>Betreff:</b> Re: [gmx-users] MD workstation</div>
<div name="quoted-content">On Thu, Oct 16, 2014 at 3:35 PM, Hadházi Ádám <hadadam@gmail.com> wrote:<br/>
> May I ask why your config is better than e.g.<br/>
><br/>
> 2x Intel Xeon E5-2620 CPUs (2x$405)<br/>
> 4x GTX 970(4x $330)<br/>
> 1x Z9PE-D8 WS ($449)<br/>
> 64 GB DDR3 ($600)<br/>
> PSU 1600W, ($250)<br/>
> standard 2TB 5400rpm drive, ($85)<br/>
> total: (~$3500)<br/>
<br/>
Mirco's suggested setup will give much higher *aggregate* simulation<br/>
throughput. GROMACS uses both CPUs and GPUs and requires a balanced<br/>
resource mix to run efficiently (less so if you don't use PME). The<br/>
E5-2620 is rather slow; it is a good match for a single GTX 970,<br/>
perhaps even a 980, but it will be the limiting factor with two<br/>
GPUs per socket.<br/>
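<br/>
As a rough sketch of what a balanced launch looks like on such a dual<br/>
E5-2620 node with two GPUs (assuming GROMACS 5.0; the device ids and<br/>
the -deffnm name are just placeholders):<br/>
<br/>
# one thread-MPI rank per GPU, 6 OpenMP threads per rank (2 x 6 cores)<br/>
gmx mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -deffnm topol<br/>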
<br/>
> As for your setup... can I use those 4 nodes in parallel for 1 long<br/>
> simulation or 1 FEP job?<br/>
<br/>
Not without a fast network.<br/>
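<br/>
With a fast interconnect (e.g. InfiniBand) and an MPI build, a single<br/>
run across the four boxes would look roughly like this (the host file<br/>
and rank/thread counts here are assumptions, not a tested setup):<br/>
<br/>
mpirun -np 4 -hostfile hosts gmx_mpi mdrun -ntomp 8 -deffnm topol<br/>
<br/>
Over plain Gigabit Ethernet this will not scale.<br/>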
<br/>
> What are the weak points of my workstation?<br/>
<br/>
The CPU. Desktop IVB-E or HSW-E chips (e.g. i7 49XX, 59XX) will give<br/>
much better performance per dollar.<br/>
<br/>
Also note:<br/>
* your smaller 25k MD setup will not scale across multiple GPUs;<br/>
* in FEP runs, by sharing a GPU between multiple runs you can<br/>
increase the aggregate throughput by quite a lot (see the sketch below)!<br/>
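<br/>
A minimal sketch of such sharing for two lambda windows (assuming<br/>
GROMACS 5.0, an 8-core CPU, and a single GPU with id 0; the lambda<br/>
file names are placeholders):<br/>
<br/>
# two independent runs share GPU 0, pinned to disjoint core sets<br/>
gmx mdrun -ntmpi 1 -ntomp 4 -gpu_id 0 -pin on -pinoffset 0 -deffnm lambda00 &<br/>
gmx mdrun -ntmpi 1 -ntomp 4 -gpu_id 0 -pin on -pinoffset 4 -deffnm lambda01 &<br/>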
<br/>
Cheers,<br/>
--<br/>
Szilárd<br/>
<br/>
> Best,<br/>
> Adam<br/>
><br/>
><br/>
> 2014-10-16 23:00 GMT+10:00 Mirco Wahab <mirco.wahab@chemie.tu-freiberg.de>:<br/>
><br/>
>> On 16.10.2014 14:38, Hadházi Ádám wrote:<br/>
>><br/>
>>>>> Dear GMX Staff and Users,<br/>
>>>>> I am planning to buy a new MD workstation with 4 GPU (GTX 780 or 970)<br/>
>>>>> or 3<br/>
>>>>> GPU (GTX 980) for 4000$.<br/>
>>>>> Could you recommend me a setup for this machine?<br/>
>>>>> 1 or 2 CPU is necessary? 32/64 GB memory? Cooling? Power?<br/>
>>>>><br/>
>>>><br/>
>>>> - What system (size, type, natoms) do you plan to simulate?<br/>
>>>><br/>
>>>> - Do you have to run *only one single simulation* over long time<br/>
>>>> or *some similar simulations* with similar parameters?<br/>
>>>><br/>
>>><br/>
>>> The systems are kind of a mix:<br/>
>>> MD:<br/>
>>> smallest system: 25k atoms, spc/tip3p, 2fs/4fs, NPT, simulation time:<br/>
>>> 500-1000ns<br/>
>>> biggest system: 150k atoms, spc/tip3p, 2fs/4fs, NPT, simulation time:<br/>
>>> 100-1000ns<br/>
>>> FEP (free energy perturbation): ligand functional group mutation<br/>
>>> 25k-150k atoms, in complex and in water simulations, production<br/>
>>> simulation:<br/>
>>> 5ns for each lambda window (number of windows: 12)<br/>
>>><br/>
>><br/>
>> In this situation, I'd probably use 4 machines for $1000 each,<br/>
>> putting in each:<br/>
>> - consumer i7/4790(K), $300<br/>
>> - any 8GB DDR3, $75-$80<br/>
>> - standard Z97 board, $100<br/>
>> - standard PSU 450W, $40<br/>
>> - standard 2TB 5400rpm drive, $85<br/>
>><br/>
>> The rest of the money (4 x $395) I'd use for 4 graphics<br/>
>> cards, probably 3 GTX-970 ($330) and one GTX-980 ($550) -<br/>
>> depending on availability, the actual prices, and<br/>
>> your detailed budget.<br/>
>><br/>
>> YMMV,<br/>
>><br/>
>><br/>
>> Regards<br/>
>><br/>
>> M.<br/>
>><br/>
--<br/>
Gromacs Users mailing list<br/>
<br/>
* Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List" target="_blank">http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List</a> before posting!<br/>
<br/>
* Can't post? Read <a href="http://www.gromacs.org/Support/Mailing_Lists" target="_blank">http://www.gromacs.org/Support/Mailing_Lists</a><br/>
<br/>
* For (un)subscribe requests visit<br/>
<a href="https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users" target="_blank">https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users</a> or send a mail to gmx-users-request@gromacs.org.</div>
</div>
</div>
</div></div></body></html>