[gmx-users] hardware setup for gmx

Szilárd Páll pall.szilard at gmail.com
Thu Jul 31 02:45:45 CEST 2014


On Thu, Jul 31, 2014 at 12:35 AM, Szilárd Páll <pall.szilard at gmail.com> wrote:
> Dear Michael,
>
> On Wed, Jul 30, 2014 at 1:49 PM, Michael Brunsteiner <mbx0009 at yahoo.com> wrote:
>>
>> Dear Szilard,
>>
>> sorry for bothering you again ... regarding performance tuning by adjusting
>> the VdW and Coulomb cut-offs you wrote:
>
> No worries, I am happy to answer on the list whenever I can!
>
>> The PP-PME load balancing - which acts as CPU-GPU load balancing - is
>> meant to take care of that by scaling only rcoulomb to adjust the
>> real vs reciprocal space load while keeping rvdw fixed. This is not
>> perfect, though, as short- and long-range interaction costs scale quite
>> differently, and it does not always work perfectly either.
>>
>> but here:
>>
>> http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x
>>
>> it says:
>>
>> "Made g_tune_pme honour the requirement that the van der Waals radius must
>> equal the Coulomb radius with Verlet cut-off scheme #1460"
>
> Unfortunately, that's a very crude and developer-centric commit
> message that IMO should not end up straight in the release notes, but
> let's not get off-topic.
>
>> does this mean that what you wrote does not apply to the Verlet cut-off
>> scheme?
>
> No, the difference is that the user can't set rvdw != rcoulomb in the
> mdp file, but mdrun itself can increase rcoulomb to shift work from
> long- to short-range calculation. So the above commit message refers
> to the mdp/tpr generated by g_tune_pme having to respect rvdw ==
> rcoulomb.
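
For reference, a typical g_tune_pme invocation is something along these
lines (file name and rank count are just placeholders, and the exact
options differ a bit between versions):

    g_tune_pme -np 16 -s topol.tpr -launch

It benchmarks a few PME-rank/grid combinations, keeping rvdw equal to
rcoulomb in the tpr files it writes for the Verlet scheme, and with
-launch starts the production run using the best settings it found.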

Note that you won't see much performance difference between a
simulation that ends up with rvdw < rcoulomb as a result of PP-PME load
balancing and one that uses a fixed rvdw == rcoulomb equal to the value
the former tuned the cut-off to. That's because the kernels don't
completely avoid calculating the LJ interactions in the rcoulomb-rvdw
range; they merely mask out those results.
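
For example, with the Verlet scheme the input has to look roughly like the
following (values are only placeholders), and mdrun's PP-PME load balancing
may then scale rcoulomb, together with the PME grid, upwards at run time:

    cutoff-scheme   = Verlet
    coulombtype     = PME
    rcoulomb        = 1.0   ; may be increased by the load balancing at run time
    rvdw            = 1.0   ; has to equal rcoulomb in the mdp input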

> Cheers,
> --
> Szilárd
>
>> cheers
>> mic
>>
>>
>> ===============================
>>
>>
>> Why be happy when you could be normal?
>>
>> ________________________________
>> From: Szilárd Páll <pall.szilard at gmail.com>
>> To: Michael Brunsteiner <mbx0009 at yahoo.com>
>> Sent: Thursday, July 17, 2014 2:00 AM
>> Subject: Re: [gmx-users] hardware setup for gmx
>>
>> Dear Michael,
>>
>> I'd appreciate it if you kept further discussion on the gmx-users list.
>>
>> On Thu, Jul 10, 2014 at 10:20 AM, Michael Brunsteiner <mbx0009 at yahoo.com>
>> wrote:
>>> Dear Szilard,
>>>
>>> Thank you for the two replies to my questions in gmx-users. I was
>>> glad to learn that free energy calculations + GPU now work! (are you aware
>>> of any tests/benchmarks there?)
>>
>> What kind of tests are you referring to? "make check" runs some
>> regression tests; benchmarks we don't have any. The performance will
>> depend greatly on the kind of system used; the actual free energy
>> kernels still run on the CPU (and aren't super-optimized either), so
>> the amount of GPU speedup will depend on the balance of normal vs
>> perturbed non-bondeds.
>>
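As a rough illustration, the relevant part of an mdp file for such a run
could look like the following (the lambda vector and output interval are
just placeholders); the perturbed pairs go through the CPU free energy
kernels while the remaining non-bondeds can be offloaded to the GPU:

    free-energy       = yes
    init-lambda-state = 0
    fep-lambdas       = 0.0 0.25 0.5 0.75 1.0
    sc-alpha          = 0.5
    sc-power          = 1
    nstdhdl           = 100
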
>>>
>>> about the hardware ... I figured as much ... what I keep worrying about is
>>> the CPU-GPU balance ... this can only be adjusted through the cut-off
>>> length.
>>
>> Or by algorithmic changes and optimizations. :)
>>
>>> With PME/Ewald one can easily optimize this for electrostatics within
>>> certain limits ... but the VdW cut-off is usually a parameter that comes
>>> with the force field, and when tinkering with this cut-off one might see
>>> unexpected consequences, but then this is perhaps a minor issue ...
>>
>> The PP-PME load balancing - which acts as CPU-GPU load balancing - is
>> meant to take care of that by scaling only rcoulomb to adjust the
>> real vs reciprocal space load while keeping rvdw fixed. This is not
>> perfect, though, as short- and long-range interaction costs scale quite
>> differently, and it does not always work perfectly either.
>>
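To observe or control this balancing in a GPU run, something along these
lines works with 4.6/5.0 (file name, thread count and GPU id are just
placeholders; in 5.0 the command is spelled "gmx mdrun"):

    mdrun -deffnm md -ntomp 6 -gpu_id 0              # PP-PME tuning on by default
    mdrun -deffnm md -ntomp 6 -gpu_id 0 -notunepme   # fixed cut-offs, for comparison

The tuned rcoulomb and PME grid are reported in the log file.
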
>> Cheers,
>> --
>> Szilárd
>>
>>
>>> thanks again & best regards
>>> michael
>>>
>>>
>>>
>>>
>>> ===============================
>>> Why be happy when you could be normal?
>>>
>>>
>>> On Tuesday, July 8, 2014 7:47 PM, Szilárd Páll <pall.szilard at gmail.com>
>>> wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>> Please have a look at the gmx-users history; there have been recent
>>> discussions about this topic.
>>>
>>> Brief answer:
>>> * If you only/mostly run GROMACS with GPUs, Intel CPUs with many fast cores
>>> combined with high-end GeForce GTX cards will give the best performance/$;
>>> e.g. currently an i7 4930K + GTX 770/780 is what I would recommend.
>>> * The ideal hardware balance depends on the kind of simulations you plan
>>> to do (e.g. system size, cut-off, # of concurrent simulations, etc.).
>>>
>>> Note, however, that you could get much better perf/buck on e.g. AMD CPUs
>>> with mid-range GTX cards, e.g. if you have many small simulations to run
>>> concurrently (and especially if you want rack-mountable OEM servers).
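
A minimal sketch of such a setup, assuming an MPI-enabled build called
mdrun_mpi, four input files sim0.tpr ... sim3.tpr and two GPUs in the node
(all names and counts are just placeholders):

    mpirun -np 4 mdrun_mpi -multi 4 -deffnm sim -ntomp 4 -gpu_id 0011

Here -multi runs the four simulations side by side and the -gpu_id string
maps two of them onto each GPU.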
>>>
>>> Cheers,
>>>
>>>
>>> On Thu, Jul 3, 2014 at 3:46 PM, Michael Brunsteiner <mbx0009 at yahoo.com>
>>> wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>> Can anybody recommend a hardware setup to perform MD runs (with PME) that
>>> has a good price-performance ratio? In particular I'd be interested in
>>> learning which combinations of CPU and GPU can be expected to provide a
>>> good FLOPS-per-dollar ratio with the more recent gmx versions (4.6 or 5.0)?
>>>
>>> thanks in advance for any recommendations!
>>>
>>> Michael
>>>
>>>
>>>
>>> PS: if your opinion is highly subjective and/or perhaps prone to make
>>> particular hardware vendors really sad, you might want to send your answer
>>> only to my email rather than to all gmx-users.
>>>
>>>
>>>
>>> ===============================
>>> Why be happy when you could be normal?
>>>
>>> --
>>> Páll Szilárd
>>>
>>>
>>
>>

