<div dir="ltr">Hi,<br><br><div>Node sharing cannot be automagically supported, because there's no "reliable" source of information except the user. This is nothing new (e.g. <a href="http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores">http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores</a>). mdrun can't know whether omp_get_num_procs or OMP_NUM_THREADS is more reliable in the general case (naturally, every job scheduler is different, and we can't even assume that there is a job scheduler that might do it right, e.g. the case of users sharing an in-house machine). However, if omp_get_num_procs reports fewer processors than the hardware detection does, then maybe we could use that value rather than assume that the full number of hardware threads is appropriate to use? We'd still report the difference to the user.</div><div><br></div><div>I agree with Berk that a scheduler that only used this mechanism to declare the number of available physical cores would be flawed; consider, e.g., a pthreads or TBB code.</div><div><br></div><div>Mark</div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jun 4, 2015 at 1:46 PM David van der Spoel <<a href="mailto:spoel@xray.bmc.uu.se">spoel@xray.bmc.uu.se</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 04/06/15 12:51, Berk Hess wrote:<br>
> PS There is something strange on that machine. If Gromacs detects 16<br>
> threads, omp_get_num_procs should return 16, not 8.<br>
Nope.<br>
The queue system allocates 8 cores out of 16 physical cores to my job.<br>
GROMACS sees both values, reports a conflict, and follows the hardware<br>
detection rather than the OpenMP setting. I would think it should do the reverse.<br>
<br>
><br>
> Berk<br>
><br>
> On 2015-06-04 12:49, Berk Hess wrote:<br>
>> Hi,<br>
>><br>
>> I don't think anything changed in the master branch.<br>
>><br>
>> But we do adhere to the OpenMP environment. The value reported in the<br>
>> message comes from omp_get_num_procs, which should report the<br>
>> hardware available. OMP_NUM_THREADS sets the number of OpenMP<br>
>> threads to use; that is respected.<br>
>><br>
>> Cheers,<br>
>><br>
>> Berk<br>
>><br>
>> On 2015-06-04 11:21, David van der Spoel wrote:<br>
>>> Hi,<br>
>>><br>
>>> why does GROMACS in the master branch not adhere to the OpenMP<br>
>>> environment?<br>
>>><br>
>>> Number of hardware threads detected (16) does not match the number<br>
>>> reported by OpenMP (8).<br>
>>> Consider setting the launch configuration manually!<br>
>>> Reading file md.tpr, VERSION 5.1-beta1-dev-20150603-99a1e1f-dirty<br>
>>> (single precision)<br>
>>> Changing nstlist from 10 to 40, rlist from 1.1 to 1.1<br>
>>><br>
>>> Using 1 MPI process<br>
>>> Using 16 OpenMP threads<br>
>>><br>
>>> Cheers,<br>
>><br>
><br>
<br>
<br>
--<br>
David van der Spoel, Ph.D., Professor of Biology<br>
Dept. of Cell & Molec. Biol., Uppsala University.<br>
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.<br>
<a href="mailto:spoel@xray.bmc.uu.se" target="_blank">spoel@xray.bmc.uu.se</a> <a href="http://folding.bmc.uu.se" target="_blank">http://folding.bmc.uu.se</a><br>
--<br>
Gromacs Developers mailing list<br>
<br>
* Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/GMX-developers_List" target="_blank">http://www.gromacs.org/Support/Mailing_Lists/GMX-developers_List</a> before posting!<br>
<br>
* Can't post? Read <a href="http://www.gromacs.org/Support/Mailing_Lists" target="_blank">http://www.gromacs.org/Support/Mailing_Lists</a><br>
<br>
* For (un)subscribe requests visit<br>
<a href="https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-developers" target="_blank">https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-developers</a> or send a mail to <a href="mailto:gmx-developers-request@gromacs.org" target="_blank">gmx-developers-request@gromacs.org</a>.<br>
</blockquote></div>