<div class="moz-cite-prefix">Hi,<br>
<br>
I would say the OMP_NUM_THREADS goes above anything, since that
actually tells to use that many threads. Even in omp_get_num_procs
tell there are fewer cores, you might want to oversubscribe. I
assume OMP_NUM_THREADS was not set in this case (or it was set to
16), otherwise 8 threads would have been used.<br>
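
To illustrate the difference, here is a standalone probe (a sketch I just typed, not GROMACS code): omp_get_num_procs() reports what the runtime thinks the hardware offers, while omp_get_max_threads() reflects OMP_NUM_THREADS when it is set:

#include <stdio.h>
#include <omp.h>

/* Standalone probe, not GROMACS code; compile with e.g. gcc -fopenmp. */
int main(void)
{
    /* The runtime's view of the hardware; on a shared node this is
       often just the job's CPU allocation, not the whole machine. */
    printf("omp_get_num_procs():   %d\n", omp_get_num_procs());

    /* The default thread count for parallel regions; this honours
       OMP_NUM_THREADS and may legitimately exceed num_procs when
       you want to oversubscribe. */
    printf("omp_get_max_threads(): %d\n", omp_get_max_threads());
    return 0;
}

On an 8-core allocation with OMP_NUM_THREADS=16 this prints 8 and 16, which is exactly the oversubscription case.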

We could restrict the number of available hardware threads to omp_get_num_procs if it conflicts with the number of hardware threads detected by Gromacs. But I guess that could still be problematic. What would happen if you ask for half of a node, but start 2 MPI processes that both use OpenMP threads?
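
To make that worry concrete (hypothetical numbers and names): both ranks' OpenMP runtimes would see the same 8-core allocation, so any per-rank use of omp_get_num_procs needs a ranks-per-node count that mdrun cannot determine reliably:

/* Hypothetical sketch, not mdrun code: every rank on the node gets the
   same answer from omp_get_num_procs for the shared 8-core allocation. */
int threads_per_rank(int omp_procs, int ranks_on_this_node)
{
    /* ranks_on_this_node is exactly what we cannot detect reliably on
       a shared node; wrongly assuming 1 would start 2 * 8 = 16 threads
       on the 8 allocated cores. */
    return omp_procs / ranks_on_this_node;
}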

Berk

On 2015-06-04 14:27, Mark Abraham wrote:
<div dir="ltr">Hi,<br>
<br>
<div>Node sharing cannot be automagically supported, because
there's no "reliable" source of information except the user.
This is nothing new (e.g. <a moz-do-not-send="true"
href="http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores">http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores</a>).
mdrun can't know whether omp_get_num_procs or OMP_NUM_THREADS
is more reliable in the general case (naturally, every job
scheduler is different, and we can't even assume that there is
a job scheduler who might do it right, e.g. the case of users
sharing an in-house machine). However, if only
omp_get_num_procs is set, then maybe we can use that rather
than assume that the number of hardware threads is appropriate
to use? We'd still report the difference to the user.</div>
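
Roughly the precedence I have in mind, as a sketch with invented names (not a patch against mdrun):

#include <stdlib.h>

/* Sketch only: explicit user request first, then the OpenMP runtime's
   (possibly scheduler-restricted) view, then raw hardware detection. */
static int default_nthreads(int hw_threads_detected, int omp_num_procs)
{
    const char *env = getenv("OMP_NUM_THREADS");
    if (env != NULL && atoi(env) > 0)
    {
        return atoi(env);     /* the user said so; may oversubscribe */
    }
    if (omp_num_procs > 0 && omp_num_procs != hw_threads_detected)
    {
        return omp_num_procs; /* use the restricted view, but report it */
    }
    return hw_threads_detected;
}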

I agree with Berk that a scheduler that used only this mechanism to declare the number of available physical cores would be flawed; consider e.g. a pthreads or TBB code.

Mark

On Thu, Jun 4, 2015 at 1:46 PM David van der Spoel <spoel@xray.bmc.uu.se> wrote:
On 04/06/15 12:51, Berk Hess wrote:
> PS There is something strange on that machine. If Gromacs detects 16
> threads, omp_get_num_procs should return 16, not 8.
Nope.
The queue system allocates 8 cores out of 16 physical cores to my job.
GROMACS sees both values, reports a conflict, and follows the hardware
count rather than the OpenMP setting. I would think it should do the
reverse.

>
> Berk
>
> On 2015-06-04 12:49, Berk Hess wrote:
>> Hi,
>>
>> I don't think anything changed in the master branch.
>>
>> But we do adhere to the OpenMP environment. The value reported in the
>> message comes from omp_get_num_procs, which should be a report about
>> the available hardware. OMP_NUM_THREADS sets the number of OpenMP
>> threads to use, and that is respected.
>>
>> Cheers,
>>
>> Berk
>>
>> On 2015-06-04 11:21, David van der Spoel wrote:
>>> Hi,
>>>
>>> why does GROMACS in the master branch not adhere to the OpenMP
>>> environment?
>>>
>>> Number of hardware threads detected (16) does not match the number
>>> reported by OpenMP (8).
>>> Consider setting the launch configuration manually!
>>> Reading file md.tpr, VERSION 5.1-beta1-dev-20150603-99a1e1f-dirty
>>> (single precision)
>>> Changing nstlist from 10 to 40, rlist from 1.1 to 1.1
>>>
>>> Using 1 MPI process
>>> Using 16 OpenMP threads
>>>
>>> Cheers,
>>
>

--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
spoel@xray.bmc.uu.se    http://folding.bmc.uu.se