On 2017-09-18 18:34, John Eblen wrote:
> Hi Szilárd
>
> These runs used 2M huge pages. I will file a redmine shortly.
>
> On a related topic, how difficult would it be to modify GROMACS to
> support > 50% PME nodes?
That's not so hard, but I see little benefit, since in that case the MPI
communication is not reduced much compared to all ranks doing PME.

Berk
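
For context, the PME/PP split is requested with mdrun's -npme option; as
the question above implies, the current code does not accept more
PME-only ranks than half of the total. A minimal sketch (the rank counts
and .tpr name are illustrative only):

  $ gmx mdrun -ntmpi 8 -npme 4 -s topol.tpr   # 4 of 8 thread-MPI ranks dedicated to PME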
> John
>
> On Fri, Sep 15, 2017 at 6:37 PM, Szilárd Páll <pall.szilard@gmail.com> wrote:
>> Hi John,
>>
>> Thanks for diagnosing the issue!
>>
>> We have been aware of this behavior, but it has been both intentional
>> (we re-scan grids after the first pass at least once more) and simply
>> considered "not too big of a deal", given that in general mdrun has a
>> very low memory footprint. However, it seems that, at least on this
>> particular machine, our assumption was wrong. What are the page sizes
>> on Cori KNL?
>>
>> Can you please file a redmine with your observations?
>>
>> Thanks,
>> --
>> Szilárd
>>
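
Aside on the page-size question: on a Linux compute node the base and
huge page sizes can be read directly; the Cray module name below is an
assumption based on the 2M pages mentioned above.

  $ getconf PAGESIZE                     # base page size in bytes, typically 4096
  $ grep -i hugepagesize /proc/meminfo   # huge page size, e.g. "Hugepagesize: 2048 kB"
  $ module load craype-hugepages2M       # assumed Cray way of enabling 2M huge pages
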
>> On Fri, Sep 15, 2017 at 8:25 PM, John Eblen <jeblen@acm.org> wrote:
>>> This issue appears not to be a GROMACS problem so much as a problem
>>> with "huge pages" that is triggered by PME tuning. PME tuning creates
>>> a large data structure for every cutoff that it tries, which is
>>> replicated on each PME node. These data structures are not freed
>>> during tuning, so memory usage expands. Normally it is still small
>>> enough not to cause problems. With huge pages, however, I get errors
>>> from "libhugetlbfs" and very slow runs if more than about five
>>> cutoffs are attempted.
>>>
>>> Sample output on NERSC Cori KNL with 32 nodes. Input system size is
>>> 248,101 atoms.
>>>
>>> step 0
>>> step 100, remaining wall clock time:    24 s
>>> step  140: timed with pme grid 128 128 128, coulomb cutoff 1.200: 66.2 M-cycles
>>> step  210: timed with pme grid 112 112 112, coulomb cutoff 1.336: 69.6 M-cycles
>>> step  280: timed with pme grid 100 100 100, coulomb cutoff 1.496: 63.6 M-cycles
>>> step  350: timed with pme grid 84 84 84, coulomb cutoff 1.781: 85.9 M-cycles
>>> step  420: timed with pme grid 96 96 96, coulomb cutoff 1.559: 68.8 M-cycles
>>> step  490: timed with pme grid 100 100 100, coulomb cutoff 1.496: 68.3 M-cycles
>>> libhugetlbfs [nid08887:140420]: WARNING: New heap segment map at 0x10001200000 failed: Cannot allocate memory
>>> libhugetlbfs [nid08881:97968]: WARNING: New heap segment map at 0x10001200000 failed: Cannot allocate memory
>>> libhugetlbfs [nid08881:97978]: WARNING: New heap segment map at 0x10001200000 failed: Cannot allocate memory
>>>
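
The grid sizes in that scan track the cutoff: the tuning scales the
cutoff and the PME grid spacing by the same factor, so each trial grid
is roughly the 128-point grid divided by the cutoff ratio, rounded to an
FFT-friendly size, e.g.

  $ python3 -c 'print(128 * 1.200 / 1.496)'   # ≈ 102.7, which becomes the 100-point grid above

so every extra trial cutoff does indeed come with its own, differently
sized PME setup.
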
>>> Szilárd, to answer your questions: this is the Verlet scheme. The
>>> problem happens during tuning, and no problems occur if -notunepme is
>>> used. In fact, the best performance thus far has been with 50% PME
>>> nodes, using huge pages, and '-notunepme'.
>>>
>>> John
>>>
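
For reference, a launch line of the kind John describes (-npme and
-notunepme are standard mdrun options; the SLURM geometry, binary name,
and .tpr name are illustrative assumptions):

  $ srun -n 128 gmx_mpi mdrun -s topol.tpr -npme 64 -notunepme   # 50% PME-only ranks, PME tuning disabled
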
>>> On Wed, Sep 13, 2017 at 6:20 AM, Szilárd Páll <pall.szilard@gmail.com> wrote:
>>>> Forking the discussion, as we've now learned more about the issue
>>>> Åke is reporting and it is rather dissimilar.
>>>>
>>>> On Mon, Sep 11, 2017 at 8:09 PM, John Eblen <jeblen@acm.org> wrote:
>>>>> Hi Szilárd
>>>>>
>>>>> No, I'm not using the group scheme.
>>>>
>>>>  $ grep -i 'cutoff-scheme' md.log
>>>>    cutoff-scheme                  = Verlet
>>>>
>>>>> The problem seems similar because:
>>>>>
>>>>> 1) Deadlocks and very slow runs can be hard to distinguish.
>>>>> 2) Since Mark mentioned it, I assume he believes PME tuning is a
>>>>>    possible cause, which is also the cause in my situation.
>>>>
>>>> Does that mean you tested with "-notunepme" and the excessive memory
>>>> usage could not be reproduced? Did the memory usage increase only
>>>> during the tuning, or did it keep increasing after the tuning
>>>> completed?
>>>>
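
One simple way to answer that, sketched here as a generic suggestion
rather than anything used in these runs: sample the resident-set size of
an mdrun rank while it runs.

  $ pid=$(pgrep -u "$USER" -n mdrun)   # assumes the binary name contains "mdrun"; adjust for e.g. gmx_mpi
  $ while kill -0 "$pid" 2>/dev/null; do grep -E 'VmRSS|VmHWM' /proc/$pid/status; sleep 60; done
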
>>>>> 3) Åke may be experiencing higher-than-normal memory usage, as far
>>>>>    as I know. Not sure how you know otherwise.
>>>>> 4) By "successful," I assume you mean the tuning had completed. That
>>>>>    doesn't mean, though, that the tuning could not be creating
>>>>>    conditions that cause the problem, like an excessively high
>>>>>    cutoff.
>>>>
>>>> Sure. However, it's unlikely that the tuning creates conditions under
>>>> which the run proceeds past the initial tuning phase and keeps
>>>> allocating memory (which is more likely to be the source of issues).
>>>>
>>>> I suggest first ruling out the bug I linked; if that's not the
>>>> culprit, we can have a closer look.
>>>>
>>>> Cheers,
>>>> --
>>>> Szilárd
>>>>
>>>>>
>>>>> John
>>>>>
>>>>> On Mon, Sep 11, 2017 at 1:09 PM, Szilárd Páll <pall.szilard@gmail.com> wrote:
>>>>>> John,
>>>>>>
>>>>>> In what way do you think your problem is similar? Åke seems to be
>>>>>> experiencing a deadlock after successful PME tuning, much later
>>>>>> during the run, but no excessive memory usage.
>>>>>>
>>>>>> Do you happen to be using the group scheme with 2016.x (release
>>>>>> code)?
>>>>>>
>>>>>> Your issue sounds more like it could be related to the excessive
>>>>>> tuning bug with the group scheme that was fixed quite a few months
>>>>>> ago, but the fix is yet to be released
>>>>>> (https://redmine.gromacs.org/issues/2200).
>>>>>>
>>>>>> Cheers,
>>>>>> --
>>>>>> Szilárd
>>>>>>
>>>>>> On Mon, Sep 11, 2017 at 6:50 PM, John Eblen <jeblen@acm.org> wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> I'm having a similar problem that is related to PME tuning. When
>>>>>>> it is enabled, GROMACS often, but not always, slows to a crawl and
>>>>>>> uses excessive amounts of memory. Using "huge pages" and setting a
>>>>>>> high number of PME processes seem to exacerbate the problem.
>>>>>>>
>>>>>>> Also, occurrences of this problem seem to correlate with how high
>>>>>>> the tuning raises the cutoff value.
>>>>>>>
>>>>>>> Mark, can you give us more information on the problems with PME
>>>>>>> tuning? Is there a redmine?
>>>>>>>
>>>>>>> Thanks
>>>>>>> John
>>>>>>>
>>>>>>> On Mon, Sep 11, 2017 at 10:53 AM, Mark Abraham <mark.j.abraham@gmail.com> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Thanks. Was PME tuning active? Does it reproduce if that is
>>>>>>>> disabled? Is the PME tuning still active? How many steps have
>>>>>>>> taken place (at least as reported in the log file, but ideally
>>>>>>>> from the processes)?
>>>>>>>>
>>>>>>>> Mark
>>>>>>>>
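
A quick, generic way to see the last step the log reached; the grep
pattern assumes the usual md.log energy-block header:

  $ tail -n 40 md.log                            # last energy block written so far
  $ grep -E '^ +Step +Time' md.log | tail -n 1   # assumed header format of the energy blocks
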
>>>>>>>> On Mon, Sep 11, 2017 at 4:42 PM Åke Sandgren <ake.sandgren@hpc2n.umu.se> wrote:
>>>>>>>>> My debugger run finally got to the lockup.
>>>>>>>>>
>>>>>>>>> All processes are waiting on various MPI operations.
>>>>>>>>>
>>>>>>>>> Attached a stack dump of all 56 tasks.
>>>>>>>>>
>>>>>>>>> I'll keep the debug session running for a while in case anyone
>>>>>>>>> wants some more detailed data. This is a RelWithDeb build,
>>>>>>>>> though, so not everything is available.
>>>>>>>>>
>>>>>>>>> On 09/08/2017 11:28 AM, Berk Hess wrote:
>>>>>>>>>> But you should be able to get some (limited) information by
>>>>>>>>>> attaching a debugger to an already running process with a
>>>>>>>>>> release build.
>>>>>>>>>>
>>>>>>>>>> If you plan on compiling and running a new case, use a release +
>>>>>>>>>> debug-symbols build. That should run as fast as a release build.
>>>>>>>>>>
>>>>>>>>>> Cheers,
>>>>>>>>>>
>>>>>>>>>> Berk
>>>>>>>>>>
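
Generic versions of those two suggestions, for anyone following along;
the commands are standard gdb and CMake usage, and <pid> stands for one
mdrun rank's process id:

  $ gdb -p <pid> -batch -ex 'thread apply all bt'   # attach to a running rank and dump all thread stacks
  $ cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo      # configure a release build with debug symbols
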
>>>>>>>>>> On 2017-09-08 11:23, Åke Sandgren wrote:
>>>>>>>>>>> We have at least one case that, when run over 2 or more nodes,
>>>>>>>>>>> quite often (always) hangs, i.e. no more output in md.log or
>>>>>>>>>>> otherwise, while mdrun still consumes CPU time. It takes a
>>>>>>>>>>> random time before it happens, like 1-3 days.
>>>>>>>>>>>
>>>>>>>>>>> The case can be shared if someone else wants to investigate.
>>>>>>>>>>> I'm planning to run it in the debugger to be able to break and
>>>>>>>>>>> look at states when it happens, but since it takes so long with
>>>>>>>>>>> the production build it is not something I'm looking forward to.
>>>>>>>>>>>
>>>>>>>>>>> On 09/08/2017 11:13 AM, Berk Hess wrote:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> We are far behind schedule for the 2017 release. We are
>>>>>>>>>>>> working hard on it, but I don't think we can promise a date
>>>>>>>>>>>> yet.
>>>>>>>>>>>>
>>>>>>>>>>>> We have a 2016.4 release planned for this week (might slip to
>>>>>>>>>>>> next week). But if you can give us enough details to track
>>>>>>>>>>>> down your hanging issue, we might be able to fix it in 2016.4.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
>>>>>>>>> Internet: ake@hpc2n.umu.se   Phone: +46 90 7866134   Fax: +46 90-580 14
>>>>>>>>> Mobile: +46 70 7716134   WWW: http://www.hpc2n.umu.se