<div dir='auto'>Hi,<div dir="auto"><br></div><div dir="auto">But this simply looks like an unstable system (or a too-short tau_p). We cannot easily prevent a segfault from happening when a system is pressure scaling too strongly. The alternative would be to terminate the run with an error message instead of just printing the warning.</div><div dir="auto"><br></div><div dir="auto">Cheers,</div><div dir="auto"><br></div><div dir="auto">Berk</div></div><div class="gmail_extra"><br><div class="gmail_quote">On May 5, 2017 13:13, Aleksei Iupinov <aleksei.iupinov@scilifelab.se> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hello Michael,<br /><br /></div>You are welcome to register and file the bug at the <a href="https://redmine.gromacs.org/">https://redmine.gromacs.org/</a> issue tracker. <br />There you can attach the input file and the logs as well (so that we know the exact Gromacs version, etc.).<br /><br /></div>Best regards,<br /></div>Aleksei<br /></div><div><br /><div class="elided-text">On Fri, May 5, 2017 at 10:23 AM, Michael Brunsteiner <span dir="ltr"><<a href="mailto:mbx0009@yahoo.com">mbx0009@yahoo.com</a>></span> wrote:<br /><blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="color:#000;background-color:#fff;font-family:'helvetica neue' , 'helvetica' , 'arial' , 'lucida grande' , sans-serif;font-size:13px"><div> </div><div><div>Hi,</div><div><br /></div><div>I post this here as it might be a developer rather than a user issue ...</div><div>I ran an NPT sim with simulated annealing of an amorphous solid sample with</div><div dir="ltr">some organic molecules, as in:</div><div dir="ltr"><br /></div><div dir="ltr">gmx grompp -f md-1bar-353-253.mdp -p sis3-7-simp.top -c up-nr2-3.gro -o do-nr2-3.tpr<br />nohup gmx mdrun -v -deffnm do-nr2-3 > er 2>&1 &<br /></div><div dir="ltr"><br /></div><div dir="ltr"> after 
around 30 nanoseconds the simulation stops without further notice.</div><div dir="ltr">Neither the log file nor stdout or stderr contains any indication of what happened,</div><div dir="ltr">but when I look into the relevant syslog file I find:<br /></div><div dir="ltr"><br />May 5 03:24:38 rcpe-sbd-node03 kernel: [82541302.295784] gmx[2218]: <br />segfault at ffffffff9d3ebea0 ip 00007f5b708be3a1 sp 00007f5b657f9dc0 error 7 in libgromacs.so.2.3.0[7f5b706d9000+1d1e000]<br /><br />When I restart the sim on the same computer and from the last cpt file, as in: <br /></div><div dir="ltr"><br /></div><div dir="ltr">nohup gmx mdrun -v -deffnm do-nr2-3 -cpi do-nr2-3.cpt -noappend > er 2>&1 &<br /><br /></div><div dir="ltr">the sim happily continues beyond the point where it previously segfaulted, without any further issues ... <br /></div><div dir="ltr"><br /></div><div dir="ltr">The tpr file is too large to attach (if anybody's interested, I can upload it somewhere).</div><div dir="ltr">Below I put the last 30 or so lines of both stderr+stdout and the log file.</div>I believe the warning at the end of stderr is harmless, but even if it actually is the reason<div dir="ltr">for the segfault, this still does not explain why nothing is written to stderr when it happens,</div><div dir="ltr">and why the sim works when restarted from the cpt file ... 
Could it be that this is a hardware issue?</div><div dir="ltr"><br /></div><div dir="ltr">Regards,</div><div dir="ltr">Michael</div><div dir="ltr"><br /></div><div dir="ltr"><br /></div><div dir="ltr"> stderr+stdout:</div><div dir="ltr">[..]</div><div dir="ltr"> Brand: Intel(R) Core(TM) i7-4930K CPU @ 3.40GHz<br /> SIMD instructions most likely to fit this hardware: AVX_256<br /> SIMD instructions selected at GROMACS compile time: AVX_256<br /><br /> Hardware topology: Full, with devices<br /> GPU info:<br /> Number of GPUs detected: 1<br /> #0: NVIDIA GeForce GTX 780, compute cap.: 3.5, ECC: no, stat: compatible<br /><br />Reading file do-nr2-3.tpr, VERSION 2016.3 (single precision)<br />Changing nstlist from 20 to 40, rlist from 1.2 to 1.2<br /><br />Using 1 MPI thread<br />Using 12 OpenMP threads <br /><br />1 compatible GPU is present, with ID 0<br />1 GPU auto-selected for this run.<br />Mapping of GPU ID to the 1 PP rank in this node: 0<br /><br />starting mdrun 'system'<br />110000000 steps, 110000.0 ps.<br />step 80: timed with pme grid 40 40 24, coulomb cutoff 1.200: 81.2 M-cycles<br />step 80: the box size limits the PME load balancing to a coulomb cut-off of 1.368<br />step 160: timed with pme grid 32 36 24, coulomb cutoff 1.368: 72.9 M-cycles<br />step 240: timed with pme grid 36 36 24, coulomb cutoff 1.264: 75.7 M-cycles<br />step 320: timed with pme grid 36 40 24, coulomb cutoff 1.216: 78.6 M-cycles<br />step 400: timed with pme grid 40 40 24, coulomb cutoff 1.200: 81.2 M-cycles<br /> optimal pme grid 32 36 24, coulomb cutoff 1.368<br />step 31031000, will finish Fri May 5 14:27:08 2017<br />Step 31031061 Warning: pressure scaling more than 1%, mu: 0.999153 0.982333 0.997814<br /><br /></div><div dir="ltr"><br /></div><div dir="ltr"><br /></div><div dir="ltr">log-file:</div><div dir="ltr">[..]<br /></div></div><div> Step Time<br /> 31030000 31030.00000<br /><br />Current ref_t for group System: 327.9<br /> Energies (kJ/mol)<br /> Bond Angle Proper Dih. 
Improper Dih. LJ-14<br /> 8.65826e+03 1.38650e+04 1.08781e+04 4.26101e+02 6.01339e+03<br /> Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential<br /> -2.98635e+04 -1.12638e+04 1.63267e+04 2.11350e+02 1.52514e+04<br /> Kinetic En. Total Energy Temperature Pressure (bar)<br /> 2.22541e+04 3.75055e+04 3.28069e+02 6.14802e+02<br /><br /> Step Time<br /> 31031000 31031.00000<br /><br />Current ref_t for group System: 327.8<br /> Energies (kJ/mol)<br /> Bond Angle Proper Dih. Improper Dih. LJ-14<br /> 8.41290e+03 1.38660e+04 1.08950e+04 3.51583e+02 5.79937e+03<br /> Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential<br /> -2.99386e+04 -1.15255e+04 1.64549e+04 2.30994e+02 1.45468e+04<br /> Kinetic En. Total Energy Temperature Pressure (bar)<br /> 2.27307e+04 3.72775e+04 3.35095e+02 -1.54620e+00<br /><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div dir="ltr"><br /></div><br /></div><div style="display:block"> <div style="font-family:'helvetica neue' , 'helvetica' , 'arial' , 'lucida grande' , sans-serif;font-size:13px"> <div style="font-family:'helveticaneue' , 'helvetica neue' , 'helvetica' , 'arial' , 'lucida grande' , sans-serif;font-size:16px"> <div> <font size="2" face="Arial"> </font><hr size="1" /><font color="#888888"><b><span style="font-weight:bold"></span></b></font><font color="#888888">-- <br />Gromacs Developers mailing list<br /><br />* Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/GMX-developers_List">http://www.gromacs.org/<wbr />Support/Mailing_Lists/GMX-<wbr />developers_List </a>before posting!<br /><br />* Can't post? 
Read <a href="http://www.gromacs.org/Support/Mailing_Lists">http://www.gromacs.org/<wbr />Support/Mailing_Lists</a><br /><br />* For (un)subscribe requests visit<br /><a href="https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-developers">https://maillist.sys.kth.se/<wbr />mailman/listinfo/gromacs.org_<wbr />gmx-developers </a>or send a mail to <a href="mailto:gmx-developers-request@gromacs.org.">gmx-developers-request@<wbr />gromacs.org.</a><br /><br /></font></div> </div> </div> </div></div></div><br />
</blockquote></div><br /></div>
</blockquote></div><br></div>