<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<title>Re: [gmx-users] Fwd: KALP-15 in DPPC Tutorial Step 0 Segmentation Fault</title>
</head>
<body text="#000000" bgcolor="#ffffff">
On 03/12/2011 12:51 PM, Justin A. Lemkul wrote:<br>
This thread is getting hard to follow, so I have put my new comments in blue.<br>
<blockquote cite="mid:4D7BB2AC.2050807@vt.edu" type="cite">
<br>
Steve Vivian wrote:
<br>
<blockquote type="cite">Based on a preliminary test using multiple
threads, the issue is not
<br>
resolved.
<br>
This leads me to believe that my Unit Cell is not built
properly.
<br>
<br>
Below is the procedure used to build the unit cell. I have
reviewed it many
<br>
times, but I would appreciate any input regarding potential
improvements,
<br>
specifically on the line using trjconv in the EM/Shrink loop.
<br>
<br>
Safe up to here (I hope)...
<br>
<br>
cat KALP_newbox.gro dppc128_whole.gro > system.gro
<br>
<br>
update minim.mdp
<br>
; Strong position restraints for InflateGRO
<br>
#ifdef STRONG_POSRES
<br>
#include "strong_posre.itp"
<br>
#endif
<br>
<br>
Create Strong Position Restraint for protein
<br>
genrestr -f KALP_newbox.gro -o strong_posre.itp -fc 100000
100000 100000
<br>
<br>
Scale Lipid positions by a factor of 4
<br>
perl inflategro.pl system.gro 4 DPPC 14 system_inflated.gro 5
area.dat
<br>
<br>
Begin loop of repeated Energy Minimizations and Shrinking
(repeat loop approximately 25 times until area is approx 71 Ang
sq)
<br>
Begin LOOP (from n=1 to n = 26)
<br>
grompp -f minim.mdp -c system_inf_n.gro -p topol.top -o
em_n.tpr
<br>
mdrun -v -deffnm em_n
<br>
trjconv -s em_n.tpr -f em_n.gro -o em_n_out.gro -pbc mol -ur
compact
<br>
perl inflategro.pl em_n_out.gro 0.95 DPPC 0 sys_shr_1.gro 5
<br>
</blockquote>
<br>
One problem here: you start the loop every time with
system_inf_n? What is system_inf_n? It seems that you should
start one (non-loop) shrink and then process the subsequent
shrinking steps from there. At the end of the loop, you write to
sys_shr_1.gro, which then never gets used again.
<br>
<br>
-Justin
<br>
</blockquote>
<br>
<font color="#000066">I apologize for the poor description here. <br>
The first loop begins with the System_inflate.gro file created in
the earlier process. The rest of the first loop is shown.<br>
<br>
At the beginning of the second loop, the input file is
system_shrink_1.gro, which was the output file at the end of loop
1. Proceed through the loop, updating the n from 1 to 2 in each
step. Output file is system_shrink_2.gro.<br>
<br>
Input file for loop 3 is system_shrink_2.gro, ....<br>
<br>
....Output from loop 26 is system_shrink_26.gro, and the on-screen
output reports the updated lipid area, which has reached the
target value.<br>
<br>
Then proceed to add water (after changing vdw radius of C atoms)<br>
Add ions<br>
EM again<br>
Then Equilibrate.</font><br>
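The iteration scheme described above can be sketched as a shell loop. This is a dry run that only prints the commands for each pass (replace each echo with the real call to execute them); the file names follow the naming used in this thread and are illustrative, not verbatim from my scripts.

```shell
#!/bin/sh
# Dry-run sketch of the EM/shrink loop: print the commands for each pass.
# Assumed names: system_inflated.gro as the initial input, and
# system_shrink_N.gro as the output of pass N (fed into pass N+1).
input=system_inflated.gro
n=1
while [ "$n" -le 26 ]; do
    echo "grompp -f minim.mdp -c ${input} -p topol.top -o em_${n}.tpr"
    echo "mdrun -v -deffnm em_${n}"
    echo "trjconv -s em_${n}.tpr -f em_${n}.gro -o em_${n}_out.gro -pbc mol -ur compact"
    echo "perl inflategro.pl em_${n}_out.gro 0.95 DPPC 0 system_shrink_${n}.gro 5 area_shrink_${n}.dat"
    # Each pass starts from the previous pass's shrunken output.
    input=system_shrink_${n}.gro
    n=$((n + 1))
done
```

Replacing each echo with the actual command gives a runnable loop; checking the area per lipid reported by InflateGRO after each pass tells you when to stop shrinking.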
<br>
<br>
<font color="#000099">Output from Energy Minimization (after
addition of water and ions)<br>
<br>
Steepest Descents converged to Fmax &lt; 1000 in 1814 steps<br>
Potential Energy = -3.81270276926664e+05<br>
Maximum force = 9.38712770623942e+02 on atom 2292<br>
Norm of force = 2.88274347161761e+01</font><br>
<br>
<br>
<br>
<blockquote cite="mid:4D7BB2AC.2050807@vt.edu" type="cite">
<br>
<blockquote type="cite">ar_shr1.dat
<br>
End LOOP
<br>
<br>
Add water
<br>
Add ions
<br>
Re-run EM
<br>
Equilibrate (and watch it all explode)
<br>
<br>
<br>
<br>
<br>
<br>
-----Original Message-----
<br>
From: <a class="moz-txt-link-abbreviated" href="mailto:gmx-users-bounces@gromacs.org">gmx-users-bounces@gromacs.org</a>
[<a class="moz-txt-link-freetext" href="mailto:gmx-users-bounces@gromacs.org">mailto:gmx-users-bounces@gromacs.org</a>]
<br>
On Behalf Of Justin A. Lemkul
<br>
Sent: Thursday, March 10, 2011 12:56 PM
<br>
To: Discussion list for GROMACS users
<br>
Subject: Re: [gmx-users] Fwd: KALP-15 in DPPC Tutorial Step 0
Segmentation
<br>
Fault
<br>
<br>
<br>
<br>
Steve Vivian wrote:
<br>
<blockquote type="cite">On 03/08/2011 10:23 PM, Justin A. Lemkul
wrote:
<br>
<blockquote type="cite">
<br>
Steve Vivian wrote:
<br>
<blockquote type="cite">New to Gromacs.
<br>
<br>
Worked my way through the tutorial with relatively few
issues until the Equilibration stage. My system blows
up!!
<br>
<br>
Returned to the Topology stage and rebuilt my system
ensuring that I followed the procedure correctly for the
InflateGro process. It appears to be correct, reasonable
lipid area, no water inside my bilayer, vmd shows a
structure which appears normal (although I am new to
this). There are voids between bilayer and water
molecules, but this is to be expected, correct?
<br>
<br>
Energy Minimization repeatedly produces results within the
expected range.
<br>
<br>
Again system blows up at equilibration, step 0
segmentation fault. Regardless of whether I attempt the
NVT or Anneal_Npt process (using the provided mdp files,
including the updates for restraints on the protein and
the lipid molecules).
<br>
<br>
I have attempted many variations of the nvt.mdp and
anneal_npt.mdp files hoping to resolve my issue, but with
no success. I will post the log information from the
nvt.mdp file included in the tutorial.
<br>
<br>
Started mdrun on node 0 Tue Mar 8 15:42:35 2011
<br>
<br>
<pre>
           Step           Time         Lambda
              0        0.00000        0.00000

   Grid: 9 x 9 x 9 cells
   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    8.52380e+01    6.88116e+01    2.23939e+01   -3.03546e+01    2.71260e+03
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip. Position Rest.
    1.49883e+04   -1.42684e+03   -2.78329e+05   -1.58540e+05    2.57100e+00
      Potential    Kinetic En.   Total Energy  Conserved En.    Temperature
   -4.20446e+05  *1.41436e+14    1.41436e+14   1.41436e+14    1.23343e+12*
 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -1.56331e+02    5.05645e+12    1.18070e+01
</pre>
<br>
<br>
<br>
As you can see, the Potential Energy is reasonable, but the
Kinetic Energy and Temperature seem unrealistic.
<br>
<br>
I am hoping that this is enough information for a more
experienced Gromacs user to provide guidance. Note that
I have tried all of the suggestions that I read on the
mailing list and in the "blowing up" section of the
manual, specifically:
<br>
-reduced time steps in Equilibration Stages
<br>
-reduced Fmax during EM stage (down as low as 100 kJ/(mol nm),
which did not help)
<br>
-modified neighbour-list parameters
<br>
<br>
Any help is appreciated. I can attach and forward any
further information as required, please let me know.
<br>
<br>
</blockquote>
Which Gromacs version are you using? It looks like you're
running in serial; is that correct? If not, please
provide your mdrun command line. If you're using version
4.5.3 in serial, I have identified a very problematic bug
that seems to affect a wide variety of systems that could be
related:
<br>
<br>
</blockquote>
Yes I am currently using Gromacs 4.5.3 in serial.
<br>
<br>
<blockquote type="cite"><a class="moz-txt-link-freetext" href="http://redmine.gromacs.org/issues/715">http://redmine.gromacs.org/issues/715</a>
<br>
<br>
I have seen even the most robust tutorial systems fail as
well, as some new lab members experienced the same problem.
The workaround is to run in parallel.
<br>
<br>
</blockquote>
If I understand you correctly, the recommended workaround is
to re-configure gromacs 4.5.3 with mpi enabled and complete
the Equilibration and Production simulation in parallel.
<br>
<br>
</blockquote>
<br>
Strictly speaking, an external MPI library is no longer
required. Gromacs
<br>
now builds with internal threading support (as long as your
hardware and
<br>
compilers support such features). In fact, thread support
builds by default if
<br>
possible, so if your mdrun has an -nt flag, you don't need to do
anything else except
<br>
use "mdrun -nt (number of threads)" when running your command.
<br>
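For example (a minimal sketch; the deffnm "nvt" is taken from the tutorial, and the thread count shown here is just a stand-in for your own core count):

```shell
#!/bin/sh
# Pick a thread count (here: the number of online processors, falling
# back to 4 if getconf is unavailable) and print the mdrun invocation.
NTHREADS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)
echo "mdrun -nt ${NTHREADS} -v -deffnm nvt"
```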
<br>
<blockquote type="cite">Do you have a recommendation for which
mpi library to install (lam mpi seems to be referenced in
other articles on the mailing list)?
<br>
<br>
</blockquote>
<br>
I've had good luck with OpenMPI in the past, but this is not
strictly
<br>
necessary in all cases.
<br>
<br>
<blockquote type="cite">Are there documented installation
procedures for this process (upgrading to gromacs with mpi
enabled)?
<br>
<br>
</blockquote>
<br>
<a class="moz-txt-link-freetext" href="http://www.gromacs.org/Downloads/Installation_Instructions#Using_MPI">http://www.gromacs.org/Downloads/Installation_Instructions#Using_MPI</a>
<br>
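For reference, the 4.5.x autotools build boils down to something like the following. This is a dry-run sketch only (the "_mpi" suffix is a common naming convention, not a requirement), so check the exact flags against the page above:

```shell
#!/bin/sh
# Dry-run sketch of an MPI-enabled GROMACS 4.5.x build: print the steps
# (make mdrun / make install-mdrun build and install only the MPI mdrun).
CONFIGURE="./configure --enable-mpi --program-suffix=_mpi"
echo "$CONFIGURE"
echo "make mdrun"
echo "make install-mdrun"
```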
<br>
-Justin
<br>
<br>
<blockquote type="cite">Thanks for your assistance.
<br>
Steve.
<br>
<br>
<blockquote type="cite">-Justin
<br>
<br>
<blockquote type="cite">Regards,
<br>
Steve Vivian.
<br>
<a class="moz-txt-link-abbreviated" href="mailto:svivian@uwo.ca">svivian@uwo.ca</a>
<br>
<br>
<br>
<br>
</blockquote>
</blockquote>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>