Hi,

SD with tau_t=0.1 will make your dynamics a lot slower.

I don't see a reason why there should be a difference between serial and parallel.
Are all simulations single runs, or do you do restarts?

Did you compare the temperatures to check that there is no strong energy loss
or heating, and whether there are differences between the different simulations?
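
For example (a minimal check; the file names serial.edr and parallel.edr are
only placeholders for your energy files), you could extract the temperature
from each run and compare the averages and the drift:

  echo Temperature | g_energy -f serial.edr -o temp_serial.xvg
  echo Temperature | g_energy -f parallel.edr -o temp_parallel.xvg

g_energy also prints the average and RMSD of the selected term, so those
numbers alone should already reveal any systematic heating or cooling.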

Berk

> Date: Wed, 4 Feb 2009 13:35:30 -0500
> From: chris.neale@utoronto.ca
> To: gmx-users@gromacs.org
> Subject: [gmx-users] micelle disaggregated in serial, but not parallel, runs using sd integrator
> 
> Hello,
> 
> I have been experiencing problems with a detergent micelle falling
> apart. This micelle spontaneously aggregated in tip4p and was stable
> for >200 ns. I then took the .gro file from 100 ns after stable
> micelle formation and began some free energy calculations, during
> which the micelle partially disaggregated, even at lambda=0. At first
> I thought that this was related to the free energy code, and indeed
> the energygrps solution posted by Berk did stop my to-be-annihilated
> detergent monomer from flying around the box even at lambda=0.00.
> However, I have been able to reproduce this disaggregation in the
> absence of the free-energy code, so I believe that something else is
> going on; my tests using GMX_NO_SOLV_OPT, separate energygrps, and the
> code change all indicate that this is a separate issue.
> 
> I have been trying to locate the error for a few days, but each round
> of tests takes about 24 h, so progress is slow. Here is a summary of
> what I have learned so far.
> 
> A. Does not fall apart by 2.5 ns:
> GMX 3.3.1, 4.0.2, or 4.0.3
> integrator          = md
> energygrps          = DPC SOL
> tcoupl              = Berendsen
> tc_grps             = DPC SOL
> tau_t               = 0.1 0.1
> ref_t               = 300. 300.
> 
> B. Partial disaggregation or irregular micelle shape observed by 2.5 ns:
> GMX 3.3.1, 4.0.2, or 4.0.3
> integrator          = sd
> energygrps          = System --- or --- DPC SOL
> tc_grps             = System
> tau_t               = 0.1
> ref_t               = 300.
> * GMX 4.0.3 gives the same result with "export GMX_NO_SOLV_OPT=1"
> * GMX 4.0.3 gives the same result when compiled with the tip4p
>   optimization code fix.
> * GMX 4.0.3 gives the same result using tip3p in place of tip4p.
> 
> C. Does not fall apart by 7.5 ns when running the section B options in parallel.
> 
> Common MDP options:
> comm_mode           = linear
> nstcomm             = 1
> comm_grps           = System
> nstlist             = 5
> ns_type             = grid
> pbc                 = xyz
> coulombtype         = PME
> rcoulomb            = 0.9
> fourierspacing      = 0.12
> pme_order           = 4
> vdwtype             = cut-off
> rvdw_switch         = 0
> rvdw                = 1.4
> rlist               = 0.9
> DispCorr            = EnerPres
> Pcoupl              = Berendsen
> pcoupltype          = isotropic
> compressibility     = 4.5e-5
> ref_p               = 1.
> tau_p               = 4.0
> gen_temp            = 300.
> gen_seed            = 9896
> constraints         = all-bonds
> constraint_algorithm= lincs
> lincs-iter          = 1
> lincs-order         = 6
> gen_vel             = no
> unconstrained-start = yes
> dt                  = 0.004
> 
> ##################
> 
> My current hypothesis is that the sd integrator somehow functions
> differently in serial than in parallel in GROMACS versions 3.3.1,
> 4.0.2, and 4.0.3. I suspect that this is not limited to tip4p, since I
> see disaggregation in tip3p also, although I did not control the tip3p
> run and this may not be related to the md/sd difference.
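> 
> One way to make the comparison quantitative (a rough sketch; the file
> names below are placeholders, and DPC is assumed to be available as an
> index group, otherwise one can be made with make_ndx) is to follow the
> radius of gyration of the detergent in the serial and parallel
> trajectories:
> 
>   echo DPC | g_gyrate -f serial.xtc -s run.tpr -o gyr_serial.xvg
>   echo DPC | g_gyrate -f parallel.xtc -s run.tpr -o gyr_parallel.xvg
> 
> A steady rise in the DPC radius of gyration in the serial run, with a
> flat curve in the parallel run, would put a number on the disaggregation
> instead of relying on visual inspection.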
> 
> I realize that I may have other problems; for example, perhaps I should
> have used tau_t=1.0 instead of tau_t=0.1 while using the sd integrator.
> But the fact that a parallel run resolved the problem makes me think
> that it is something else.
> 
> I am currently working to find a smaller test system, but would
> appreciate it if a developer could comment on the likelihood of my
> hypothesis above being correct. Also, any suggestions on sets of mdp
> options that might narrow down the possibilities would be greatly
> appreciated.
> 
> I have included the entire .mdp file from the 4-core job that ran
> without disaggregation:
> 
> integrator          = sd
> comm_mode           = linear
> nstcomm             = 1
> comm_grps           = System
> nstlog              = 50000
> nstlist             = 5
> ns_type             = grid
> pbc                 = xyz
> coulombtype         = PME
> rcoulomb            = 0.9
> fourierspacing      = 0.12
> pme_order           = 4
> vdwtype             = cut-off
> rvdw_switch         = 0
> rvdw                = 1.4
> rlist               = 0.9
> DispCorr            = EnerPres
> Pcoupl              = Berendsen
> pcoupltype          = isotropic
> compressibility     = 4.5e-5
> ref_p               = 1.
> tau_p               = 4.0
> tc_grps             = System
> tau_t               = 0.1
> ref_t               = 300.
> annealing           = no
> gen_temp            = 300.
> gen_seed            = 9896
> constraints         = all-bonds
> constraint_algorithm= lincs
> lincs-iter          = 1
> lincs-order         = 6
> energygrps          = SOL DPC DPN
> ; Free energy control stuff
> free_energy         = no
> init_lambda         = 0.00
> delta_lambda        = 0
> sc_alpha            = 0.0
> sc-power            = 1.0
> sc-sigma            = 0.3
> couple-moltype      = DPN
> couple-lambda0      = vdw-q
> couple-lambda1      = vdw
> couple-intramol     = no
> nsteps              = 50000
> tinit               = 7600
> dt                  = 0.004
> nstxout             = 50000
> nstvout             = 50000
> nstfout             = 50000
> nstenergy           = 2500
> nstxtcout           = 2500
> gen_vel             = no
> unconstrained-start = yes
> 
> ####
> 
> Note that the free energy code was turned on in the above (with
> lambda=0), because I started the debugging when I thought that the
> free-energy code / tip4p combination was causing the differences.
> I also ran this exact mdp file in serial and observed disaggregation
> in ~2 ns.
> 
> Thank you,
> Chris.