Hi!

If you are running implicit solvent with no cut-offs, i.e. using the special all-vs-all kernels, then particle decomposition will be used. This exact combination (GB, all-vs-all, domain decomposition) is quite tricky to implement and is not supported at the moment, IIRC.
This could be documented better, sorry.

You could try changing constraints from all-bonds to h-bonds, so that you have only local constraints, which should allow you to run with particle decomposition. Or use a cut-off and domain decomposition.

/Per


On 4 May 2011, at 16:05, Ozlem Ulucan wrote:

> Dear Justin, this was only a test run, and I ran the simulations on my multi-core workstation (4 cores). MPI is no longer required for such a setup. Since I did not set the -nt option to 1, this counts as a parallel run. So the command I sent in my previous e-mail was for the parallel run; for the serial run I set -nt to 1.
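>
> In other words, the two invocations were along these lines (a sketch only; the file names are placeholders, not the exact command from my previous mail):
>
> mdrun -deffnm md -nt 1    # serial run: force mdrun to use a single thread
> mdrun -deffnm md          # parallel run: mdrun starts one thread per core by default (4 here)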
>
> Dear Justin, as I said, I am using a workstation with 4 processors and I have approximately 2200 atoms in my system, i.e. slightly more than 550 atoms per processor. I set all the cut-offs to 0. I really need to run this system in parallel. Any suggestions to make it work?
>
> Here is my run input file:
>
> ;
> ;	File 'mdout.mdp' was generated
> ;	By user: onbekend (0)
> ;	On host: onbekend
> ;	At date: Sun May 1 16:19:29 2011
> ;
>
> ; VARIOUS PREPROCESSING OPTIONS
> ; Preprocessor information: use cpp syntax.
> ; e.g.: -I/home/joe/doe -I/home/mary/roe
> include =
> ; e.g.: -DPOSRES -DFLEXIBLE (note these variable names are case sensitive)
> define =
>
> ; RUN CONTROL PARAMETERS
> integrator = SD
> ; Start time and timestep in ps
> tinit = 0
> dt = 0.002
> nsteps = 500000
> ; For exact run continuation or redoing part of a run
> init_step = 0
> ; Part index is updated automatically on checkpointing (keeps files separate)
> simulation_part = 1
> ; mode for center of mass motion removal
> comm-mode = Angular
> ; number of steps for center of mass motion removal
> nstcomm = 10
> ; group(s) for center of mass motion removal
> comm-grps = system
>
> ; LANGEVIN DYNAMICS OPTIONS
> ; Friction coefficient (amu/ps) and random seed
> bd-fric = 0
> ld-seed = 1993
>
> ; ENERGY MINIMIZATION OPTIONS
> ; Force tolerance and initial step-size
> emtol = 10.0
> emstep = 0.01
> ; Max number of iterations in relax_shells
> niter = 20
> ; Step size (ps^2) for minimization of flexible constraints
> fcstep = 0
> ; Frequency of steepest descents steps when doing CG
> nstcgsteep = 1000
> nbfgscorr = 10
>
> ; TEST PARTICLE INSERTION OPTIONS
> rtpi = 0.05
>
> ; OUTPUT CONTROL OPTIONS
> ; Output frequency for coords (x), velocities (v) and forces (f)
> nstxout = 1000
> nstvout = 1000
> nstfout = 0
> ; Output frequency for energies to log file and energy file
> nstlog = 1000
> nstcalcenergy = -1
> nstenergy = 1000
> ; Output frequency and precision for .xtc file
> nstxtcout = 0
> xtc-precision = 500
> ; This selects the subset of atoms for the .xtc file. You can
> ; select multiple groups. By default all atoms will be written.
> xtc-grps = Protein
> ; Selection of energy groups
> energygrps = Protein
>
> ; NEIGHBORSEARCHING PARAMETERS
> ; nblist update frequency
> nstlist = 0
> ; ns algorithm (simple or grid)
> ns_type = simple
> ; Periodic boundary conditions: xyz, no, xy
> pbc = no
> periodic_molecules = no
> ; nblist cut-off
> rlist = 0
> ; long-range cut-off for switched potentials
> rlistlong = -1
>
> ; OPTIONS FOR ELECTROSTATICS AND VDW
> ; Method for doing electrostatics
> coulombtype = cut-off
> rcoulomb-switch = 0
> rcoulomb = 0
> ; Relative dielectric constant for the medium and the reaction field
> epsilon_r = 1
> epsilon_rf = 1
> ; Method for doing Van der Waals
> vdw-type = Cut-off
> ; cut-off lengths
> rvdw-switch = 0
> rvdw = 0
> ; Apply long range dispersion corrections for Energy and Pressure
> DispCorr = No
> ; Extension of the potential lookup tables beyond the cut-off
> table-extension = 1
> ; Seperate tables between energy group pairs
> energygrp_table =
> ; Spacing for the PME/PPPM FFT grid
> fourierspacing = 0.12
> ; FFT grid size, when a value is 0 fourierspacing will be used
> fourier_nx = 0
> fourier_ny = 0
> fourier_nz = 0
> ; EWALD/PME/PPPM parameters
> pme_order = 4
> ewald_rtol = 1e-05
> ewald_geometry = 3d
> epsilon_surface = 0
> optimize_fft = yes
>
> ; IMPLICIT SOLVENT ALGORITHM
> implicit_solvent = GBSA
>
> ; GENERALIZED BORN ELECTROSTATICS
> ; Algorithm for calculating Born radii
> gb_algorithm = OBC
> ; Frequency of calculating the Born radii inside rlist
> nstgbradii = 1
> ; Cutoff for Born radii calculation; the contribution from atoms
> ; between rlist and rgbradii is updated every nstlist steps
> rgbradii = 0
> ; Dielectric coefficient of the implicit solvent
> gb_epsilon_solvent = 80
> ; Salt concentration in M for Generalized Born models
> gb_saltconc = 0
> ; Scaling factors used in the OBC GB model. Default values are OBC(II)
> gb_obc_alpha = 1
> gb_obc_beta = 0.8
> gb_obc_gamma = 4.85
> gb_dielectric_offset = 0.009
> sa_algorithm = Ace-approximation
> ; Surface tension (kJ/mol/nm^2) for the SA (nonpolar surface) part of GBSA
> ; The value -1 will set default value for Still/HCT/OBC GB-models.
> sa_surface_tension = -1
>
> ; OPTIONS FOR WEAK COUPLING ALGORITHMS
> ; Temperature coupling
> tcoupl = v-rescale
> nsttcouple = -1
> nh-chain-length = 10
> ; Groups to couple separately
> tc-grps = Protein
> ; Time constant (ps) and reference temperature (K)
> tau-t = 0.1
> ref-t = 300
> ; Pressure coupling
> Pcoupl = Parrinello-Rahman
> Pcoupltype = isotropic
> nstpcouple = -1
> ; Time constant (ps), compressibility (1/bar) and reference P (bar)
> tau-p = 1
> compressibility = 4.5e-5
> ref-p = 1.0
> ; Scaling of reference coordinates, No, All or COM
> refcoord_scaling = No
> ; Random seed for Andersen thermostat
> andersen_seed = 815131
>
> ; OPTIONS FOR QMMM calculations
> QMMM = no
> ; Groups treated Quantum Mechanically
> QMMM-grps =
> ; QM method
> QMmethod =
> ; QMMM scheme
> QMMMscheme = normal
> ; QM basisset
> QMbasis =
> ; QM charge
> QMcharge =
> ; QM multiplicity
> QMmult =
> ; Surface Hopping
> SH =
> ; CAS space options
> CASorbitals =
> CASelectrons =
> SAon =
> SAoff =
> SAsteps =
> ; Scale factor for MM charges
> MMChargeScaleFactor = 1
> ; Optimization of QM subsystem
> bOPT =
> bTS =
>
> ; SIMULATED ANNEALING
> ; Type of annealing for each temperature group (no/single/periodic)
> annealing =
> ; Number of time points to use for specifying annealing in each group
> annealing_npoints =
> ; List of times at the annealing points for each group
> annealing_time =
> ; Temp. at each annealing point, for each group.
> annealing_temp =
>
> ; GENERATE VELOCITIES FOR STARTUP RUN
> gen-vel = no
> gen-temp = 300
> gen-seed = 173529
>
> ; OPTIONS FOR BONDS
> constraints = all-bonds
> ; Type of constraint algorithm
> constraint-algorithm = Lincs
> ; Do not constrain the start configuration
> continuation = no
> ; Use successive overrelaxation to reduce the number of shake iterations
> Shake-SOR = no
> ; Relative tolerance of shake
> shake-tol = 0.0001
> ; Highest order in the expansion of the constraint coupling matrix
> lincs-order = 4
> ; Number of iterations in the final step of LINCS. 1 is fine for
> ; normal simulations, but use 2 to conserve energy in NVE runs.
> ; For energy minimization with constraints it should be 4 to 8.
> lincs-iter = 1
> ; Lincs will write a warning to the stderr if in one step a bond
> ; rotates over more degrees than
> lincs-warnangle = 30
> ; Convert harmonic bonds to morse potentials
> morse = no
>
> ; ENERGY GROUP EXCLUSIONS
> ; Pairs of energy groups for which all non-bonded interactions are excluded
> energygrp_excl =
>
> ; WALLS
> ; Number of walls, type, atom types, densities and box-z scale factor for Ewald
> nwall = 0
> wall_type = 9-3
> wall_r_linpot = -1
> wall_atomtype =
> wall_density =
> wall_ewald_zfac = 3
>
> ; COM PULLING
> ; Pull type: no, umbrella, constraint or constant_force
> pull = no
>
> ; NMR refinement stuff
> ; Distance restraints type: No, Simple or Ensemble
> disre = No
> ; Force weighting of pairs in one distance restraint: Conservative or Equal
> disre-weighting = Conservative
> ; Use sqrt of the time averaged times the instantaneous violation
> disre-mixed = no
> disre-fc = 1000
> disre-tau = 0
> ; Output frequency for pair distances to energy file
> nstdisreout = 100
> ; Orientation restraints: No or Yes
> orire = no
> ; Orientation restraints force constant and tau for time averaging
> orire-fc = 0
> orire-tau = 0
> orire-fitgrp =
> ; Output frequency for trace(SD) and S to energy file
> nstorireout = 100
> ; Dihedral angle restraints: No or Yes
> dihre = no
> dihre-fc = 1000
>
> ; Free energy control stuff
> free-energy = no
> init-lambda = 0
> delta-lambda = 0
> foreign_lambda =
> sc-alpha = 0
> sc-power = 0
> sc-sigma = 0.3
> nstdhdl = 10
> separate-dhdl-file = yes
> dhdl-derivatives = yes
> dh_hist_size = 0
> dh_hist_spacing = 0.1
> couple-moltype =
> couple-lambda0 = vdw-q
> couple-lambda1 = vdw-q
> couple-intramol = no
>
> ; Non-equilibrium MD stuff
> acc-grps =
> accelerate =
> freezegrps =
> freezedim =
> cos-acceleration = 0
> deform =
>
> ; Electric fields
> ; Format is number of terms (int) and for all terms an amplitude (real)
> ; and a phase angle (real)
> E-x =
> E-xt =
> E-y =
> E-yt =
> E-z =
> E-zt =
>
> ; User defined thingies
> user1-grps =
> user2-grps =
> userint1 = 0
> userint2 = 0
> userint3 = 0
> userint4 = 0
> userreal1 = 0
> userreal2 = 0
> userreal3 = 0
> userreal4 = 0
>
> Regards,
>
> Ozlem
>
> On Wed, May 4, 2011 at 3:44 PM, Mark Abraham <Mark.Abraham@anu.edu.au> wrote:
>> On 4/05/2011 11:23 PM, Justin A. Lemkul wrote:
>>>
>>> Ozlem Ulucan wrote:
>>>>
>>>> Dear Gromacs Users,
>>>>
>>>> I have been trying to simulate a protein in implicit solvent. When I used a single processor by setting -nt to 1, I did not encounter any problem. But when I tried to run the simulation on more than one processor, I got the following error:
>>>>
>>>> Fatal error:
>>>> Constraint dependencies further away than next-neighbor
>>>> in particle decomposition. Constraint between atoms 2177--2179 evaluated
>>>> on node 3 and 3, but atom 2177 has connections within 4 bonds (lincs_order)
>>>> of node 1, and atom 2179 has connections within 4 bonds of node 3.
>>>> Reduce the # nodes, lincs_order, or
>>>> try domain decomposition.
>>>>
>>>> I set the lincs_order parameter in the .mdp file to different values, but it did not help. I have some questions regarding the information above.
>>
>> See comments about lincs_order in 7.3.18 of the manual. Obviously, only smaller values of lincs_order can help (but if this is not obvious, please consider how obvious "it did not help" is :-)).
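>>
>> For example (an illustrative value only, not a recommendation tuned to your system), that means trying something like
>>
>> lincs-order = 2
>>
>> in the .mdp rather than a larger value.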
>>
>>>> 1) Is it possible to run implicit solvent simulations in parallel?
>>>
>>> Yes.
>>>
>>>> 2) As far as I know, GROMACS uses domain decomposition by default. Why does GROMACS use particle decomposition in my simulations when I did not ask for it?
>>>
>>> Without seeing the exact commands you gave, there is no plausible explanation. DD is used by default.
>>
>> Not quite true, unfortunately. With the cut-offs set to zero, the all-against-all GB loops are used, and those silently require PD. It should write something to the log file.
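>>
>> Concretely, the posted .mdp has
>>
>> rlist    = 0
>> rcoulomb = 0
>> rvdw     = 0
>> rgbradii = 0
>>
>> which is what selects the all-vs-all GB kernels (and hence PD). A sketch of a cut-off GB setup that could use domain decomposition instead might look like the following; the 1.4 nm cut-offs and the nstlist value are illustrative assumptions only, not values recommended anywhere in this thread:
>>
>> nstlist     = 10
>> ns_type     = grid      ; DD does not work with simple neighbour searching
>> rlist       = 1.4
>> rcoulomb    = 1.4
>> rvdw        = 1.4
>> rgbradii    = 1.4       ; kept equal to rlist
>> constraints = h-bonds   ; optional: keeps the constraints local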
>>
>>> -Justin
>>>
>>>> Any suggestions are appreciated very much.
>>>> I am using gromacs-4.5.4 with the CHARMM force field and the OBC implicit solvent model. If you need further information, e.g. a run input file, let me know.
>>
>> A run input file would have helped me avoid the guessing above about those cut-offs :-)
>>
>> The real issue is that not all systems can be parallelized effectively by a given implementation. How many processors and atoms are we talking about? If there aren't hundreds of atoms per processor, parallelism is not going to be worthwhile.
>>
>> Mark