Hi,

If your comm group is a single molecule, it will work in serial.
In parallel it will only work when no part of the molecule crosses
the edge of the box at any time.
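One way to check that condition is to scan the trajectory for how close
any atom of the comm group gets to a box edge. Below is a rough sketch
in Python, not a GROMACS tool: it assumes a rectangular box, a
multi-frame .gro written with trjconv, and placeholder atom numbers for
the group, so adapt it before use.

*******************************************
# check_comm_group.py -- warn when any atom of the comm group comes
# close to (or crosses) the box edge in a multi-frame .gro trajectory.
# Assumptions: rectangular box; GROUP holds the 1-based atom numbers
# of the comm group (the values below are placeholders).

GROUP  = range(1, 5182)   # e.g. atoms 1-5181 of the surface
MARGIN = 0.1              # warn within this distance of an edge (nm)

def frames(filename):
    """Yield (title, positions, box) for each frame of a .gro file."""
    with open(filename) as f:
        while True:
            title = f.readline()
            if not title:
                return
            natoms = int(f.readline())
            pos = []
            for _ in range(natoms):
                line = f.readline()
                # fixed-width .gro columns: x, y, z in nm
                pos.append((float(line[20:28]),
                            float(line[28:36]),
                            float(line[36:44])))
            box = [float(v) for v in f.readline().split()[:3]]
            yield title.strip(), pos, box

for title, pos, box in frames("traj.gro"):
    for i in GROUP:
        if any(c < MARGIN or c > b - MARGIN
               for c, b in zip(pos[i - 1], box)):
            print("%s: atom %d near box edge at (%.3f, %.3f, %.3f)"
                  % ((title, i) + pos[i - 1]))
            break
*******************************************

Make the group whole first (e.g. trjconv -pbc mol), otherwise a
molecule wrapped across the boundary in the output will trigger false
positives.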
Berk

> Date: Wed, 31 Mar 2010 12:53:16 +0200
> From: aerbas@ph.tum.de
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] Parallel pulling with Gromacs 4.0.7: COMM mode problem
>
> Berk Hess wrote:
> > Hi,
> >
> > I am not aware of any issues with parallel pulling in 4.0.7.
> >
> > Did you see this note in your log file:
> > comm-mode angular will give incorrect results when the comm group
> > partially crosses a periodic boundary
> >
> > If your system is periodic, you should not use comm mode angular.
> >
> > Berk
> Hi
> Should I avoid comm mode angular only for parallel runs, or always,
> whether the run is parallel or not?
> That is what I have been using (on a single machine) for many
> simulations without any problem.
> I saw the warning, but it looks like there is no crossing between
> init_grp0 and the periodic box, and the warning appears only for
> parallel runs, not for single-machine runs.
>
> thanks
>
> Aykut
> >
> > > Date: Wed, 31 Mar 2010 12:32:43 +0200
> > > From: aerbas@ph.tum.de
> > > To: gmx-users@gromacs.org
> > > Subject: Re: [gmx-users] Parallel pulling with Gromacs 4.0.7: COMM mode problem
> > >
> > > Hi everybody
> > >
> > > There is still a problem with the pull code running in parallel. With
> > > comm_grps active, the positions (pullx.xvg) and forces (pullf.xvg)
> > > are given relative to absolute coordinates instead of relative to
> > > your reference group.
> > >
> > > comm_mode = angular
> > > comm_grps = surface
> > >
> > > and, for the pulling part,
> > > init_grps = surface
> > >
> > > should give the coordinates and forces with respect to the surface,
> > > as expected. However, in parallel runs that is not the case, as can
> > > be seen from the pull output files given below.
> > >
> > > On a single machine, everything works well.
> > > At the moment, it seems the best thing to do is to calculate the
> > > forces from the positions, accounting for the motion of the
> > > reference group.
> > >
> > > thanks and
> > > bests
> > >
> > > > Dear Aykut:
> > > >
> > > > 1. Did you see the log file message:
> > > >
> > > > "comm-mode angular will give incorrect results when the comm group
> > > > partially crosses a periodic boundary"
> > > Indeed, I saw this. But the surface, which is roughly 5 nm across,
> > > is approx. 0.5 nm away from the box edge. There is no way any
> > > crossing can occur.
> > > And for G3 and G4 on a single machine, you do not get such a warning.
> > > >
> > > > 2. You say "Actually you might be right about the domain
> > > > decomposition", but it seems like you didn't run it on gmx 4 in
> > > > serial or with particle decomposition.
> > > >
> > > Very sorry about this, I forgot to append the log for a single
> > > machine with G4.
> > >
> > >
> > > log file for G4 on a single machine
> > > *******************************************
> > > Enabling SPC water optimization for 3021 molecules.
> > >
> > > Configuring nonbonded kernels...
> > > Testing x86_64 SSE2 support... present.
> > >
> > >
> > > Removing pbc first time
> > >
> > > Will apply umbrella COM pulling in geometry 'position'
> > > between a reference group and 1 group
> > > Pull group 0: 5181 atoms, mass 56947.551
> > > Pull group 1: 13 atoms, mass 116.120
> > >
> > > Initializing LINear Constraint Solver
> > >
> > >
> > > -------- -------- --- Thank You --- -------- --------
> > >
> > > Center of mass motion removal mode is Angular
> > > We have the following groups for center of mass motion removal:
> > > 0: DIAM
> > >
> > > There are: 14359 Atoms
> > > Max number of connections per atom is 94
> > > Total number of connections is 403131
> > > Max number of graph edges per atom is 4
> > > Total number of graph edges is 30690
> > >
> > > Constraining the starting coordinates (step 0)
> > >
> > > Constraining the coordinates at t0-dt (step 0)
> > > RMS relative constraint deviation after constraining: 2.35e-07
> > > Initial temperature: 300.447 K
> > >
> > >
> > > > I wish you the best of luck, I'm out of ideas here.
> > > >
> > > thanks anyway
> > > > Chris.
> > > >
> > > > -- original message --
> > > >
> > > > Hi
> > > >
> > > > Actually you might be right about the domain decomposition.
> > > >
> > > >
> > > > G3 pull.pdo output on a single machine
> > > >
> > > > Focus on the 2nd and 3rd columns, which are the x and y positions
> > > > of the surface: almost *unchanged*, as expected with the
> > > > comm_grps=surface option
> > > > *************
> > > > 20000.000000 3.149521 1.576811 5.770928 7.149521 1.874820 1.676811
> > > > 20000.201172 3.149521 1.576812 5.761463 7.149541 1.880746 1.676812
> > > > 20000.400391 3.149520 1.576813 5.771702 7.149560 1.867692 1.676813
> > > > 20000.601562 3.149519 1.576813 5.797871 7.149579 1.879650 1.676813
> > > > 20000.800781 3.149518 1.576812 5.794115 7.149598 1.887728 1.676812
> > > > 20001.000000 3.149517 1.576813 5.778761 7.149617 1.870823 1.676813
> > > > 20001.201172 3.149518 1.576815 5.783334 7.149638 1.849283 1.676815
> > > > 20001.400391 3.149517 1.576815 5.780031 7.149658 1.877158 1.676815
> > > > .....
> > > > .....
> > > > 39999.402344 3.149799 1.576911 2.249830 9.149739 1.604563 1.676911
> > > > 39999.601562 3.149797 1.576911 2.209385 9.149757 1.622380 1.676911
> > > > 39999.800781 3.149792 1.576911 2.215503 9.149773 1.653246 1.676911
> > > > 40000.000000 3.149791 1.576912 2.221903 9.149791 1.659781 1.676912
> > > >
> > > >
> > > > G4 pull.xvg output (in parallel); the 2nd and 3rd columns, which
> > > > are the x and y positions of the surface, are *changing*,
> > > > contradicting the comm_grps=surface option
> > > >
> > > > *********
> > > > 0.4000 3.1498 2.997 -0.391131 -0.331925
> > > > 0.8000 3.14903 2.99499 -0.391976 -0.346309
> > > > 1.2000 3.14753 2.99846 -0.372158 -0.407621
> > > > 1.6000 3.14635 3.00695 -0.337084 -0.422437
> > > > 2.0000 3.14465 3.00585 -0.306999 -0.474991
> > > > 2.4000 3.14365 3.00408 -0.30164 -0.48047
> > > > 2.8000 3.14338 3.00447 -0.285076 -0.483861
> > > > 3.2000 3.14361 3.00119 -0.226717 -0.460955
> > > > ........
> > > > ..........
> > > > 2838.0000 3.20024 0.662325 1.7185 0.986139
> > > > 2838.4000 3.19435 0.661913 1.74023 1.0404
> > > > 2838.8000 3.18835 0.666171 1.8073 1.02766
> > > > 2839.2000 3.18264 0.658261 1.81687 0.999429
> > > > 2839.6000 3.17766 0.668439 1.82782 1.05693
> > > >
> > > >
> > > > Here is the log file for the G4 (pulling) run in parallel
> > > >
> > > > ********************************
> > > > Initializing Domain Decomposition on 32 nodes
> > > > Dynamic load balancing: auto
> > > > Will sort the charge groups at every domain (re)decomposition
> > > > Initial maximum inter charge-group distances:
> > > > two-body bonded interactions: 0.507 nm, LJ-14, atoms 5186 5197
> > > > multi-body bonded interactions: 0.507 nm, Proper Dih., atoms 5186 5197
> > > > Minimum cell size due to bonded interactions: 0.557 nm
> > > > Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.200 nm
> > > > Estimated maximum distance required for P-LINCS: 0.200 nm
> > > > Guess for relative PME load: 0.20
> > > > Will use 24 particle-particle and 8 PME only nodes
> > > > This is a guess, check the performance at the end of the log file
> > > > Using 8 separate PME nodes
> > > > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > > > Optimizing the DD grid for 24 cells with a minimum initial size of 0.697 nm
> > > > The maximum allowed number of cells is: X 9 Y 4 Z 9
> > > > Domain decomposition grid 4 x 2 x 3, separate PME nodes 8
> > > >
> > > > comm-mode angular will give incorrect results when the comm group
> > > > partially crosses a periodic boundary
> > > > Interleaving PP and PME nodes
> > > > This is a particle-particle only node
> > > >
> > > > Domain decomposition nodeid 0, coordinates 0 0 0
> > > >
> > > > Table routines are used for coulomb: TRUE
> > > > Table routines are used for vdw: FALSE
> > > > Will do PME sum in reciprocal space.
> > > >
> > > > -------- -------- --- Thank You --- -------- --------
> > > > Using a Gaussian width (1/beta) of 0.25613 nm for Ewald
> > > > Cut-off's: NS: 0.8 Coulomb: 0.8 LJ: 0.8
> > > > System total charge: -0.000
> > > > Generated table with 3600 data points for Ewald.
> > > > Tabscale = 2000 points/nm
> > > > Generated table with 3600 data points for LJ6.
> > > > Tabscale = 2000 points/nm
> > > > Generated table with 3600 data points for LJ12.
> > > > Tabscale = 2000 points/nm
> > > > Generated table with 3600 data points for 1-4 COUL.
> > > > Tabscale = 2000 points/nm
> > > > Generated table with 3600 data points for 1-4 LJ6.
> > > > Tabscale = 2000 points/nm
> > > > Generated table with 3600 data points for 1-4 LJ12.
> > > > Tabscale = 2000 points/nm
> > > >
> > > > Enabling SPC water optimization for 3021 molecules.
> > > >
> > > > Configuring nonbonded kernels...
> > > >
> > > >
> > > > Removing pbc first time
> > > >
> > > > Will apply umbrella COM pulling in geometry 'position'
> > > > between a reference group and 1 group
> > > > Pull group 0: 5181 atoms, mass 56947.551
> > > > Pull group 1: 13 atoms, mass 116.120
> > > >
> > > > Initializing Parallel LINear Constraint Solver
> > > >
> > > >
> > > > Linking all bonded interactions to atoms
> > > > There are 85833 inter charge-group exclusions,
> > > > will use an extra communication step for exclusion forces for PME
> > > >
> > > > The initial number of communication pulses is: X 1 Y 1 Z 1
> > > > The initial domain decomposition cell size is: X 1.58 nm Y 1.58 nm Z 2.23 nm
> > > >
> > > > The maximum allowed distance for charge groups involved in
> > > > interactions is:
> > > > non-bonded interactions 0.800 nm
> > > > (the following are initial values, they could change due to box
> > > > deformation)
> > > > two-body bonded interactions (-rdd) 0.800 nm
> > > > multi-body bonded interactions (-rdd) 0.800 nm
> > > > atoms separated by up to 5 constraints (-rcon) 1.575 nm
> > > >
> > > > When dynamic load balancing gets turned on, these settings will
> > > > change to:
> > > > The maximum number of communication pulses is: X 1 Y 1 Z 1
> > > > The minimum size for domain decomposition cells is 0.800 nm
> > > > The requested allowed shrink of DD cells (option -dds) is: 0.80
> > > > The allowed shrink of domain decomposition cells is: X 0.51 Y 0.51 Z 0.36
> > > > The maximum allowed distance for charge groups involved in
> > > > interactions is:
> > > > non-bonded interactions 0.800 nm
> > > > two-body bonded interactions (-rdd) 0.800 nm
> > > > multi-body bonded interactions (-rdd) 0.800 nm
> > > > atoms separated by up to 5 constraints (-rcon) 0.800 nm
> > > >
> > > >
> > > > Making 3D domain decomposition grid 4 x 2 x 3, home cell index 0 0 0
> > > >
> > > > Center of mass motion removal mode is Angular
> > > > We have the following groups for center of mass motion removal:
> > > > 0: DIAM
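PS: For reference, the kind of .mdp setup under discussion would look
roughly like this. This is only a sketch, with a hypothetical pulled
group called "molecule"; check the exact option names and values
against the 4.0 manual.

*******************************************
comm_mode     = Angular
comm_grps     = surface

pull          = umbrella
pull_geometry = position
pull_dim      = N N Y
pull_start    = yes
pull_nstxout  = 10
pull_nstfout  = 10
pull_ngroups  = 1
pull_group0   = surface     ; reference group
pull_group1   = molecule    ; pulled group (hypothetical name)
pull_init1    = 0 0 0
pull_rate1    = 0.0
pull_k1       = 1000        ; kJ mol^-1 nm^-2
*******************************************

In serial this reports the pull coordinates relative to the COM of
pull_group0; the point of this thread is that in parallel that can go
wrong when the group touches a periodic boundary.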
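PPS: If you do post-process the pull output relative to the reference
group, as suggested in the thread above, something like the sketch
below would do it. The column layout is an assumption (here: time in
column 1 and the absolute x/y/z of the pulled group in columns 2-4 of
pullx.xvg, and the reference COM in the same layout, e.g. from
g_traj -com -ox), and K and D0 are placeholders, so check your own
files first.

*******************************************
# relpull.py -- express pull positions relative to the reference group
# and recompute the umbrella force from them.
# Assumptions (placeholders, verify against your own files):
#  * pullx.xvg: time, then absolute x/y/z of the pulled group
#  * ref_com.xvg: time, then x/y/z COM of the reference group,
#    written at the same times (e.g. with g_traj -com -ox)
#  * K is your pull_k1, D0 your pull_init1

K  = 1000.0            # kJ mol^-1 nm^-2
D0 = (0.0, 0.0, 2.0)   # nm

def read_xvg(filename):
    """Return the numeric rows of an .xvg file, skipping # and @ lines."""
    rows = []
    for line in open(filename):
        if line.startswith(('#', '@')) or not line.strip():
            continue
        rows.append([float(v) for v in line.split()])
    return rows

pull = read_xvg("pullx.xvg")
ref  = read_xvg("ref_com.xvg")

for p, r in zip(pull, ref):
    t = p[0]
    d = [p[i] - r[i] for i in (1, 2, 3)]             # relative position
    f = [-K * (di - d0i) for di, d0i in zip(d, D0)]  # umbrella force
    print("%10.4f  %8.4f %8.4f %8.4f  %10.3f %10.3f %10.3f"
          % tuple([t] + d + f))
*******************************************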