Hi,

This is a silly bug with Nose-Hoover and pbc = no. I have fixed it for 4.0.8 (if we ever release that).

To fix it you only need to move one brace up 4 lines in src/mdlib/init.c, as in the patch below (a sketch of how that block ends up, and an mdp example of the workaround, follow the quoted message). Or you can use the v-rescale thermostat instead.

Berk

--- a/src/mdlib/init.c
+++ b/src/mdlib/init.c
@@ -119,9 +119,9 @@ static void set_state_entries(t_state *state,t_inputrec *ir,int nnodes)
     if (ir->epc != epcNO) {
       state->flags |= (1<<estPRES_PREV);
     }
-    if (ir->etc == etcNOSEHOOVER) {
-      state->flags |= (1<<estNH_XI);
-    }
+  }
+  if (ir->etc == etcNOSEHOOVER) {
+    state->flags |= (1<<estNH_XI);
   }
   if (ir->etc == etcNOSEHOOVER || ir->etc == etcVRESCALE) {
     state->flags |= (1<<estTC_INT);


> Date: Wed, 10 Mar 2010 14:16:38 +0000
> From: gmelaugh01@qub.ac.uk
> To: gmx-users@gromacs.org
> Subject: [gmx-users] problems with non pbc simulations in parallel
>
> Hi all
>
> I have installed gromacs-4.0.7-parallel with Open MPI. I have
> successfully run a few short simulations on 2, 3 and 4 nodes using pbc. I
> am now interested in simulating a cluster of 32 molecules with no pbc in
> parallel, and the simulation does not proceed. I have set my box vectors
> to 0 0 0 in the conf.gro file, set pbc = no in the mdp file, and used
> particle decomposition. The feedback I get from the following command
>
> nohup mpirun -np 2 /local1/gromacs-4.0.7-parallel/bin/mdrun -pd -s &
>
> is
>
> Back Off! I just backed up md.log to ./#md.log.1#
> Reading file topol.tpr, VERSION 4.0.7 (single precision)
> starting mdrun 'test of 32 hexylcage molecules'
> 1000 steps, 0.0 ps.
> [emerald:22662] *** Process received signal ***
> [emerald:22662] Signal: Segmentation fault (11)
> [emerald:22662] Signal code: Address not mapped (1)
> [emerald:22662] Failing at address: (nil)
> [emerald:22662] [ 0] /lib64/libpthread.so.0 [0x7fbc17eefa90]
> [emerald:22662] [ 1] /local1/gromacs-4.0.7-parallel/bin/mdrun(nosehoover_tcoupl+0x74) [0x436874]
> [emerald:22662] [ 2] /local1/gromacs-4.0.7-parallel/bin/mdrun(update+0x171) [0x4b2311]
> [emerald:22662] [ 3] /local1/gromacs-4.0.7-parallel/bin/mdrun(do_md+0x2608) [0x42dd38]
> [emerald:22662] [ 4] /local1/gromacs-4.0.7-parallel/bin/mdrun(mdrunner+0xe33) [0x430973]
> [emerald:22662] [ 5] /local1/gromacs-4.0.7-parallel/bin/mdrun(main+0x5b8) [0x431128]
> [emerald:22662] [ 6] /lib64/libc.so.6(__libc_start_main+0xe6) [0x7fbc17ba6586]
> [emerald:22662] [ 7] /local1/gromacs-4.0.7-parallel/bin/mdrun [0x41e1e9]
> [emerald:22662] *** End of error message ***
> --------------------------------------------------------------------------
> mpirun noticed that process rank 1 with PID 22662 on node emerald exited
> on signal 11 (Segmentation fault).
>
> P.S. I have run several of these non-pbc simulations with the same system
> in serial and have never experienced a problem. Has anyone ever come
> across this sort of problem before? If so, could you please provide
> some advice?
>
> Many Thanks
>
> Gavin
>
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
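For reference, here is roughly how that part of set_state_entries() in src/mdlib/init.c reads once the brace has been moved. This is only a sketch: the enclosing ir->ePBC != epbcNONE test is my reading of why pbc = no used to skip the Nose-Hoover flag (and hence why nosehoover_tcoupl dereferences a NULL pointer in the backtrace above); only the lines that also appear in the diff are taken from the source.

  /* set_state_entries(), after the brace move (sketch).
   * Assumption: the enclosing block tests ir->ePBC != epbcNONE, which is why
   * pbc = no used to skip estNH_XI and left the xi array unallocated. */
  if (ir->ePBC != epbcNONE) {
    /* box/pressure related state entries */
    if (ir->epc != epcNO) {
      state->flags |= (1<<estPRES_PREV);
    }
  }  /* <- the moved brace: the PBC-only block now ends here */
  if (ir->etc == etcNOSEHOOVER) {
    /* now set independently of pbc, so the Nose-Hoover xi state is allocated */
    state->flags |= (1<<estNH_XI);
  }
  if (ir->etc == etcNOSEHOOVER || ir->etc == etcVRESCALE) {
    state->flags |= (1<<estTC_INT);
  }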
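And if rebuilding mdrun is not convenient, the v-rescale workaround is just a thermostat change in the .mdp file. The group, coupling time and reference temperature below are only placeholders; keep whatever your serial runs used:

  ; workaround: replace the Nose-Hoover thermostat with v-rescale
  tcoupl  = v-rescale
  tc-grps = System   ; placeholder, use your own coupling groups
  tau-t   = 0.5      ; placeholder coupling time (ps)
  ref-t   = 300      ; placeholder reference temperature (K)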