Hi,

There could be a bug in Gromacs 4.0.2 that causes problems when running in parallel. We are investigating this.

For the moment, a much simpler way to run shorter simulations is the -maxh option of mdrun. You don't need tpbconv either; you can read a checkpoint file directly with the -cpi option (see the sketch after the quoted message below).

PS: you might want to use mdrun -deffnm dppc_01

Berk

> From: ydubief@uvm.edu
> To: gmx-users@gromacs.org
> Date: Mon, 5 Jan 2009 08:10:49 -0500
> Subject: [gmx-users] Gromacs 4.0.2 hangs?
>
> Dear all,
>
> I have some difficulties running mdrun_mpi for large numbers of
> iterations, typically > 40000. The code hangs, not necessarily at a
> fixed iteration, for all configurations I run, and this behavior shows
> up under both Linux and Mac OS. The simulations range from 10^4 to 10^6
> coarse-grained atoms, and I try to keep the amount of data generated
> per run much smaller than 1 GB. I have worked around this issue by running
> 25000-iteration simulations with tpbconv restarts, as follows:
>
> mpiexec -np 256 mdrun_mpi -nosum -v -s dppc0_1.tpr -o dppc0_1.trr -c dppc0_1.gro -e dppc0_1.edr -x dppc0_1.xtc
> tpbconv -s dppc0_1.tpr -f dppc0_1.trr -e dppc0_1.edr -o dppc0_2.tpr -extend 250.0
>
> With such scripts, I have been able to generate 10^5 to 10^6
> iterations without any problem. I was wondering if anyone has
> experienced similar problems and whether I am missing something.
>
> I have pretty much ruled out a problem with MPI, since I have
> thoroughly tested these computers with other MPI codes. I am now
> wondering if there might be a problem with the output files.
>
> I run Gromacs 4.0.2 on a Linux cluster (quad-core processors, Myrinet
> and MPICH, compiled with gcc, single precision) on up to 256 processors,
> on a Mac Pro with two quad-core CPUs, and on a dual-core MacBook using
> the Fink package for Open MPI. All these computers have enough available
> disk space for any simulation I run. A typical simulation is coarse-grained
> MD using MARTINI with .mdp files obtained from Marrink's website.
>
> Best,
>
> Yves
>
> --
> Yves Dubief, Ph.D., Assistant Professor
> Graduate program coordinator
> University of Vermont, School of Engineering
> Mechanical Engineering Program
> 201 D Votey Bldg, 33 Colchester Ave, Burlington, VT 05405
> Tel: (1) 802 656 1930 Fax: (1) 802 656 3358
> Also:
> Vermont Advanced Computing Center
> 206 Farrell Hall, 210 Colchester Ave, Burlington, VT 05405
> Tel: (1) 802 656 9830 Fax: (1) 802 656 9892
> email: ydubief@uvm.edu
> web: http://www.uvm.edu/~ydubief/
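
A minimal sketch of the restart workflow described above, assuming Gromacs 4.0.x option names and reusing Berk's suggested file prefix dppc_01 (the 24-hour -maxh value and the -np count are placeholders, not values from the thread):

# First run: stop cleanly just before the 24 h wall-clock limit;
# mdrun writes a checkpoint (dppc_01.cpt with -deffnm) as it runs.
mpiexec -np 256 mdrun_mpi -deffnm dppc_01 -maxh 24

# Continuation: read the checkpoint with -cpi and carry on from the
# same .tpr; no tpbconv step is needed between segments.
mpiexec -np 256 mdrun_mpi -deffnm dppc_01 -cpi dppc_01.cpt -maxh 24

Repeating the second command in a job script keeps extending the same trajectory segment by segment until the number of steps in the .tpr is reached.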