Hi,

I think you can replace all state-> by state_global-> within the { } block
after the if statement on line 929 of md.c. Then I think it should work for
serial, PD and DD.

Mark, could you test whether this works and report back?

Thanks,

Berk
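Concretely, a minimal sketch of what that block might look like after the
substitution, assuming the if statement at line 929 is the one that copies
the rerun frame into the state; the enclosing condition, the rerun_fr.bStep
test and the box copy are assumptions here, since the actual code is not
quoted in this thread:

    /* hypothetical sketch only -- not the verbatim md.c code */
    if (bRerunMD && rerun_fr.bStep)
    {
        /* copy the rerun frame into the *global* state, so that
           dd_partition_system() can redistribute it to all nodes */
        for (i = 0; i < state_global->natoms; i++)
        {
            copy_rvec(rerun_fr.x[i], state_global->x[i]);
        }
        copy_mat(rerun_fr.box, state_global->box);
    }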
----------------------------------------
From: gmx3@hotmail.com
To: gmx-users@gromacs.org
Subject: RE: [gmx-users] Identical energies generated in a rerun calculation ... but ...
Date: Fri, 24 Apr 2009 11:45:53 +0200
Hi,

I can fix it, but I am currently very busy, so it might take some time.

Berk

> Date: Fri, 24 Apr 2009 17:56:56 +1000
> From: Mark.Abraham@anu.edu.au
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] Identical energies generated in a rerun calculation ... but ...
>
> Mark Abraham wrote:
> >
> > OK, I have some confirmation of a possible bug here. Using 4.0.4 to do
> > reruns on the same positions-only NPT peptide+water trajectory with the
> > same run input file:
> >
> > a) compiled without MPI, a single-processor rerun worked correctly,
> > including "zero" KE and temperature at each frame
> >
> > b) compiled with MPI, a single-processor run worked correctly, including
> > zero KE and temperature, and agreed with a) to within machine precision
> >
> > c) compiled with MPI, a 4-processor run worked incorrectly: an
> > approximately-correct temperature and a plausible positive KE were
> > reported, all PE terms were identical, to about machine precision, to
> > those of the first step of a) and b), and the reported pressure was
> > different.
> >
> > Thus it seems that a multi-processor mdrun is not updating the structure
> > for subsequent steps in the loop over structures, and/or is getting some
> > KE from somewhere that a single-processor calculation is not.
> >
> > I'll step through c) with a debugger tomorrow.
>
> d) compiled with MPI, a 4-processor run using particle decomposition
> worked correctly, agreeing with a).
>
> Further, c) has the *same* plausible positive KE at each step.
>
> From stepping through a run, I think the rerun DD problem arises because
> a rerun loads the data from the rerun trajectory into rerun_fr and later
> copies it into state, not into state_global. state_global is initialized
> from the .tpr file (which *has* velocities), which is used for the DD
> initialization, and state_global is never subsequently updated. So, for
> each rerun step, the same .tpr state gets propagated, which leads to all
> the symptoms I describe above. The KE comes from the velocities in the
> .tpr file, and is thus constant.
>
> So, a preliminary work-around is to use mdrun -rerun -pd to get particle
> decomposition.
>
> I tried to hack a fix for the DD code. It seemed that using
>
> for (i = 0; i < state_global->natoms; i++)
>     copy_rvec(rerun_fr.x[i], state_global->x[i]);
>
> before about line 1060 of do_md() in src/kernel/md.c should do the
> trick, since with bMasterState set for a rerun, dd_partition_system()
> should propagate state_global to the right places. However, I got a
> segfault in that copy_rvec with i == 0, despite state_global->x being
> allocated and of the right dimensions according to Totalview's memory
> debugger.
>
> I'll file a bugzilla in any case.
>
> Mark
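For what it's worth, here is the hack Mark describes written out with
consistent pointer syntax and guarded so it only runs on the master node.
The MASTER(cr) guard is only a guess at avoiding the segfault
(state_global's arrays may not be valid on non-master nodes under DD) and
is not something confirmed in this thread:

    /* hypothetical variant of the hack above, placed just before the
       dd_partition_system() call that has bMasterState set; the
       MASTER(cr) guard is an assumption, not verified */
    if (MASTER(cr))
    {
        for (i = 0; i < state_global->natoms; i++)
        {
            copy_rvec(rerun_fr.x[i], state_global->x[i]);
        }
    }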