Apologies for the double post: Gmail doesn't show the subject line when I hit reply, so I'm resending with an amended subject line in case it's of use to anyone else.<br><div class="gmail_quote">
<br><br><br>
Hi,<br>
<br>
so further to my last email, I have run a quick test on my desktop machine<br>
(4 cores, 12 GB RAM). It seems that when running the Parrinello-Rahman<br>
barostat with domain decomposition (-dd 2 2 1), I'm still getting a<br>
memory leak (this time with GNU compilers). I equilibrated properly<br>
with the Berendsen barostat (although I didn't think this was the problem)<br>
and also added the "constraints = all-bonds" line, but neither has made any<br>
difference to my results (I'm using "[ constraints ]" in my topology file).<br>
<br>
To give an idea of the rate of the leak: initial memory consumption was<br>
0.1% of total memory per process, which rose steadily to 0.5% after<br>
5 minutes and 1.7% after 17 minutes, still climbing. Running with "-pd",<br>
memory usage stays constant at 0.1% of total memory.<br>
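In case anyone wants to reproduce the measurement, this is roughly how I watched the numbers (a minimal sketch; "mdrun_mpi" is the binary name from my job script and "mdrun_mem.log" is just a placeholder):<br>

```shell
#!/bin/bash
# Print one "PID RSS(kB) %MEM" line per matching process each time it
# is called; the argument is whatever your mdrun binary is called.
sample_mem() {
    ps -C "$1" -o pid=,rss=,%mem=
}

# Pass --watch to log mdrun_mpi's memory use every 60 s until interrupted.
if [ "${1:-}" = "--watch" ]; then
    while true; do
        { date +%T; sample_mem mdrun_mpi; } >> mdrun_mem.log
        sleep 60
    done
fi
```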
<br>
The system is 328 TIP4P/ice waters + 64 all-atom methanes. The problem has<br>
occurred for me on different architectures and with different compilers<br>
(and different system sizes). It would be good if anybody familiar with the<br>
source could take a look, or if anybody knows of compiler flags that would<br>
prevent the leak.<br>
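For anyone who does want to dig in, this is the kind of leak check I had in mind (a sketch only, assuming a serial, non-MPI mdrun build, since valgrind output is much easier to read without MPI; set nsteps to a few thousand in the .mdp first so the run terminates, and note that topol.tpr and vg.log are placeholder names):<br>

```shell
#!/bin/bash
# Build the valgrind command line; --leak-check=full reports the
# allocation stacks for anything lost or still reachable at exit.
leak_check_cmd() {
    echo "valgrind --leak-check=full --track-origins=yes" \
         "--log-file=vg.log mdrun -s topol.tpr"
}

# Only actually run it when both tools are on PATH.
if command -v valgrind >/dev/null && command -v mdrun >/dev/null; then
    eval "$(leak_check_cmd)"
    grep "definitely lost" vg.log
fi
```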
<br>
Thanks,<br>
Steve<br>
<br>
<br>
<br>
On 9 March 2012 13:32, Stephen Cox <<a href="mailto:stephen.cox.10@ucl.ac.uk">stephen.cox.10@ucl.ac.uk</a>> wrote:<br>
<br>
><br>
><br>
> On 9 March 2012 13:03, <a href="mailto:gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a> <<br>
> <a href="mailto:gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>> wrote:<br>
><br>
>> Message: 1<br>
>> Date: Fri, 09 Mar 2012 23:42:33 +1100<br>
>> From: Mark Abraham <<a href="mailto:Mark.Abraham@anu.edu.au">Mark.Abraham@anu.edu.au</a>><br>
>> Subject: Re: [gmx-users] GROMACS stalls for NPT simulation when using<br>
>> -npme and -dd flags<br>
>> To: Discussion list for GROMACS users <<a href="mailto:gmx-users@gromacs.org">gmx-users@gromacs.org</a>><br>
>> Message-ID: <<a href="mailto:4F59FAB9.6010805@anu.edu.au">4F59FAB9.6010805@anu.edu.au</a>><br>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed<br>
>><br>
>> On 9/03/2012 9:43 PM, Stephen Cox wrote:<br>
>> > Dear users,<br>
>> ><br>
>> > I'm trying to run an isotropic NPT simulation on a cubic cell<br>
>> > containing TIP4P/ice water and methane. I'm using the<br>
>> > Parrinello-Rahman barostat. I've been playing around with the<br>
>> > different decomposition flags of mdrun to get better performance and<br>
>> > scaling and have found that the standard -npme (half number of cores)<br>
>> > works pretty well. I've also tried using the -dd flags, and I appear<br>
>> > to get decent performance and scaling. However, after a few<br>
>> > nanoseconds (corresponding to about 3 hours run time), the program<br>
>> > just stalls; no output and no error messages. I realise NPT may cause<br>
>> > domain decomposition some issues if the cell vectors vary wildly, but<br>
>> > this isn't happening in my system.<br>
>> ><br>
>> > Has anybody else experienced issues with domain decomposition and NPT<br>
>> > simulations? If so, are there any workarounds? For the moment, I've<br>
>> > had to resort to using -pd, which is giving relatively poor<br>
>> > performance and scaling, but at least it isn't dying!<br>
>> ><br>
>> > I'm using GROMACS 4.5.5 built with the Intel compiler (I followed the<br>
>> > instructions online, with static linking) and run with the command:<br>
>> ><br>
>> > #!/bin/bash -f<br>
>> > # ---------------------------<br>
>> > #$ -V<br>
>> > #$ -N test<br>
>> > #$ -S /bin/bash<br>
>> > #$ -cwd<br>
>> > #$ -l vf=2G<br>
>> > #$ -pe ib-ompi 32<br>
>> > #$ -q infiniband.q<br>
>> ><br>
>> > mpirun mdrun_mpi -cpnum -cpt 60 -npme 16 -dd 4 2 2<br>
>> ><br>
>> > Below is my grompp.mdp.<br>
>> ><br>
>> > Thanks,<br>
>> > Steve<br>
>> ><br>
>> > P.S. I think there may be a memory leak that occurs with domain<br>
>> > decomposition under NPT. I seem to remember seeing this happen on my<br>
>> > desktop and my local cluster, but I don't see it with NVT simulations.<br>
>> > This would be consistent with the lack of an error message: I've just<br>
>> > run a short test and the memory usage was climbing steadily.<br>
>> ><br>
>> > ; run control<br>
>> > integrator = md<br>
>> > dt = 0.002<br>
>> > nsteps = -1<br>
>> > comm_mode = linear<br>
>> > nstcomm = 10<br>
>> ><br>
>> > ; energy minimization<br>
>> > emtol = 0.01<br>
>> > emstep = 0.01<br>
>> ><br>
>> > ; output control<br>
>> > nstxout = 0<br>
>> > nstvout = 0<br>
>> > nstfout = 0<br>
>> > nstlog = 0<br>
>> > nstcalcenergy = 2500<br>
>> > nstenergy = 2500<br>
>> > nstxtcout = 2500<br>
>> ><br>
>> > ; neighbour searching<br>
>> > nstlist = 1<br>
>> > ns_type = grid<br>
>> > pbc = xyz<br>
>> > periodic_molecules = no<br>
>> > rlist = 0.90<br>
>> ><br>
>> > ; electrostatics<br>
>> > coulombtype = pme<br>
>> > rcoulomb = 0.90<br>
>> ><br>
>> > ; vdw<br>
>> > vdwtype = cut-off<br>
>> > rvdw = 0.90<br>
>> > dispcorr = ener<br>
>> ><br>
>> > ; ewald<br>
>> > fourierspacing = 0.1<br>
>> > pme_order = 4<br>
>> > ewald_geometry = 3d<br>
>> > optimize_fft = yes<br>
>> ><br>
>> > ; temperature coupling<br>
>> > tcoupl = nose-hoover<br>
>> > nh-chain-length = 10<br>
>> > tau_t = 2.0<br>
>> > ref_t = 255.0<br>
>> > tc_grps = system<br>
>> ><br>
>> > ; pressure coupling<br>
>> > pcoupl = parrinello-rahman<br>
>> > pcoupltype = isotropic<br>
>> > ref_p = 400.0<br>
>> > tau_p = 2.0<br>
>> > compressibility = 6.5e-5<br>
>> ><br>
>> > ; constraints<br>
>> > constraint_algorithm = shake<br>
>> > shake_tol = 0.0001<br>
>> > lincs_order = 8<br>
>> > lincs_iter = 2<br>
>> ><br>
>> > ; velocity generation<br>
>> > gen_vel = yes<br>
>> > gen_temp = 255.0<br>
>> > gen_seed = -1<br>
>><br>
>> You're generating velocities and immediately using a barostat that is<br>
>> unsuitable for equilibration.<br>
><br>
><br>
> Sorry, this was unclear: the system has already been equilibrated with NVT<br>
> and then the Berendsen barostat before switching to Parrinello-Rahman. I<br>
> use the generate-velocities option to give me uncorrelated trajectories<br>
> (I'm investigating a stochastic process and want statistics). I appreciate<br>
> the concern about poor equilibration, but I'm fairly sure that isn't the<br>
> case: in my experience, poor equilibration usually causes a fairly prompt<br>
> crash, with a lot of error messages and step.pdb files. Furthermore, the<br>
> cell volume is stable over the time I do manage to simulate, which<br>
> suggests the equilibration is OK.<br>
><br>
><br>
>> You're using an integration step that<br>
>> requires constraints=all-bonds but I don't see that.<br>
><br>
><br>
> Could you clarify this point for me please? Thanks.<br>
><br>
><br>
>> You may have better<br>
>> stability if you equilibrate with Berendsen barostat and then switch.<br>
>> I've seen no other reports of memory usage growing without bounds, but<br>
>> if you can observe it happening after choosing a better integration<br>
>> regime then it suggests a code problem that wants fixing.<br>
>><br>
><br>
> I can run some tests on my desktop with a smaller system and report if I<br>
> see memory loss.<br>
><br>
><br>
>><br>
>> Mark<br>
>><br>
>><br>
>><br>
> Thanks for your quick response<br>
><br>
><br></div><br>