So I've been able to get access to Gromacs v4.0.4 on another supercomputer cluster. However, I've been told that there are MPI compatibility issues with Gromacs v4. Also, I'm using the MARTINI force field with Gromacs, but I'm not sure how well tested it is with v4.
2009/8/2 Mark Abraham <Mark.Abraham@anu.edu.au>:
> Justin A. Lemkul wrote:
>> rainy908@yahoo.com wrote:
>>> Hi Mark,
>>>
>>> I originally set $NSLOTS to 4 in my script (I didn't include the header portion in my previous email). Are you saying that even though I specified four processes with "-np 4", my job is still running on a single processor, given the way I've written the script?
>>
>> Correct. As I originally indicated, mdrun must be called as an mpirun process, i.e.:
>>
>> mpirun -np 4 mdrun -options
>>
>> I believe I once read (in a previous thread) that the -np option of mdrun is actually ignored. My memory could be failing, though :)
>
> Something like that, yes. mdrun can certainly find out the process count from the mpirun environment.
>
> Mark
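To spell out the pattern being described for GROMACS 3.3.x, here is a minimal sketch; the file names are illustrative, not from this thread. grompp runs serially, and mdrun is then launched through mpirun, which is what actually creates the processes.

# Preprocess serially; in GROMACS 3.3.x the -np given to grompp fixes
# the number of ranks the resulting .tpr file expects.
grompp -np 4 -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Launch mdrun through mpirun: mpirun creates the 4 processes, and
# mdrun picks the count up from the MPI environment (mdrun's own -np
# is reportedly ignored for process creation).
mpirun -np 4 mdrun -s topol.tpr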
>>> I'm pretty new to using GROMACS, so I figured it would be faster to do a test run with the GROMACS 3.3.1 that's currently available on the cluster I'm using rather than to compile the new version 4 that is out. Thanks for your recommendation, though.
>>
>> It would probably be best to use the most recent version to take advantage of all the nice new features, among which is a major speed upgrade. Getting used to Gromacs is not version-dependent. Most tutorial material is broadly applicable.
>>
>> -Justin
>>>
>>> Lili
>>>
>>> 2009/8/2 Mark Abraham <Mark.Abraham@anu.edu.au>:
>>>> rainy908@yahoo.com wrote:
>>>>> Justin,
>>>>>
>>>>> You are correct about my attempt to run grompp using MPIRUN: it doesn't work. Actually, I realized that I was using a version of Gromacs that wasn't compiled for MPI! Gromacs-3.3.1-dev is compiled for MPI, however. The submission script that worked is as follows:
>>>>>
>>>>> # Define locations of MPIRUN, MDRUN
>>>>> MPIRUN=/usr/local/topspin/mpi/mpich/bin/mpirun
>>>>> MDRUN=/share/apps/gromacs-3.3.1-dev/bin/mdrun
>>>>>
>>>>> cd /nas2/lpeng/nexil/gromacs/cg_setup
>>>>>
>>>>> # Run MD
>>>>> $MDRUN -v -nice 0 -np $NSLOTS -s md3.tpr -o md3.trr -c confout.gro -g md3.log -x md3.xtc
>>>>>
>>>>> ...This was carried out after I ran grompp on a single node.
>>>>
>>>> That still won't run a parallel mdrun. Justin indicated the correct approach.
>>>>
>>>> Also, unless you know a good reason for using a version of GROMACS that's three(?) years old, use an up-to-date one. It'll be heaps faster.
>>>>
>>>> Mark
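To make that concrete, a corrected version of the script above would launch mdrun through $MPIRUN instead of calling it directly. This is a sketch following the advice in this thread, reusing the paths from the original script:

# Define locations of MPIRUN, MDRUN (paths from the original script)
MPIRUN=/usr/local/topspin/mpi/mpich/bin/mpirun
MDRUN=/share/apps/gromacs-3.3.1-dev/bin/mdrun

cd /nas2/lpeng/nexil/gromacs/cg_setup

# Run MD: $MPIRUN creates the $NSLOTS processes. Calling $MDRUN
# directly, as before, leaves the job on a single processor. mdrun's
# own -np is kept only to match the value given to grompp; it is
# reportedly ignored for process creation.
$MPIRUN -np $NSLOTS $MDRUN -np $NSLOTS -v -nice 0 -s md3.tpr -o md3.trr -c confout.gro -g md3.log -x md3.xtc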