Hi Mark,<br><br>Thank you for your comments! They are very helpful.<br><br>I still have several questions regarding your comments:<br>1. Which constraints should I apply for 2fs stepsize? hbonds, all-bonds, h-angles, all-angles<br>
I am simulating molecules that contain carbon nanotubes and polymer chains.<br>May I have your suggestions based on your experience? Thanks.<br><br>2.If I understand correctly, it is enough to use the default value for xtc-precision = 1000. <br>
I am not quite understand about this. Does 1000 means that it is calculated with precision of 0.001 nm? and 1000000 for 1e-6 nm? Thanks.<br><br>3. Regarding the MPI, may I have your suggestions on any illustrative examples of parallelization with MPI for gromacs?<br>
How can I know if the cluster support MPI? The cluster I am using have linux system.<br> It seems from the FAQ that I need to re-configure the Gromacs to enable mpi. Do I need additional software/program to support the parallelization? Can I implement the parallelization just like I implement the single processor task? or with several more bash commands? Thanks.<br>
<br>Thank you very much for your time and help!<br><br>Young<br><br><br><br><br><div class="gmail_quote">On Mon, May 24, 2010 at 1:02 PM, Mark Abraham <span dir="ltr"><<a href="mailto:mark.abraham@anu.edu.au" target="_blank">mark.abraham@anu.edu.au</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div>----- Original Message -----<br>
From: Yan Gao <<a href="mailto:y1gao@ucsd.edu" target="_blank">y1gao@ucsd.edu</a>><br>
Date: Tuesday, May 25, 2010 3:02<br>
Subject: [gmx-users] large sim box<br>
To: Discussion list for GROMACS users <<a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a>><br>
<br>
> Hi There,<br>
><br>
> I want to use a large simulation box. I did a trial<br>
with 15 * 15 * 15 nm box for 100 steps. genbox_d generates 110k water<br>
molecules, or 330k atoms.<br>
><br>
> It looks like that gromacs can run that<br>
large number of atoms. I am sure it will take a long long long time.<br>
However if I really want to simulate it, is there any way that I can<br>
increase the speed? (except using a better cpu, or paralleling it)<br>
Thanks.<br>
<br>
</div>You can control the cost through choice of algorithm and implementation. That means you need to learn how they work and whether some trade-offs are suitable for you. That's going to mean lots of reading, and some experimentation on more tractable systems. Learn to walk before you try to run! However, the only serious way to approach a system this large is with parallelization. Also, reconsider your use of double precision.<br>
<div><br>
> My second question is that: If I have to use clusters or super<br>
computer, which one is better? and, do I need a particular<br>
software/program to paralleling it? Thanks.<br>
<br>
</div>GROMACS does parallelization using MPI, which will be available on any machine you can find. There are platforms for which GROMACS does not have the specially-optimized non-bonded inner loops - avoid such platforms if you have the choice. You should read the 2008 GROMACS JCTC paper.<br>
<div><br>
><br>
> I put my .mdp below:<br>
> integrator = md<br>
> dt <br>
= 0.002<br>
> ; duration 2000 ps<br>
> nsteps = 100<br>
> comm_mode <br>
= linear<br>
> nstcomm = 1<br>
> ; dump config every 300 fs<br>
><br>
nstxout = 10<br>
> nstvout = 10<br>
> nstfout <br>
= 10<br>
<br>
</div>Writing output of all of energies, forces and velocities this often is a waste of time in production simulations. Adjacent data points 10 MD steps apart will be strongly correlated, even if you plan to use the force and/or velocity data. Consider the needs of your analysis, and probably plan to use nstxtcout instead of any of these.<br>
<div><br>
> nstcheckpoint = 100<br>
> nstlog <br>
= 10<br>
> nstenergy = 10<br>
> nstxtcout = 10<br>
><br>
xtc-precision = 1000000<br>
<br>
</div>Read what this does.<br>
<div><br>
> nstlist = 1<br>
> ns_type <br>
= grid<br>
> pbc = xyz<br>
> rlist =<br>
1.0 ;1.0<br>
> coulombtype = PME<br>
> rcoulomb <br>
= 1.0 ;1.0<br>
><br>
fourierspacing = 0.2 ;0.1<br>
<br>
</div>That will noticeably reduce the cost of PME, but its effect on accuracy is not well known.<br>
<div><br>
> pme_order = 4<br>
> ewald_rtol<br>
= 1e-5<br>
> optimize_fft = yes<br>
> vdwtype =<br>
cut-off<br>
> rvdw = 1.0 ;1.0<br>
> tcoupl <br>
= Nose-Hoover<br>
><br>
tc_grps = system<br>
<br>
</div>This is often a poor choice. grompp probably told you that.<br>
<div><br>
> tau_t = 0.5<br>
> ref_t <br>
= 300.0<br>
> Pcoupl = no<br>
> annealing = no<br>
> gen_vel <br>
= no<br>
> gen_temp = 300.0<br>
><br>
gen_seed = 173529<br>
> constraints = none<br>
<br>
</div>You must use constraints if you wish a 2fs timestep.<br>
<div><br>
> ;energy_excl <br>
= C_H C_H<br>
> constraint_algorithm = lincs<br>
> unconstrained_start <br>
= no<br>
> lincs_order = 4<br>
> lincs_iter = 1<br>
<br>
</div>Mark<br>
<font color="#888888">--<br>
gmx-users mailing list <a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a><br>
<a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-users</a><br>
Please search the archive at <a href="http://www.gromacs.org/search" target="_blank">http://www.gromacs.org/search</a> before posting!<br>
Please don't post (un)subscribe requests to the list. Use the<br>
www interface or send it to <a href="mailto:gmx-users-request@gromacs.org" target="_blank">gmx-users-request@gromacs.org</a>.<br>
Can't post? Read <a href="http://www.gromacs.org/mailing_lists/users.php" target="_blank">http://www.gromacs.org/mailing_lists/users.php</a><br>
</font></blockquote></div><br><br clear="all"><br>-- <br>Yan Gao<br>Jacobs School of Engineering<br>University of California, San Diego<br>Tel: 858-952-2308<br>Email: <a href="mailto:Yan.Gao.2001@gmail.com" target="_blank">Yan.Gao.2001@gmail.com</a><br>