Hi Mark,<br><br>Yes, that's one way to go about it, but it would have been great if I could get a rough estimate first.<br><br>Thank you.<br><br>Amit<br><br><br><div class="gmail_quote">On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham <span dir="ltr"><<a href="mailto:Mark.Abraham@anu.edu.au">Mark.Abraham@anu.edu.au</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="im">On 3/03/2010 12:53 PM, Amit Choubey wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi Mark,<br>
<br>
I quoted the memory usage requirements from a presentation by Berk<br>
Hess; the link follows:<br>
<br>
<br>
<a href="http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf" target="_blank">http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf</a><br>
<br>
In that presentation, on pages 27-28, Berk does talk about memory<br>
usage, but I am not sure whether he was referring to something more specific.<br>
<br>
My system contains only SPC water. I want Berendsen temperature<br>
coupling and Coulomb interactions treated with a reaction field (see<br>
the .mdp sketch after this quoted message).<br>
<br>
I just want a rough estimate of how large a water system can be<br>
simulated on our supercomputers.<br>
</blockquote>
<br></div>
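For concreteness, the settings Amit names might look like this in an .mdp file. This is a sketch only: the cutoff radii, reaction-field dielectric, and coupling constants are illustrative placeholders, not recommendations.<br>
<pre>
; Reaction-field electrostatics + Berendsen T-coupling for SPC water
integrator   = md
dt           = 0.002          ; 2 fs time step
nsteps       = 5000           ; keep short for a sizing probe
coulombtype  = Reaction-Field
epsilon_rf   = 78             ; dielectric beyond the cutoff (illustrative)
rlist        = 1.0            ; neighbour-list radius, nm (illustrative)
rcoulomb     = 1.0            ; Coulomb cutoff, nm (illustrative)
rvdw         = 1.0            ; van der Waals cutoff, nm (illustrative)
tcoupl       = berendsen
tc-grps      = System
tau_t        = 0.1            ; ps (illustrative)
ref_t        = 300            ; K
</pre><br>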
Try increasingly large systems until one runs out of memory. There's your answer.<br>
<br>
Mark<br>
<br>
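Mark's trial-and-error approach is easy to automate. Below is a minimal sketch in Python, assuming 4.x-era GROMACS tool names (genconf, grompp, mdrun) on the PATH, a small equilibrated box conf.gro of 216 SPC molecules, a template topol.top whose [ molecules ] section reads "SOL 216", and an rf.mdp like the sketch above; all file names are placeholders.<br>
<pre>
# probe_size.py -- grow an SPC water box until GROMACS runs out of memory.
import subprocess

def run(cmd):
    """Run one external command; True on exit code 0, False otherwise."""
    print("running:", " ".join(cmd))
    return subprocess.call(cmd) == 0

BASE_MOLECULES = 216      # molecules in the starting box (assumed)
n = 1
while True:
    n *= 2                # double each linear box dimension per round
    box, tpr = f"big_{n}.gro", f"big_{n}.tpr"
    # Replicate the equilibrated unit box n x n x n times.
    if not run(["genconf", "-f", "conf.gro", "-o", box,
                "-nbox", str(n), str(n), str(n)]):
        break
    # Scale the water count in the topology to match the replicated box.
    with open("topol.top") as src, open("topol_scaled.top", "w") as dst:
        for line in src:
            dst.write(f"SOL {BASE_MOLECULES * n**3}\n"
                      if line.startswith("SOL") else line)
    # Preprocess and run; the first failure marks the practical limit.
    if not run(["grompp", "-f", "rf.mdp", "-c", box,
                "-p", "topol_scaled.top", "-o", tpr]):
        break
    if not run(["mdrun", "-s", tpr]):
        break

print(f"failed at {n}x{n}x{n} copies (~{3 * BASE_MOLECULES * n**3:,} atoms);"
      f" the largest working factor was {n // 2}")
</pre>
Whether grompp or mdrun fails first is itself informative: grompp processes the whole system at once, so a grompp failure points at whole-system memory rather than the per-process share.<br>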
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="im">
On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham <<a href="mailto:mark.abraham@anu.edu.au" target="_blank">mark.abraham@anu.edu.au</a>> wrote:<br></div><div class="im">
<br>
----- Original Message -----<br></div><div class="im">
From: Amit Choubey <<a href="mailto:kgp.amit@gmail.com" target="_blank">kgp.amit@gmail.com</a>><br>
Date: Saturday, February 27, 2010 10:17<br>
Subject: Re: [gmx-users] gromacs memory usage<br>
To: Discussion list for GROMACS users <<a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a>><br></div><div class="im">
<br>
> Hi Mark,<br>
> We have a few nodes with 64 GB of memory and many others with 16 GB.<br>
I am attempting a simulation of around 100 million atoms.<br>
<br>
Well, try some smaller systems and work upwards to see if you have a<br>
limit in practice. 50K atoms can be run in less than 32 GB over 64<br>
processors. You didn't say whether your simulation system can run on<br>
1 processor... if it does, then you can be sure the problem really<br>
is related to parallelism.<br>
<br>
> I did find some document which says one needs (50 bytes)*NATOMS on the<br>
master node, and also<br>
> (100+4*(no. of atoms in cutoff)*(NATOMS/nprocs) for the compute<br>
nodes. Is this true?<br>
<br>
In general, no. It will vary with the simulation algorithm you're<br>
using. Quoting such a formula without attributing the source or<br>
describing the context is next to useless. You also dropped a<br>
parenthesis.<br>
<br>
Mark<br>
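<br>
Mark's caveat granted, the quoted numbers can still be run as pure arithmetic. A sketch, assuming the dropped parenthesis closes after the cutoff term and that roughly 400 atoms fall inside a typical ~1 nm cutoff sphere in liquid water; both are assumptions, not sourced figures.<br>
<pre>
# Back-of-envelope use of the quoted, unattributed rule of thumb.
# Assumed reading (restoring the dropped parenthesis):
#   master node : 50 * NATOMS                            bytes
#   per process : (100 + 4 * n_cutoff) * (NATOMS/nprocs) bytes
NATOMS   = 100_000_000   # the ~100 M atom target in this thread
NPROCS   = 64            # assumed process count
N_CUTOFF = 400           # assumed atoms inside a ~1 nm cutoff (assumption)

master_bytes = 50 * NATOMS
proc_bytes   = (100 + 4 * N_CUTOFF) * NATOMS // NPROCS

print(f"master node: {master_bytes / 2**30:.1f} GiB")   # ~4.7 GiB
print(f"per process: {proc_bytes / 2**30:.1f} GiB")     # ~2.5 GiB
</pre>
If the rule held, both figures would fit even the 16 GB nodes, but as Mark says, only the empirical test settles it.<br>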
</div></blockquote>
</blockquote></div><br>