<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Dear Roland,<br>
<br>
We need to run GROMACS across the nodes of our cluster
(in order to use all of the cluster's computational resources),
which is why we need MPI rather than threads or OpenMP within a
single SMP node.<br>
I can run simple MPI examples successfully, so I suspect the
problem is in the GROMACS build.<br>
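<br>
For reference, we launch it roughly like this (a sketch; the
machine file and the mdrun path reflect our configure prefix):<br>
<pre>
# Launch across two 8-core nodes with MPICH's mpirun; the machine
# file lists the node hostnames, one per line.
mpirun -np 16 -machinefile ./machines \
    /localuser/armen/gromacs/bin/mdrun -s topol.tpr
</pre>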
<br>
<br>
Regards,<br>
Hrach<br>
<br>
On 4/27/11 11:29 PM, Roland Schulz wrote:
<blockquote
cite="mid:BANLkTimebqT1KTTkYjd=oGW90zE5SiMFMQ@mail.gmail.com"
type="cite">This seems to be a problem with your MPI library. Test
to see whether other MPI programs don't have the same problem. If
it is not GROMACS specific please ask on the mailinglist of your
MPI library. If it only happens with GROMACS be more specific
about what your setup is (what MPI library, what hardware, ...).
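<div><br></div>
<div>For example, a minimal smoke test (a sketch; the file and
binary names are arbitrary) built with your MPI's compiler
wrapper:</div>
<pre>
# Build and run a trivial MPI program on the same nodes; if this
# also hangs or dies with p4_error messages, the problem is in the
# MPI installation rather than in GROMACS.
cat &gt; hello_mpi.c &lt;&lt;'EOF'
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&amp;argc, &amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi
</pre>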
<div>
<br>
</div>
<div>You could also use the latest GROMACS 4.5.x. It has built-in
thread support and doesn't need MPI as long as you run only on
the cores within one SMP node.</div>
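<div><br></div>
<div>For example (a sketch, assuming an 8-core node; -nt sets the
number of threads in 4.5's mdrun):</div>
<pre>
# Single-node run using GROMACS 4.5's built-in thread support;
# no mpirun or MPI library is involved.
mdrun -nt 8 -s topol.tpr
</pre>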
<div><br>
</div>
<div>Roland<br>
<br>
<div class="gmail_quote">
On Wed, Apr 27, 2011 at 2:13 PM, Hrachya Astsatryan <span
dir="ltr">&lt;<a
href="mailto:hrach@sci.am">hrach@sci.am</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt
0.8ex; border-left: 1px solid rgb(204, 204, 204);
padding-left: 1ex;">
Dear Mark Abraham &amp; all,<br>
<br>
We tried other benchmark systems, such as d.dppc on 4
processors, but we see the same problem (one process uses
about 100% CPU, the others 0%).<br>
After a while we receive the following error:<br>
<br>
Working directory is /localuser/armen/d.dppc<br>
Running on host wn1.ysu-cluster.grid.am<br>
Time is Fri Apr 22 13:55:47 AMST 2011<br>
Directory is /localuser/armen/d.dppc<br>
____START____<br>
Start: Fri Apr 22 13:55:47 AMST 2011<br>
p2_487: p4_error: Timeout in establishing connection to
remote process: 0<br>
rm_l_2_500: (301.160156) net_send: could not write to fd=5,
errno = 32<br>
p2_487: (301.160156) net_send: could not write to fd=5,
errno = 32<br>
p0_32738: p4_error: net_recv read: probable EOF on socket:
1<br>
p3_490: (301.160156) net_send: could not write to fd=6,
errno = 104<br>
p3_490: p4_error: net_send write: -1<br>
p3_490: (305.167969) net_send: could not write to fd=5,
errno = 32<br>
p0_32738: (305.371094) net_send: could not write to fd=4,
errno = 32<br>
p1_483: p4_error: net_recv read: probable EOF on socket: 1<br>
rm_l_1_499: (305.167969) net_send: could not write to fd=5,
errno = 32<br>
p1_483: (311.171875) net_send: could not write to fd=5,
errno = 32<br>
Fri Apr 22 14:00:59 AMST 2011<br>
End: Fri Apr 22 14:00:59 AMST 2011<br>
____END____<br>
<br>
We tried a newer version of GROMACS, but received the same error.<br>
Please help us to overcome the problem.<br>
<br>
<br>
With regards,<br>
Hrach
<div>
<div class="h5"><br>
<br>
On 4/22/11 1:41 PM, Mark Abraham wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt
0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204);
padding-left: 1ex;">
On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt
0pt 0.8ex; border-left: 1px solid rgb(204, 204,
204); padding-left: 1ex;">
Dear all,<br>
<br>
I would like to inform you that I have installed the
GROMACS 4.0.7 package on the cluster (the nodes are
8-core Intel machines running Scientific Linux,
RHEL4-based) with the following steps:<br>
<br>
yum install fftw3 fftw3-devel<br>
./configure --prefix=/localuser/armen/gromacs
--enable-mpi<br>
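<br>
followed by the usual build and install into the configure
prefix:<br>
<pre>
# standard autotools build; installs under /localuser/armen/gromacs
make
make install
</pre>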
<br>
I have also downloaded the gmxbench-3.0 package and
tried to run d.villin as a test.<br>
<br>
Unfortunately, it works fine only with np = 1, 2, or 3;
if I use more than 3 processes I get poor CPU load
balancing and the run hangs.<br>
<br>
Could you, please, help me to overcome the problem?<br>
</blockquote>
<br>
Probably you have only four physical cores
(hyperthreading is not normally useful), or your MPI
is configured to use only four cores, or these
benchmarks are too small to scale usefully.<br>
<br>
Doing a fresh installation of a GROMACS version that
is several years old is normally less productive than
installing the latest version.<br>
<br>
Mark<br>
<br>
<br>
<br>
</blockquote>
<br>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
ORNL/UT Center for Molecular Biophysics <a
href="http://cmb.ornl.gov">cmb.ornl.gov</a><br>
865-241-1537, ORNL PO BOX 2008 MS6309<br>
</div>
</blockquote>
<br>
</body>
</html>