[gmx-users] Parallel running problem

Danny K dannyko at stanford.edu
Fri Sep 16 21:23:11 CEST 2005


Hi Erik,

I'm between jobs right now and looking for an RA-ship. Have you been in
contact with Dr. Levitt? I tried emailing him but got no response. Please
suggest another way to contact him if possible. Thanks,

Danny

P.S. I'm attaching my resume in case that helps.


----- Original Message ----- 
From: "Erik Lindahl" <lindahl at sbc.su.se>
To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
Sent: Friday, September 16, 2005 6:47 AM
Subject: Re: [gmx-users] Parallel running problem


> Hi,
>
> This is a problem with your MPI installation.
>
> The installation is broken or misconfigured, or you have your PATH
> settings mixed up. Ask your sysadmin or check the documentation for the
> MPI package you are using.
>
> The configure script checks for the usual MPI compiler wrappers (and
> finds mpicc).
>
> However, when it tries to compile a small test program that calls
> MPI_Init(), the compilation fails, even when it tries to link with
> -lmpi explicitly.
>
> This is essentially the test program the script tries to build:
>
> -----
> #include <mpi.h>
>
> int
> main(int argc, char **argv)
> {
>     MPI_Init(&argc, &argv);
>     return 0;
> }
> -----
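>
> If you want to check the MPI setup by hand, independently of the FFTW
> configure script, a slightly fuller standalone test along these lines
> should compile and run cleanly (a minimal sketch, assuming the mpicc
> and mpirun wrappers from your MPI package are on your PATH):
>
> -----
> /* mpicheck.c - minimal MPI sanity check */
> #include <mpi.h>
> #include <stdio.h>
>
> int
> main(int argc, char **argv)
> {
>     int rank, size;
>
>     MPI_Init(&argc, &argv);               /* start the MPI environment */
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
>     MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
>     printf("Hello from rank %d of %d\n", rank, size);
>     MPI_Finalize();                       /* shut MPI down cleanly */
>     return 0;
> }
> -----
>
> Something like "mpicc mpicheck.c -o mpicheck" followed by
> "mpirun -np 2 ./mpicheck" should print one greeting line per process;
> if that already fails, configure has no chance of finding a working
> MPI either.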
>
> Cheers,
>
> Erik
>
>
> On Sep 16, 2005, at 2:44 PM, Alan Dodd wrote:
>
> > For completeness, this is the configure command I'm
> > using for installing fftw:
> > ./configure --prefix=/home/ad0303/fftw --enable-float
> > --enable-type-prefix --enable-mpi
> > And this is the result:
> >
> > checking for mpicc... mpicc
> > checking for MPI_Init... no
> > checking for MPI_Init in -lmpi... no
> > configure: error: couldn't find mpi library for
> > --enable-mpi
> >
> > I'm certain there's some critical command I'm not
> > specifying here - what did you mean by "I also linked
> > FFTW against MPICH"?
> >
> > --- "Peter C. Lai" <sirmoo at cowbert.2y.net> wrote:
> >
> >
> >> On Fri, Sep 16, 2005 at 03:23:07AM -0700, Alan Dodd wrote:
> >>
> >>> Thanks for all your help, I thought I had compiled with MPI but from
> >>> trying to reinstall, it appears not. The system I'm trying to install
> >>> on is using mpich, rather than lammpi.  I wouldn't have thought this
> >>> would be a problem, but installing fftw locally doesn't work - it
> >>> can't find the mpi libraries.  Both using the rpms and compiling from
> >>> source seem to produce similar errors.  I'm pretty sure others have
> >>> used mpich, so have any of you come across a similar problem (and,
> >>> ideally, a solution?)
> >>> Thanks,
> >>> Alan
> >>
> >> I dunno, I recently (a few months ago, really)
> >> compiled 3.2.1 against
> >> mpich1.2.5.2 initially and then against mpich1.2.6
> >> on a dual-socket p3
> >> with gcc3.4.2 and ran all the test suites with no
> >> problems (I was actually
> >> running gromacs-mpich to debug the new FreeBSD
> >> kernel scheduler while getting
> >> some side work out of the otherwise idle cpus), so I
> >> really don't know what
> >> your problem is either [i.e. WORKSFORME albeit on a
> >> different platform] :(
> >>
> >> Note that I also linked FFTW against MPICH - I think
> >> this is a critical step
> >> (and everything was built as single precision, but I
> >> vaguely remember running
> >> double over mpich without any problems either).
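> >>
> >> One way to make that linking explicit (just a sketch - the paths and
> >> prefix below are examples and need adjusting to wherever your MPICH
> >> lives) is to point configure at the MPICH compiler wrapper directly:
> >>
> >> -----
> >> env CC=/usr/local/mpich/bin/mpicc \
> >>     ./configure --prefix=$HOME/fftw --enable-float \
> >>                 --enable-type-prefix --enable-mpi
> >> -----
> >>
> >> With CC pointing at mpicc, the small MPI_Init test that configure
> >> compiles is built by the wrapper itself, so it should find the MPICH
> >> headers and libraries without any extra -I or -L flags.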
> >>
> >>
> >>> --- David van der Spoel <spoel at xray.bmc.uu.se> wrote:
> >>>
> >>>> On Wed, 2005-09-07 at 05:18 -0700, Alan Dodd wrote:
> >>>>
> >>>>> Hello Gromacs users,
> >>>>> I gather this problem is similar to many previous, but can't see an
> >>>>> obvious solution in the replies to any of them.  I've been trying to
> >>>>> get GROMACS to run on this sample dual-core, dual-socket opteron box
> >>>>> that we have on loan.  Despite my best efforts, I seem unable to get
> >>>>> mdrun to understand that it's supposed to run on more than one node.
> >>>>> I'm telling it to do so, and it even appreciates it's supposed to in
> >>>>> the output (see below), but then decides I've told it to run on just
> >>>>> the one and dies.  Has anyone any idea what's going wrong?  Is it
> >>>>> just some kind of incompatibility with mpich/the hardware?
> >>>>>
> >>>> Have you compiled with MPI?
> >>>>
> >>>> you can check by typing
> >>>> ldd `which mdrun`
> >>>> It should show some MPI libraries.
> >>>>
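> >>>> (As an illustration only - library names differ between MPI
> >>>> packages and builds - something like
> >>>>
> >>>> ldd `which mdrun` | grep -i mpi
> >>>>
> >>>> should turn up a libmpich.so or libmpi.so line for an MPI-enabled
> >>>> binary, at least when the MPI library itself was built as a shared
> >>>> library rather than a static one.)
> >>>>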
> >>>> Dual core opterons run fine by the way. We have a brand new cluster
> >>>> humming along at 85 decibels.
> >>>>
> >>>>>
> >>>>> Input:
> >>>>> mpirun -np 4 -machinefile machines mdrun -np 4
> >>>>>
> >>>>> mdrun output:
> >>>>> for all file options
> >>>>>          -np    int      4  Number of nodes, must be the same as
> >>>>>                             used for grompp
> >>>>>          -nt    int      1  Number of threads to start on each node
> >>>>>       -[no]v   bool     no  Be loud and noisy
> >>>>> -[no]compact   bool    yes  Write a compact log file
> >>>>>   -[no]multi   bool     no  Do multiple simulations in parallel
> >>>>>                             (only with -np > 1)
> >>>>>    -[no]glas   bool     no  Do glass simulation with special long
> >>>>>                             range corrections
> >>>>>  -[no]ionize   bool     no  Do a simulation including the effect of
> >>>>>                             an X-Ray bombardment on your system
> >>>>>
> >>>>> Back Off! I just backed up md.log to ./#md.log.5#
> >>>>> Reading file short.tpr, VERSION 3.2.1 (single precision)
> >>>>> Fatal error: run input file short.tpr was made for 4 nodes,
> >>>>>              while mdrun expected it to be for 1 nodes.
> >>>>>
> >>>>> Alan Dodd (University of Bristol)
> >>>>>
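> >>>>
> >>>> For reference, the usual sequence for a 4-process run with this
> >>>> generation of GROMACS is roughly (a sketch - the mdp/gro/top file
> >>>> names here are just placeholders):
> >>>>
> >>>> -----
> >>>> grompp -np 4 -f md.mdp -c conf.gro -p topol.top -o short.tpr
> >>>> mpirun -np 4 -machinefile machines mdrun -np 4 -s short.tpr
> >>>> -----
> >>>>
> >>>> Your command line already matches this, so the "expected it to be
> >>>> for 1 nodes" message points at mdrun itself not being MPI-enabled
> >>>> rather than at how you invoked it.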
> >>
> >> -- 
> >> Peter C. Lai
> >> Cesium Hyperfine Enterprises
> >> http://cowbert.2y.net/
> >>
> >>
> >>
> >
> >
> >
> >
> >
> >
>
> _______________________________________________
> gmx-users mailing list
> gmx-users at gromacs.org
> http://www.gromacs.org/mailman/listinfo/gmx-users
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request at gromacs.org.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: danielkorenblum090605.doc
Type: application/octet-stream
Size: 21504 bytes
Desc: not available
URL: <http://maillist.sys.kth.se/pipermail/gromacs.org_gmx-users/attachments/20050916/2b29cc36/attachment.obj>

