Thanks, Dr. David. I figured it out some time after I mailed the group.
Thanks again.
regards,
kota.

On 2/5/06, David van der Spoel <spoel@xray.bmc.uu.se> wrote:
Pradeep Kota wrote:
> Thanks Dr. David. I could compile GROMACS successfully. But when I run an
> MD simulation on four processors, it returns the following error.
>
> [0] MPI Abort by user Aborting program !
> [0] Aborting program!
> p4_error: latest msg from perror: No such file or directory
> p0_2057: p4_error: : -1
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 3.3
> Source code file: futil.c, line: 308
>
> File input/output error:
> md.log
> -------------------------------------------------------
>
> Thanx for Using GROMACS - Have a Nice Day
>
> Halting program mdrun_mpi
>
> gcq#1768121632: Thanx for Using GROMACS - Have a Nice Day
>
> [0] MPI Abort by user Aborting program !
> [0] Aborting program!
> p4_error: latest msg from perror: No such file or directory
> -----------------------------------------------------------------------------
> It seems that [at least] one of the processes that was started with
> mpirun did not invoke MPI_INIT before quitting (it is possible that
> more than one process did not invoke MPI_INIT -- mpirun was only
> notified of the first one, which was on node n8952
> p0_058: p4_error: : -1
> mpirun can *only* be used with MPI programs (i.e., programs that
> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> to run non-MPI programs over the lambooted nodes.
> -----------------------------------------------------------------------------
>
> I had lambooted all the nodes properly and there were no problems at that
> stage. As the message suggested, I tried lamexec too, still with no luck,
> and I tried other switches with mpirun as well, but I could not quite
> figure out what the error could be.

You are still running an MPICH executable here. lamboot is used for LAM
only. Are you inadvertently mixing LAM and MPICH?

>
> thanks in anticipation,
> kota.
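
For anyone else hitting this, a minimal sketch of how one might check which MPI
implementation an mdrun_mpi binary was actually linked against on OS X, and how
to relaunch it under LAM. The binary path, hostfile name and run options below
are illustrative, not taken from this thread:

  # A LAM-built binary should reference LAM's libraries, not MPICH's.
  otool -L /usr/local/gromacs/bin/mdrun_mpi | grep -i -e mpi -e lam

  # Both commands should resolve to the LAM installation, not MPICH.
  which mpirun lamboot

  # Boot the LAM daemons on the nodes listed in the hostfile, then run.
  # (As far as I remember, with GROMACS 3.3 the .tpr also has to be
  #  prepared for the same node count, e.g. grompp -np 4.)
  lamboot -v hostfile
  mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -s topol.tpr -g md.log
  lamhalt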
>
> On 2/2/06, David van der Spoel <spoel@xray.bmc.uu.se> wrote:
>
> Pradeep Kota wrote:
> > thanks for the info, Mr. David, but when I try to compile GROMACS, it
> > returns an error. The following is an excerpt from the output of 'make'.
> >
> seems like you have two different versions of fftw3 installed, or mixed
> single and double precision. Otherwise I don't know.
>
> > /usr/local/lib/libfftw3f.a(the-planner.o) definition of
> > _fftwf_the_planner in section (__TEXT,__text)
> > /usr/local/lib/libfftw3f.a(version.o) definition of _fftwf_cc in
> > section (__TEXT,__cstring)
> > /usr/local/lib/libfftw3f.a(version.o) definition of _fftwf_codelet_optim
> > in section (__TEXT,__cstring)
> > /usr/local/lib/libfftw3f.a(version.o) definition of _fftwf_version in
> > section (__TEXT,__cstring)
> > /usr/bin/libtool: internal link edit command failed
> > make[4]: *** [libgmx.la] Error 1
> > make[3]: *** [all-recursive] Error 1
> > make[2]: *** [all-recursive] Error 1
> > make[1]: *** [all] Error 2
> > make: *** [all-recursive] Error 1
> >
> > I think Mr. Jack Howarth had already pointed the same thing out some
> > time back. Any suggestion?
> >
> > regards,
> > kota.
> >
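
For readers who run into the same duplicate-symbol link failure, a minimal
sketch of how one might look for competing FFTW 3 installations and rebuild
GROMACS against a single, single-precision copy. The search paths and flag
spellings are illustrative; check ./configure --help for the exact options of
your GROMACS version:

  # Duplicate _fftwf_* symbols usually mean more than one libfftw3f is found.
  find /usr/local /sw /opt -name 'libfftw3*' 2>/dev/null

  # Rebuild from a clean tree, pointing at exactly one FFTW prefix.
  make distclean
  export CPPFLAGS=-I/usr/local/include
  export LDFLAGS=-L/usr/local/lib
  ./configure --enable-float    # single precision, matching libfftw3f
  make && make install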
> > On 2/2/06, Pradeep Kota <kotanmd@gmail.com> wrote:
> >
> > Thank you for your response, Mr. David. I compiled LAM successfully
> > without Fortran support. All I wanted to know was whether it would
> > make any difference to the running time of GROMACS. I am curious
> > because it is well known that Fortran loops are faster than loops
> > in other languages, so I wanted to clarify. Moreover, I would want
> > to know how different this is from MPICH.
> > thanks for the support.
> > regards,
> > kota.
> >
> > On 2/2/06, David van der Spoel <spoel@xray.bmc.uu.se> wrote:
> >
> > Pradeep Kota wrote:
> >> Thank you for your response, itamar, but the cluster is isolated from
> >> the internet for security reasons. I don't think there is any chance
> >> to use Fink on the head node either. Any other alternatives?
> >> regards,
> >
> > compile LAM without fortran, there's a flag for it
> >
> >> kota.
> >>
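
A minimal sketch of the kind of flag David is referring to, for building
LAM/MPI on a machine with no Fortran compiler. The version number and install
prefix are illustrative, and the exact option name should be confirmed with
./configure --help for the LAM release in use:

  cd lam-7.1.1
  # Tell configure not to look for a Fortran compiler or build Fortran bindings
  # (--without-fc is how I recall the option; verify against your release).
  ./configure --prefix=/usr/local/lam --without-fc
  make
  make install

  # Put this LAM first on PATH so its mpicc and mpirun are the ones picked up.
  export PATH=/usr/local/lam/bin:$PATH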
> >> On 2/2/06, Itamar Kass <ikass@cc.huji.ac.il> wrote:
> >>
> >> Why not using fortran?
> >> install it using Fink and let lam have it.
> >>
> >> Itamar.
> >>
> >> ===========================================
> >> | Itamar Kass
> >> | The Alexander Silberman
> >> | Institute of Life Sciences
> >> | Department of Biological Chemistry
> >> | The Hebrew University, Givat-Ram
> >> | Jerusalem, 91904, Israel
> >> | Tel: +972-(0)2-6585194
> >> | Fax: +972-(0)2-6584329
> >> | Email: ikass@cc.huji.ac.il
> >> | Net: http://www.ls.huji.ac.il/~membranelab/itamar/itamar_homepage.html
> >> ============================================
> >>
> >> ----- Original Message -----
> >> From: Pradeep Kota <kotanmd@gmail.com>
> >> To: Discussion list for GROMACS users <gmx-users@gromacs.org>
> >> Sent: Thursday, February 02, 2006 4:43 AM
> >> Subject: [gmx-users] mdrun with mpich
> >>
> >> Dear users,
> >> I have compiled GROMACS on our dual-core Mac OS X G5 cluster, and
> >> tried running a simulation on a 540-residue protein for 1 ns on
> >> 8 processors. I used MPICH as the MPI environment for parallelising
> >> GROMACS. It worked fine, and the job was split properly and assigned
> >> to nodes. Now, the CPU usage is not more than 50% on any of the
> >> processors, and the total running time for this was 13 hrs. Though
> >> output is written properly to the specified output file, mdrun does
> >> not terminate even after running through all the steps. It still
> >> shows two mdrun_mpi processes running on the head node, with 0% CPU
> >> usage. I was going through the gmx-users mailing list and somehow
> >> figured out that MPICH is not a good idea for running GROMACS, so I
> >> wanted to switch over to LAM. Now, when I compile GROMACS using LAM,
> >> it is not able to link the libraries properly, so I tried
> >> reinstalling LAM on my cluster. Now LAM keeps complaining about not
> >> being able to find a Fortran compiler. I should not need a Fortran
> >> compiler unless I'm using SUN or SGI for this purpose (am I going
> >> wrong here?). What flags do I need to compile LAM with, in order to
> >> compile GROMACS successfully? Any help is very much appreciated.
> >> thanks in advance,
> >> regards,
> >> kota.
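
For completeness, a minimal sketch of how a GROMACS 3.3-era tree is typically
configured against LAM once LAM itself is installed. The prefixes are
illustrative, and the option names should be verified with ./configure --help:

  # Use LAM's compiler wrapper so the MPI build links against LAM, not MPICH.
  export PATH=/usr/local/lam/bin:$PATH
  export CC=mpicc

  # Parallel build; --program-suffix gives the customary mdrun_mpi name.
  ./configure --enable-mpi --program-suffix=_mpi --prefix=/usr/local/gromacs
  make
  make install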