Did you boot your LAM?<br>Go to a specific node of your cluster, for example node 3, and create a file with a simple name like lamhosts. This file should contain just the nodes you want to boot, one node name per line (e.g. no3). Then run
<br><br>lamboot -v lamhosts<br><br>If you have two processors in each node you can use -np 2; if you have four, -np 4.<br><br><div><span class="gmail_quote">2007/8/6, Anupam Nath Jha <<a href="mailto:anupam@mbu.iisc.ernet.in">
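As an illustration, a lamhosts file for a two-node cluster with two CPUs per node might look like the sketch below (the node names no3 and no4 are placeholders; the cpu=N syntax is LAM/MPI's hostfile notation for how many processes each node can run):

```shell
# lamhosts -- one node per line; cpu=N tells LAM how many
# processes each node can host
no3 cpu=2
no4 cpu=2
```

After writing the file, `lamboot -v lamhosts` boots the LAM run-time on those nodes, and `lamnodes` can be used to check which nodes actually came up before trying mpirun.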
anupam@mbu.iisc.ernet.in</a>>:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>Now I am using only this:<br><br>#This is a PBS script for Parallel Jobs
<br>#!/bin/bash -f<br>#PBS -o /home/anupam/MD/lyd<br>#PBS -e /home/anupam/MD/lyd<br>#PBS -q default<br><br>cd /home/anupam/MD/lyd<br><br>mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -v -deffnm em<br><br>but still it's giving this error:
<br>totalnum=2 numhosts=1<br>there are not enough hosts on which to start all processes<br><br>> Anupam Nath Jha wrote:<br>>> Dear all<br>>><br>>> I am new to GROMACS. I have installed<br>gromacs-3.3.3 on our cluster (MPI is<br>>> already there) with the parallel version (using the following commands):<br>>><br>>><br>>> make clean<br>>> ./configure --enable-mpi --disable-nice --program-suffix="_mpi"
<br>>> make mdrun<br>>> make install-mdrun<br>>><br>>> It went fine.<br>>><br>>> But when I ran the command:<br>>> qsub pbs_submit<br>>><br>>> Here is the pbs_submit file:
<br>>><br>>> #This is a PBS script for Parallel Jobs<br>>> #!/bin/bash -f<br>>> #PBS -l nodes=2:ppn=2<br>>> #PBS -o /home/anupam/MD/lyd<br>>> #PBS -e /home/anupam/MD/lyd<br>><br>> These latter two should probably be file names, not directory names...
<br>><br>>> #PBS -q default<br>>><br>>> cd /home/anupam/MD/lyd<br>><br>> ... since this seems to be a directory? Check out the #PBS -wd option too...<br>><br>>> mpirun grompp -f em.mdp -p
topol.top -c solvated.gro -np 4 -o em.tpr<br>><br>> grompp is not an MPI code; run it without mpirun<br>><br>>> mpirun mdrun_mpi -v -deffnm -np 4 em<br>><br>> Unless you have a PBS-customized install of mpirun, you should use
<br>> "mpirun -np 4 mdrun_mpi -v -deffnm em" since mdrun_mpi will pick up the<br>> four processors from the environment, but mpirun won't.<br>><br>>> but it's not doing anything, except writing this:
<br>>><br>>> totalnum=3 numhosts=2<br>>> there are not enough hosts on which to start all processes<br>>> totalnum=3 numhosts=2<br>>> there are not enough hosts on which to start all processes
<br>><br>> First get grompp working interactively, then get grompp working in a<br>> script (neither of which needs MPI), and only then worry about mpirun<br>> -np 4 mdrun<br>><br>> Mark<br>> _______________________________________________
<br>> gmx-users mailing list <a href="mailto:gmx-users@gromacs.org">gmx-users@gromacs.org</a><br>> <a href="http://www.gromacs.org/mailman/listinfo/gmx-users">http://www.gromacs.org/mailman/listinfo/gmx-users</a>
<br>> Please search the archive at <a href="http://www.gromacs.org/search">http://www.gromacs.org/search</a> before posting!<br>> Please don't post (un)subscribe requests to the list. Use the<br>> www interface or send it to
<a href="mailto:gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>.<br>> Can't post? Read <a href="http://www.gromacs.org/mailing_lists/users.php">http://www.gromacs.org/mailing_lists/users.php</a><br>
><br>> --<br>> This message has been scanned for viruses and<br>> dangerous content by MailScanner, and is<br>> believed to be clean.<br>><br>><br><br><br>--<br>Science is facts; just as houses are made of stone, so is science made of<br>facts; but a pile of stones is not a house, and a collection of facts is not<br>necessarily science.<br><br>Anupam Nath Jha<br>Ph. D. Student<br>Saraswathi Vishveshwara Lab<br>Molecular Biophysics Unit<br>IISc, Bangalore-560012
<br>Karnataka<br>Ph. no.-22932611<br></blockquote></div><br>
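Putting Mark's points together, a minimal pbs_submit along those lines might look like the sketch below. The paths, queue name, and nodes/ppn values are taken from the thread; the em.out/em.err file names are hypothetical (chosen because #PBS -o/-e expect files, not directories), so adjust everything to your cluster:

```shell
#!/bin/bash
# Sketch of a corrected pbs_submit, assuming the paths from the thread.
#PBS -l nodes=2:ppn=2
#PBS -o /home/anupam/MD/lyd/em.out
#PBS -e /home/anupam/MD/lyd/em.err
#PBS -q default

cd /home/anupam/MD/lyd

# grompp is serial: run it without mpirun
grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr

# give mpirun the process count explicitly; -deffnm belongs to mdrun_mpi
mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -v -deffnm em
```

Note the shebang line comes first, before any #PBS directives, and the #PBS -l nodes=2:ppn=2 request is what tells PBS to allocate the four slots that mpirun -np 4 expects.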