Could someone tell me what the error below means?<br><br>Getting Loaded...<br>Reading file MD_100.tpr, VERSION 4.5.4 (single precision)<br>Loaded with Money<br><br><br>Will use 30 particle-particle and 18 PME only nodes<br>This is a guess, check the performance at the end of the log file<br>
[ib02:22825] *** Process received signal ***<br>[ib02:22825] Signal: Segmentation fault (11)<br>[ib02:22825] Signal code: Address not mapped (1)<br>[ib02:22825] Failing at address: 0x10<br>[ib02:22825] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf030) [0x7f535903e03$<br>
[ib02:22825] [ 1] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x7e23) [0x7f535$<br>[ib02:22825] [ 2] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8601) [0x7f535$<br>[ib02:22825] [ 3] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8bab) [0x7f535$<br>
[ib02:22825] [ 4] /usr/lib/openmpi/lib/openmpi/mca_btl_sm.so(+0x42af) [0x7f5353$<br>[ib02:22825] [ 5] /usr/lib/libopen-pal.so.0(opal_progress+0x5b) [0x7f535790506b]<br>[ib02:22825] [ 6] /usr/lib/libmpi.so.0(+0x37755) [0x7f5359282755]<br>
[ib02:22825] [ 7] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0x1c3a) [0x7f$<br>[ib02:22825] [ 8] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0x7fae) [0x7f$<br>[ib02:22825] [ 9] /usr/lib/libmpi.so.0(ompi_comm_split+0xbf) [0x7f535926de8f]<br>
[ib02:22825] [10] /usr/lib/libmpi.so.0(MPI_Comm_split+0xdb) [0x7f535929dc2b]<br>[ib02:22825] [11] /usr/lib/libgmx_mpi_d.openmpi.so.6(gmx_setup_nodecomm+0x19b) $<br>[ib02:22825] [12] mdrun_mpi_d.openmpi(mdrunner+0x46a) [0x40be7a]<br>
[ib02:22825] [13] mdrun_mpi_d.openmpi(main+0x1256) [0x407206]<br>[ib02:22825] [14] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f$<br>[ib02:22825] [15] mdrun_mpi_d.openmpi() [0x407479]<br>[ib02:22825] *** End of error message ***<br>
--------------------------------------------------------------------------<br>mpiexec noticed that process rank 36 with PID 22825 on node ib02 exited on sign$<br>--------------------------------------------------------------------------<br>
<br><br>I got this error when I tried to run my system on a multi-node setup (there is no problem on a single node). Is this a problem with the cluster, or is something wrong with the parameters of my simulation?<br>
<br><br>James<br><br><div class="gmail_quote">On 15 March 2012 15:25, James Starlight <span dir="ltr"><<a href="mailto:jmsstarlight@gmail.com">jmsstarlight@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Mark, Peter,<br><br><br>I generated the .tpr file on my local machine and launched only<br><br>mpiexec -np 24 mdrun_mpi_d.openmpi -v -deffnm MD_100<br><br>on the cluster with 2 nodes.<br><br>I can see my job running, but when I check the MD_100.log file (attached) there is no information about simulation steps in it. When I use just one node, that file shows the step-by-step progression of the simulation, like the excerpt below, which was taken from the same log file for a ONE-NODE simulation:<br>
<br>Started mdrun on node 0 Thu Mar 15 11:22:35 2012<br><br> Step Time Lambda<br> 0 0.00000 0.00000<br><br>Grid: 12 x 9 x 12 cells<br> Energies (kJ/mol)<br> G96Angle Proper Dih. Improper Dih. LJ-14 Coulomb-14<br>
1.32179e+04 3.27485e+03 2.53267e+03 4.06443e+02 6.15315e+04<br> LJ (SR) LJ (LR) Disper. corr. Coulomb (SR) Coul. recip.<br> 4.12152e+04 -5.51788e+03 -1.70930e+03 -4.54886e+05 -1.46292e+05<br>
Dis. Rest. D.R.Viol. (nm) Dih. Rest. Potential Kinetic En.<br> 2.14240e-02 3.46794e+00 1.33793e+03 -4.84889e+05 9.88771e+04<br> Total Energy Conserved En. Temperature Pres. DC (bar) Pressure (bar)<br>
-3.86012e+05 -3.86012e+05 3.11520e+02 -1.14114e+02 3.67861e+02<br> Constr. rmsd<br> 3.75854e-05<br><br> Step Time Lambda<br> 2000 4.00000 0.00000<br><br>
Energies (kJ/mol)<br> G96Angle Proper Dih. Improper Dih. LJ-14 Coulomb-14<br> 1.31741e+04 3.25280e+03 2.58442e+03 3.51371e+02 6.15913e+04<br> LJ (SR) LJ (LR) Disper. corr. Coulomb (SR) Coul. recip.<br>
4.16349e+04 -5.53474e+03 -1.70930e+03 -4.56561e+05 -1.46485e+05<br> Dis. Rest. D.R.Viol. (nm) Dih. Rest. Potential Kinetic En.<br> 4.78276e+01 3.38844e+00 9.82735e+00 -4.87644e+05 9.83280e+04<br>
 Total Energy Conserved En. Temperature Pres. DC (bar) Pressure (bar)<br> -3.89316e+05 -3.87063e+05 3.09790e+02 -1.14114e+02 7.25905e+02<br> Constr. rmsd<br> 1.88008e-05<br><br>and so on...<br><br><br>
<br>What could be wrong with the multi-node computations?<br><br><br>James<br><br><br><div class="gmail_quote">On 15 March 2012 11:25, Mark Abraham <span dir="ltr"><<a href="mailto:Mark.Abraham@anu.edu.au" target="_blank">Mark.Abraham@anu.edu.au</a>></span> wrote:<div>
<div class="h5"><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>On 15/03/2012 6:13 PM, Peter C. Lai wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Try separating your grompp run from your mpirun:<br>
You should not really be having the scheduler execute the grompp. Run<br>
your grompp step to generate a .tpr either on the head node or on your local<br>
machine (then copy it over to the cluster).<br>
</blockquote>
<br></div>
Good advice.<div><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
(The -p that the scheduler is complaining about only appears in the grompp<br>
step, so don't have the scheduler run it).<br>
</blockquote>
<br></div>
grompp is running successfully, as you can see from the output.<br>
<br>
I think "mpiexec -np 12" is being interpreted as "mpiexec -n 12 -p"; separating the grompp stage from the mdrun stage would help make that clear. Read the documentation first, however.<span><font color="#888888"><br>
<br>
Mark</font></span><div><div><br>
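Peter's and Mark's advice above can be sketched as a two-step workflow. This is only an illustration assembled from the commands already in this thread; the hostname and remote account are placeholders, not details from the original posts:

```shell
# Step 1: preprocess on the head node or a local machine, using the same
# inputs as in the original job script.
grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o job.tpr

# Step 2: copy the resulting run input file to the cluster's working
# directory ("user@cluster" is a placeholder hostname).
scp job.tpr user@cluster:/globaltmp/xz/job_name/
```

The batch script submitted to the scheduler then contains only the mpiexec/mdrun line, so grompp's -p flag never appears anywhere the scheduler or mpiexec could misread it.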
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
On 2012-03-15 10:04:49AM +0300, James Starlight wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Dear Gromacs Users!<br>
<br>
<br>
I have some problems running my simulations on a multi-node cluster which<br>
uses Open MPI.<br>
<br>
I launch my jobs with the script below. This example runs a<br>
job on 1 node (12 CPUs).<br>
<br>
#!/bin/sh<br>
#PBS -N gromacs<br>
#PBS -l nodes=1:red:ppn=12<br>
#PBS -V<br>
#PBS -o gromacs.out<br>
#PBS -e gromacs.err<br>
<br>
cd /globaltmp/xz/job_name<br>
grompp -f md.mdp -c nvtWprotonated.gro -p topol.top -n index.ndx -o job.tpr<br>
mpiexec -np 12 mdrun_mpi_d.openmpi -v -deffnm job<br>
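For a two-node run, a minimal variant of this script might look like the following sketch. It is only an illustration: it reuses the same queue settings scaled to 2 nodes, and it assumes the .tpr file has already been generated by grompp (for example on the head node) and copied into the working directory:

```shell
#!/bin/sh
#PBS -N gromacs
#PBS -l nodes=2:red:ppn=12
#PBS -V
#PBS -o gromacs.out
#PBS -e gromacs.err

cd /globaltmp/xz/job_name
# job.tpr is assumed to have been produced by grompp beforehand;
# the scheduler script only needs to invoke mdrun.
mpiexec -np 24 mdrun_mpi_d.openmpi -v -deffnm job
```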
<br>
Every node of my cluster has 12 CPUs. When I use just 1 node on<br>
the cluster I have no problems running my jobs, but when I try to<br>
use more than one node I get an error (an example is attached in the<br>
gromacs.err file, together with the md.mdp of that system). Another outcome of<br>
such a multi-node simulation is that my job starts but no<br>
calculations are done (the name_of_my_job.log file stays empty and no update<br>
of the .trr file is seen). This commonly happens when I use many nodes<br>
(8-10). Finally, I sometimes get errors about the PME order (that<br>
time I used 3 nodes). The exact error differs when I vary the number<br>
of nodes.<br>
<br>
<br>
Could you tell me what might be wrong with my cluster?<br>
<br>
Thanks for help<br>
<br>
James<br>
</blockquote>
<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
-- <br>
gmx-users mailing list <a href="mailto:gmx-users@gromacs.org" target="_blank">gmx-users@gromacs.org</a><br>
<a href="http://lists.gromacs.org/mailman/listinfo/gmx-users" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-users</a><br>
Please search the archive at <a href="http://www.gromacs.org/Support/Mailing_Lists/Search" target="_blank">http://www.gromacs.org/Support/Mailing_Lists/Search</a> before posting!<br>
Please don't post (un)subscribe requests to the list. Use the<br>
www interface or send it to <a href="mailto:gmx-users-request@gromacs.org" target="_blank">gmx-users-request@gromacs.org</a>.<br>
Can't post? Read <a href="http://www.gromacs.org/Support/Mailing_Lists" target="_blank">http://www.gromacs.org/Support/Mailing_Lists</a><br>
</blockquote>
<br>
</blockquote>
<br>
</div></div></blockquote></div></div></div><br>
</blockquote></div><br>