Carsten, thank you for your response.

I ran the same benchmark on 8 and 16 nodes, this time with PME instead of plain cut-off electrostatics. To optimize, I varied the cut-off and the Fourier spacing. I wonder whether these results are acceptable or whether more optimization is needed.
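For reference, the electrostatics-related .mdp lines I varied look roughly like this (a minimal sketch only; the values shown are the ones from the 1.0 nm run, and keeping rlist and rvdw equal to rcoulomb is my assumption here, not something shown in the benchmarks below):

    coulombtype      = PME    ; particle-mesh Ewald for long-range electrostatics
    rlist            = 1.0    ; neighbour-list cut-off (nm), varied: 0.9 / 1.0 / 1.1
    rcoulomb         = 1.0    ; real-space Coulomb cut-off (nm), kept equal to rlist
    rvdw             = 1.0    ; van der Waals cut-off (nm), assumed equal to rcoulomb
    fourier_spacing  = 0.13   ; PME grid spacing (nm), varied: 0.12 / 0.13 / 0.135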
Thanks.

Deniz

====================================================

8 nodes, cut-off = 0.9 nm, fourier_spacing = 0.12

 Average load imbalance: 4.0 %
 Part of the total run time spent waiting due to load imbalance: 1.4 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
 Average PME mesh/force load: 1.758
 Part of the total run time spent waiting due to PP/PME imbalance: 15.7 %

NOTE: 15.7 % performance was lost because the PME nodes
      had more work to do than the PP nodes.
      You might want to increase the number of PME nodes
      or increase the cut-off and the grid spacing.


     R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:            Nodes   Number   G-Cycles   Seconds      %
-----------------------------------------------------------------------
 Domain decomp.            4     1001     36.253      15.5    1.4
 Vsite constr.             4     5001      3.237       1.4    0.1
 Send X to PME             4     5001     10.365       4.4    0.4
 Comm. coord.              4     5001     15.193       6.5    0.6
 Neighbor search           4     1001    279.944     120.0   10.8
 Force                     4     5001    451.185     193.5   17.4
 Wait + Comm. F            4     5001     63.147      27.1    2.4
 PME mesh                  4     5001    940.073     403.1   36.3
 Wait + Comm. X/F          4     5001    356.494     152.9   13.7
 Wait + Recv. PME F        4     5001    345.820     148.3   13.3
 Vsite spread              4    10002      6.568       2.8    0.3
 Write traj.               4        1      0.350       0.2    0.0
 Update                    4     5001     20.525       8.8    0.8
 Constraints               4     5001     42.245      18.1    1.6
 Comm. energies            4     5001      3.377       1.4    0.1
 Rest                      4              18.393       7.9    0.7
-----------------------------------------------------------------------
 Total                     8            2593.170    1112.0  100.0
-----------------------------------------------------------------------

 Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
 Time:          139.000    139.000    100.0
                       2:19
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    127.854      9.458     12.434      1.930
Finished mdrun on node 0 Mon Feb 15 17:34:48 2010
====================================================

8 nodes, cut-off = 1.0 nm, fourier_spacing = 0.13

 Average load imbalance: 3.4 %
 Part of the total run time spent waiting due to load imbalance: 1.7 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
 Average PME mesh/force load: 1.129
 Part of the total run time spent waiting due to PP/PME imbalance: 3.7 %


     R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:            Nodes   Number   G-Cycles   Seconds      %
-----------------------------------------------------------------------
 Domain decomp.            4     1001     35.777      15.3    1.5
 Vsite constr.             4     5001      2.620       1.1    0.1
 Send X to PME             4     5001     10.182       4.4    0.4
 Comm. coord.              4     5001     15.727       6.7    0.7
 Neighbor search           4     1001    275.561     117.9   11.8
 Force                     4     5001    576.720     246.7   24.7
 Wait + Comm. F            4     5001     69.631      29.8    3.0
 PME mesh                  4     5001    752.485     321.8   32.2
 Wait + Comm. X/F          4     5001    416.550     178.2   17.8
 Wait + Recv. PME F        4     5001     91.857      39.3    3.9
 Vsite spread              4    10002      6.456       2.8    0.3
 Write traj.               4        1      0.426       0.2    0.0
 Update                    4     5001     20.577       8.8    0.9
 Constraints               4     5001     41.959      17.9    1.8
 Comm. energies            4     5001      2.967       1.3    0.1
 Rest                      4              18.612       8.0    0.8
-----------------------------------------------------------------------
 Total                     8            2338.108    1000.0  100.0
-----------------------------------------------------------------------

 Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
 Time:          125.000    125.000    100.0
                       2:05
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    190.198     11.789     13.827      1.736
Finished mdrun on node 0 Mon Feb 15 22:10:46 2010
====================================================

8 nodes, cut-off = 1.1 nm, fourier_spacing = 0.135

 Average load imbalance: 0.7 %
 Part of the total run time spent waiting due to load imbalance: 0.4 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
 Average PME mesh/force load: 0.872
 Part of the total run time spent waiting due to PP/PME imbalance: 4.2 %


     R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:            Nodes   Number   G-Cycles   Seconds      %
-----------------------------------------------------------------------
 Domain decomp.            4     1001     30.117      12.9    1.3
 Vsite constr.             4     5001      1.739       0.7    0.1
 Send X to PME             4     5001      9.944       4.3    0.4
 Comm. coord.              4     5001     16.964       7.3    0.7
 Neighbor search           4     1001    269.553     115.8   11.4
 Force                     4     5001    708.179     304.2   29.9
 Wait + Comm. F            4     5001     50.572      21.7    2.1
 PME mesh                  4     5001    671.310     288.3   28.4
 Wait + Comm. X/F          4     5001    511.451     219.7   21.6
 Wait + Recv. PME F        4     5001     10.333       4.4    0.4
 Vsite spread              4    10002      4.222       1.8    0.2
 Write traj.               4        1      0.348       0.1    0.0
 Update                    4     5001     19.821       8.5    0.8
 Constraints               4     5001     39.736      17.1    1.7
 Comm. energies            4     5001      3.181       1.4    0.1
 Rest                      4              18.084       7.8    0.8
-----------------------------------------------------------------------
 Total                     8            2365.556    1016.0  100.0
-----------------------------------------------------------------------

 Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
 Time:          127.000    127.000    100.0
                       2:07
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    244.853     13.855     13.609      1.764
Finished mdrun on node 0 Mon Feb 15 22:24:07 2010
====================================================

16 nodes, cut-off = 1.1 nm, fourier_spacing = 0.135

 Average load imbalance: 7.0 %
 Part of the total run time spent waiting due to load imbalance: 3.5 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 %
 Average PME mesh/force load: 0.872
 Part of the total run time spent waiting due to PP/PME imbalance: 4.2 %


     R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 Computing:            Nodes   Number   G-Cycles   Seconds      %
-----------------------------------------------------------------------
 Domain decomp.            8     1001     55.569      23.8    1.9
 Vsite constr.             8     5001      3.334       1.4    0.1
 Send X to PME             8     5001     24.192      10.4    0.8
 Comm. coord.              8     5001     49.191      21.1    1.7
 Neighbor search           8     1001    300.578     128.8   10.3
 Force                     8     5001    734.497     314.9   25.2
 Wait + Comm. F            8     5001    166.258      71.3    5.7
 PME mesh                  8     5001    809.589     347.1   27.8
 Wait + Comm. X/F          8     5001    640.310     274.5   22.0
 Wait + Recv. PME F        8     5001     12.332       5.3    0.4
 Vsite spread              8    10002     11.558       5.0    0.4
 Write traj.               8        1      0.685       0.3    0.0
 Update                    8     5001     18.789       8.1    0.6
 Constraints               8     5001     47.320      20.3    1.6
 Comm. energies            8     5001     12.562       5.4    0.4
 Rest                      8              24.538      10.5    0.8
-----------------------------------------------------------------------
 Total                    16            2911.302    1248.0  100.0
-----------------------------------------------------------------------

 Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
 Time:           78.000     78.000    100.0
                       1:18
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    398.725     22.539     22.158      1.083
Finished mdrun on node 0 Mon Feb 15 22:54:31 2010
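
The 0.9 nm run still lost 15.7 % to PP/PME imbalance, while at 1.1 nm the PME mesh/force load drops below 1, so the dedicated PME nodes now have less work than the PP nodes. I suppose I could also fix the number of PME nodes by hand instead of only changing the cut-off. A sketch of how I would try that (the MPI launcher, binary name, and file names are just placeholders for whatever the cluster provides, and the -npme value would need scanning):

    # 16-node, 1.1 nm case: PME mesh/force load is 0.872, so try fewer PME nodes
    # than the current 8-out-of-16 split, e.g. 6 (a guess to be scanned)
    mpirun -np 16 mdrun_mpi -npme 6 -s topol.tpr -deffnm bench_16nodes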

On Mon, Feb 15, 2010 at 5:36 PM, Carsten Kutzner <ckutzne@gwdg.de> wrote:
> Hi,
>
> 18 seconds real time is a bit short for such a test. You should run
> at least several minutes. The performance you can expect depends
> a lot on the interconnect you are using. You will definitely need a
> really low-latency interconnect if you have less than 1000 atoms
> per core.
>
> Carsten
>
>
> On Feb 15, 2010, at 3:13 PM, Deniz KARASU wrote:
>
> > Hi All,
> >
> > I'm trying to run the d.lzm GROMACS benchmark on a 64-node machine, but the dynamic load balancing performance is very low.
> >
> > Any suggestion will be of great help.
> >
> > Thanks.
> >
> > Deniz KARASU
> >