<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Verdana
}
--></style>
</head>
<body class='hmmessage'>
<br><br>> From: zhao0139@ntu.edu.sg<br>
> To: gmx-users@gromacs.org<br>
> Date: Tue, 6 Apr 2010 19:39:30 +0800<br>
> Subject: [gmx-users] Re: loab imbalance<br>
> <br>
> <br>
> > <br>
> > On 6/04/2010 5:39 PM, lina wrote:<br>
> > > Hi everyone,<br>
> > ><br>
> > > Here is the result of an mdrun performed on 16 CPUs. I am not<br>
> > > clear about it. Was it caused by using MPI, or by something else?<br>
> > ><br>
> > > Writing final coordinates.<br>
> > ><br>
> > > Average load imbalance: 1500.0 %<br>
> > > Part of the total run time spent waiting due to load imbalance: 187.5 %<br>
> > > Steps where the load balancing was limited by -rdd, -rcon and/or -dds:<br>
> > > X 0 % Y 0 %<br>
> > ><br>
> > > NOTE: 187.5 % performance was lost due to load imbalance<br>
> > > in the domain decomposition.<br>
> > <br>
> > You ran an inefficient but otherwise valid computation. Check out the<br>
> > manual section on domain decomposition to learn why it was inefficient,<br>
> > and whether you can do better.<br>
> > <br>
> > Mark<br>
> <br>
> I searched for the keyword "decomposition" in the GROMACS manual and<br>
> found no match. Are you sure about that? Thanks anyway, but could you<br>
> make the advice more problem-solving-oriented, so that I can understand<br>
> it easily?<br>
> <br>
> Thanks and regards,<br>
> <br>
> lina<br><br>
This looks strange.<br>
You have 1 core doing something and 15 cores doing nothing.<br>
Do you only have one small molecule?<br>
How many steps did this simulation run?<br><br>
Berk</body>
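Berk's reading of the log can be checked with quick arithmetic. A minimal sketch, assuming GROMACS reports average load imbalance as (max load − mean load) / mean load × 100 over the ranks (the rank count and per-rank loads below are illustrative, not taken from lina's run):

```python
# Reconstruct the reported 1500.0 % figure under the assumption that,
# of 16 ranks, one does all the force work and 15 sit idle.
n_ranks = 16
loads = [1.0] + [0.0] * (n_ranks - 1)   # one busy rank, 15 idle ranks
mean_load = sum(loads) / n_ranks        # each rank's fair share of work
imbalance = (max(loads) - mean_load) / mean_load * 100
print(imbalance)  # -> 1500.0
```

The result matches the "Average load imbalance: 1500.0 %" line in the log, which is why the reply concludes that a single domain is doing all the work.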
</html>