<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Aug 31, 2010, at 04:50 , xuji wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><span class="Apple-style-span" style="border-collapse: separate; font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="margin-top: 10px; margin-right: 10px; margin-bottom: 10px; margin-left: 10px; font-family: verdana; font-size: 10pt; "><div><font color="#000080" size="2" face="Verdana"><div>Hi all:</div><div> </div><div>It is very good news that Gromacs-4.5 is coming soon!</div><div> </div><div></div><div>I have two questions that are not entirely clear to me.</div><div></div><div>First, Gromacs-4.5 can use a GPU to accelerate simulations. But in </div><div>the "Limitations" it is said that "Multiple GPU cards are not supported".</div><div>So can't I accelerate the simulations with a parallel mdrun? </div></font></div></div></span></blockquote><div>GPUs are inherently parallel, so a single GPU-accelerated mdrun is 'parallel' by its very nature. 
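To make that concrete: on a single node you launch one GPU-accelerated mdrun process, and the parallelism happens on the card itself. A minimal sketch, assuming the OpenMM-based GPU build of Gromacs 4.5 (the binary name mdrun-gpu and the -device string follow the 4.5 GPU notes; check mdrun-gpu -h on your own install for the exact flags):

```shell
# Hypothetical single-GPU run with the OpenMM-backed Gromacs 4.5 binary.
# The -device string (assumed syntax) selects the CUDA platform and card 0.
mdrun-gpu -device "OpenMM:platform=Cuda,deviceid=0" -s topol.tpr -deffnm gpurun
```

Only one card is used per run; there is no flag that spreads a single run over several GPUs.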
This limitation reflects a constraint of the GPU hardware: multiple GPU cards don't share memory, so supporting them would require an entirely new set of algorithms beyond those already developed for single-GPU support.</div><blockquote type="cite"><span class="Apple-style-span" style="border-collapse: separate; font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="margin-top: 10px; margin-right: 10px; margin-bottom: 10px; margin-left: 10px; font-family: verdana; font-size: 10pt; "><div><font color="#000080" size="2" face="Verdana"><div> </div><div></div><div>Second, among the Gromacs-4.5 new features it is said:</div><div>"Running on multi-core nodes now automatically uses thread-based parallelization".</div><div>If there are two nodes with 8 CPU cores each, and I use all of the CPU cores, </div><div>are there 8 threads rather than 8 MPI processes on each node? </div><div>If so, do the 8 threads on one node share one large memory? </div><div>And what is the parallelization scheme within a node and between nodes?</div></font></div></div></span></blockquote><div><br></div><div>Right now, threads and inter-node MPI communication are mutually exclusive: either you run a single-node parallel run (with 8 threads, in your case), or you run across multiple nodes with MPI. In the latter case, you specify the total number of cores as the number of MPI processes.</div></div><br></body></html>