[gmx-users] Hyper-threading Gromacs 5.0.1

Johnny Lu johnny.lu128 at gmail.com
Thu Sep 11 15:45:54 CEST 2014


The GROMACS wiki also says that mixing MPI and OpenMP tends to hurt
performance on small machines.
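A quick way to cross-check the physical-core count from the commands quoted below (a sketch, assuming a Linux /proc/cpuinfo that lists "physical id" and "core id" fields; with hyper-threading enabled, "top" will show twice this many logical CPUs):

```shell
# Count unique (physical id, core id) pairs in /proc/cpuinfo.
# Each pair is one physical core; hyper-thread siblings share a pair.
awk -F: '/physical id/ {sock=$2} /core id/ {print sock ":" $2}' /proc/cpuinfo \
  | sort -u | wc -l
```

Where util-linux is installed, `lscpu` reports the same information directly via its "Socket(s)" and "Core(s) per socket" lines.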

On Thu, Sep 11, 2014 at 9:44 AM, Johnny Lu <johnny.lu128 at gmail.com> wrote:

> Ah. Thanks a lot.
> As suggested by (
> https://www.ibm.com/developerworks/community/blogs/brian/entry/linux_show_the_number_of_cpu_cores_on_your_system17?lang=en),
>
> $ cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
> 2
> $ cat /proc/cpuinfo | egrep "core id|physical id" | tr -d "\n" | sed s/physical/\\nphysical/g | grep -v ^$ | sort | uniq | wc -l
> 12
>
> There are 12 real cores.
> Typing "top" and then pressing 1 sometimes shows double the number of real
> cores, but sometimes does not (tested on different machines).
>
> How does one run "an MPI rank per core"? Like this: "OMP_NUM_THREADS=12
> mdrun" on a 12-core machine?
>
> I tried OpenMP threads instead of MPI threads because the GROMACS wiki says
> OpenMP threads are faster than MPI-based parallelization.
>
> from the gromacs wiki (
> http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Multi-level_parallelization.3a_MPI_and_OpenMP
> ):
>
> In GROMACS 4.6 compiled with thread-MPI, OpenMP-only parallelization is
> the default with Verlet scheme when using up to 8 cores on AMD platforms
> and up to 12 and 16 cores on Intel Nehalem and Sandy Bridge, respectively.
> Note that even running across two CPUs (in different sockets) on Intel
> platforms, OpenMP multithreading is, in the majority of cases,
> significantly faster than MPI-based parallelization.
>
> ...
>
> Assuming that there are N cores available, the following commands are
> equivalent:
>
> mdrun -ntomp N -ntmpi 1
> OMP_NUM_THREADS=N mdrun
> mdrun #assuming that N <= 8 on AMD or N <= 12/16 on Intel Nehalem/Sandy Bridge
>
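For reference, with a thread-MPI build the two layouts discussed above can also be requested explicitly (a sketch for a 12-core node, using only the -ntmpi and -ntomp flags from the wiki excerpt quoted above):

```shell
# One thread-MPI rank per core, one OpenMP thread each:
mdrun -ntmpi 12 -ntomp 1

# One rank, OpenMP across all 12 cores (the default described above):
mdrun -ntmpi 1 -ntomp 12
# equivalent:
OMP_NUM_THREADS=12 mdrun
```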


More information about the gromacs.org_gmx-users mailing list