                  :-) GROMACS - gmx mdrun, VERSION 5.2-dev (-:

GROMACS is written by:
Emile Apol, Rossen Apostolov, Herman J.C. Berendsen, Par Bjelkmar,
Aldert van Buuren, Rudi van Drunen, Anton Feenstra, Gerrit Groenhof,
Christoph Junghans, Anca Hamuraru, Vincent Hindriksen, Dimitrios Karkoulis,
Peter Kasson, Jiri Kraus, Carsten Kutzner, Per Larsson, Justin A. Lemkul,
Magnus Lundborg, Pieter Meulenhoff, Erik Marklund, Teemu Murtola,
Szilard Pall, Sander Pronk, Roland Schulz, Alexey Shvetsov, Michael Shirts,
Alfons Sijbers, Peter Tieleman, Teemu Virolainen, Christian Wennberg,
Maarten Wolf,
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
Check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.2-dev
Executable:   /home/cc/vfaculty/puneets.vfaculty/Gromacs-dev/gromacs_2/build/bin/gmx_mpi
Data prefix:  /home/cc/vfaculty/puneets.vfaculty/Gromacs-dev/gromacs_2 (source tree)
Command line:
  gmx_mpi mdrun -ntomp 2


Back Off! I just backed up md.log to ./#md.log.4#

Number of logical cores detected (24) does not match the number reported by
OpenMP (1). Consider setting the launch configuration manually!
Running on 1 node with total 24 cores, 24 logical cores, 2 compatible GPUs
Hardware detected on host gpulogin01.hpc.iitd.ac.in (the node of MPI rank 0):
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
    SIMD instructions most likely to fit this hardware: AVX2_256
    SIMD instructions selected at GROMACS compile time: AVX2_256
  GPU info:
    Number of GPUs detected: 2
    #0: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: yes, stat: compatible
    #1: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: yes, stat: compatible

Compiled SIMD instructions: AVX2_256, GROMACS could use AVX2_256 on this
machine, which is better.

Reading file topol.tpr, VERSION 5.2-dev-20151216-b6c32b0-dirty (single precision)

The number of OpenMP threads was set by environment variable OMP_NUM_THREADS
to 2 (and the command-line setting agreed with that)

Using 1 MPI process
Using 2 OpenMP threads

2 compatible GPUs are present, with IDs 0,1
1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0

NOTE: potentially sub-optimal launch configuration, gmx mdrun started with
      fewer PP MPI processes per node than GPUs available.
      Each PP MPI process can use only one GPU, 1 GPU per node will be used.

NOTE: GROMACS was configured without NVML support hence it can not exploit
      application clocks of the detected Tesla K40m GPU to improve
      performance. Recompile with the NVML library (compatible with the
      driver used) or set application clocks manually.

Non-default thread affinity set probably by the OpenMP library, disabling
internal thread affinity

Back Off! I just backed up traj.trr to ./#traj.trr.3#

Back Off! I just backed up ener.edr to ./#ener.edr.3#

starting mdrun 'p11-fsi'
1000 steps,      1.0 ps.
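The sub-optimal-launch NOTE above arises because only one PP MPI rank was started on a node with two GPUs, and each PP rank can drive only one GPU. A possible relaunch that uses both Tesla K40m cards is sketched below; it assumes an MPI launcher (`mpirun` here) is available, and splits the 24 cores as 12 OpenMP threads per rank. The `-ntomp` and `-gpu_id` options are standard `gmx mdrun` flags in this GROMACS version; the exact launcher name and core split are assumptions about this cluster, not taken from the log.

```shell
# Hypothetical relaunch (not from the log): start 2 PP MPI ranks, one per GPU.
# -np 2      : two MPI ranks on the node (one per K40m)
# -ntomp 12  : 12 OpenMP threads per rank, covering all 24 cores
# -gpu_id 01 : map rank 0 -> GPU 0 and rank 1 -> GPU 1
mpirun -np 2 gmx_mpi mdrun -ntomp 12 -gpu_id 01
```

Since the log shows OMP_NUM_THREADS was already set to 2, it would also need to be updated (or unset) so it does not conflict with the `-ntomp` value passed on the command line.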