<div xmlns="http://www.w3.org/1999/xhtml">Hi,</div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml">Yes, I think so, because it seems to be working with NAMD-CUDA right now:</div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml"><span style="font-family:courier new,monospace;">Wed Jan 30 10:39:34 2019       <br />+-----------------------------------------------------------------------------+<br />| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |<br />|-------------------------------+----------------------+----------------------+<br />| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |<br />| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |<br />|===============================+======================+======================|<br />|   0  TITAN Xp            Off  | 00000000:65:00.0  On |                  N/A |<br />| 53%   83C    P2   175W / 250W |   2411MiB / 12194MiB |     47%      Default |<br />+-------------------------------+----------------------+----------------------+<br />                                                                               <br />+-----------------------------------------------------------------------------+<br />| Processes:                                                       GPU Memory |<br />|  GPU       PID   Type   Process name                             Usage      |<br />|=============================================================================|<br />|    0      1258      G   /usr/lib/xorg/Xorg                            40MiB |<br />|    0      1378      G   /usr/bin/gnome-shell                          15MiB |<br />|    0      7315      G   /usr/lib/xorg/Xorg                           403MiB |<br />|    0      7416      G   /usr/bin/gnome-shell                         284MiB |<br />|    0     12510      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   
235MiB |<br />|    0     12651      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |<br />|    0     12696      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |<br />|    0     12737      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |<br />|    0     12810      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |<br />|    0     12868      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |<br />|    0     20688      C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   251MiB |<br />+-----------------------------------------------------------------------------+</span><br /> </div><div xmlns="http://www.w3.org/1999/xhtml">After the unsuccessful GROMACS run, I ran NAMD.</div><div xmlns="http://www.w3.org/1999/xhtml"><br />Best,</div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml">Vlad</div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml">30.01.2019, 10:59, "Mark Abraham" &lt;mark.j.abraham@gmail.com&gt;:</div><blockquote xmlns="http://www.w3.org/1999/xhtml" type="cite"><p>Hi,<br /><br />Does nvidia-smi report that your GPUs are available to use?<br /><br />Mark<br /><br />On Wed, 30 Jan 2019 at 07:37 Vladimir Bogdanov &lt;<a rel="noopener noreferrer" href="mailto:bogdanov-vladimir@yandex.ru">bogdanov-vladimir@yandex.ru</a>&gt;<br />wrote:<br /> </p><blockquote> Hey everyone!<br /><br /> I need help, please. When I try to run MD on the GPU, I get the following error:<br /><br /> Command line:<br /><br /> gmx_mpi mdrun -deffnm md -nb auto<br /><br /><br /><br /> Back Off! I just backed up md.log to ./#md.log.4#<br /><br /> NOTE: Detection of GPUs failed. 
The API reported:<br /><br /> GROMACS cannot run tasks on a GPU.<br /><br /> Reading file md.tpr, VERSION 2018.2 (single precision)<br /><br /> Changing nstlist from 20 to 80, rlist from 1.224 to 1.32<br /><br /><br /><br /> Using 1 MPI process<br /><br /> Using 16 OpenMP threads<br /><br /><br /><br /> Back Off! I just backed up md.xtc to ./#md.xtc.2#<br /><br /><br /><br /> Back Off! I just backed up md.trr to ./#md.trr.2#<br /><br /><br /><br /> Back Off! I just backed up md.edr to ./#md.edr.2#<br /><br /> starting mdrun 'Protein in water'<br /><br /> <span>30000000</span> steps, 60000.0 ps.<br /><br /> I built GROMACS with MPI=on and CUDA=on and the compilation process looked<br /> good. 
I ran gromacs 2018.2 with CUDA 5 months ago and it worked, but now it<br /> doesn't work.<br /><br /> Information from *.log file:<br /><br /> GROMACS version: 2018.2<br /><br /> Precision: single<br /><br /> Memory model: 64 bit<br /><br /> MPI library: MPI<br /><br /> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)<br /><br /> GPU support: CUDA<br /><br /> SIMD instructions: AVX_512<br /><br /> FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512<br /><br /> RDTSCP usage: enabled<br /><br /> TNG support: enabled<br /><br /> Hwloc support: disabled<br /><br /> Tracing support: disabled<br /><br /> Built on: <span>2018-06-24 02</span>:55:16<br /><br /> Built by: vlad@vlad [CMAKE]<br /><br /> Build OS/arch: Linux 4.13.0-45-generic x86_64<br /><br /> Build CPU vendor: Intel<br /><br /> Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz<br /><br /> Build CPU family: 6 Model: 85 Stepping: 4<br /><br /> Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl<br /> clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid<br /> pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2<br /> ssse3 tdt x2apic<br /><br /> C compiler: /usr/bin/cc GNU 5.4.0<br /><br /> C compiler flags: -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops<br /> -fexcess-precision=fast<br /><br /> C++ compiler: /usr/bin/c++ GNU 5.4.0<br /><br /> C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG<br /> -funroll-all-loops -fexcess-precision=fast<br /><br /> CUDA compiler: /usr/local/cuda-9.2/bin/nvcc nvcc: NVIDIA (R) Cuda compiler<br /> driver;Copyright (c) <span>2005-2018</span> NVIDIA Corporation;Built on<br /> Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88<br /><br /> CUDA compiler<br /> 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;<br /> ;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;<br /><br /> CUDA driver: 9.10<br /><br /> CUDA runtime: 32.64<br /><br /><br /><br /> NOTE: Detection of GPUs failed. The API reported:<br /><br /> GROMACS cannot run tasks on a GPU.<br /><br /><br /> Any idea what I am doing wrong?<br /><br /><br /> Best,<br /> Vlad<br /><br /> --<br /> Best regards, Vladimir A. Bogdanov<br /><br /> --<br /> Gromacs Users mailing list<br /><br /> * Please search the archive at<br /> <a rel="noopener noreferrer" href="http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List">http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List</a> before<br /> posting!<br /><br /> * Can't post? Read <a rel="noopener noreferrer" href="http://www.gromacs.org/Support/Mailing_Lists">http://www.gromacs.org/Support/Mailing_Lists</a><br /><br /> * For (un)subscribe requests visit<br /> <a rel="noopener noreferrer" href="https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users">https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users</a> or<br /> send a mail to <a rel="noopener noreferrer" href="mailto:gmx-users-request@gromacs.org">gmx-users-request@gromacs.org</a>.</blockquote></blockquote><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml"> </div><div xmlns="http://www.w3.org/1999/xhtml">-- </div><div xmlns="http://www.w3.org/1999/xhtml">Best regards, Vladimir A. Bogdanov</div><div xmlns="http://www.w3.org/1999/xhtml"> </div>
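A note on the versions quoted in this thread: the log's pair "CUDA driver: 9.10" / "CUDA runtime: 32.64" is not a plausible combination, which by itself suggests the CUDA runtime could not talk to the driver. And nvidia-smi above reports driver 390.77, while GROMACS was built with the CUDA 9.2 toolkit, which (as far as I recall from NVIDIA's release notes) needs a 396-series or newer Linux driver. A minimal sketch of that compatibility check follows; the minimum-driver table is an assumption reconstructed from memory of the CUDA release notes, so verify it against NVIDIA's documentation for your toolkit:

```python
# Sketch: decide whether an installed NVIDIA driver is new enough for a given
# CUDA toolkit. MIN_DRIVER is an ASSUMED, abbreviated table (Linux x86_64)
# from memory of NVIDIA's CUDA release notes -- check the official notes.
MIN_DRIVER = {
    "9.0": 384.81,
    "9.1": 390.46,
    "9.2": 396.26,
    "10.0": 410.48,
}


def driver_supports(toolkit: str, driver_version: float) -> bool:
    """Return True if driver_version meets the toolkit's minimum driver."""
    return driver_version >= MIN_DRIVER[toolkit]


if __name__ == "__main__":
    # Values from the thread: driver 390.77 (nvidia-smi), toolkit 9.2 (nvcc).
    print(driver_supports("9.2", 390.77))  # too old for a CUDA 9.2 build
    print(driver_supports("9.1", 390.77))  # would suffice for a CUDA 9.1 build
```

If the numbers line up that way on the affected machine, either upgrading the NVIDIA driver or rebuilding GROMACS against an older toolkit that the 390-series driver supports should let GPU detection succeed again.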