<div class="gmail_quote">On Fri, Jul 27, 2012 at 4:26 AM, Roland Schulz <span dir="ltr"><<a href="mailto:roland@utk.edu" target="_blank">roland@utk.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On Thu, Jul 26, 2012 at 10:12 PM, Szilárd Páll <<a href="mailto:szilard.pall@cbr.su.se">szilard.pall@cbr.su.se</a>> wrote:<br>
> On Fri, Jul 27, 2012 at 12:07 AM, Roland Schulz <<a href="mailto:roland@utk.edu">roland@utk.edu</a>> wrote:<br>
>><br>
</div><div><div class="h5">>> On Thu, Jul 26, 2012 at 8:09 AM, Jochen Hub <<a href="mailto:jhub@gwdg.de">jhub@gwdg.de</a>> wrote:<br>
>> > Hi,<br>
>> ><br>
>> > I am trying to compile and run the git master on my Macbook air (OS X<br>
>> > Lion). Without success. If I compile with a newer gcc (4.5 or newer,<br>
>> > installed from Macports), I get errors like (does this have to do with<br>
>> > AVX?)<br>
>> ><br>
>> > [ 1%] Building C object<br>
>> > src/gromacs/CMakeFiles/libgromacs.dir/gmxpreprocess/add_par.c.o<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:66:no such<br>
>> > instruction: `vmovups 0(%r13), %ymm0'<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:69:no such<br>
>> > instruction: `vmovups %ymm0, 24(%rdi)'<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:79:no such<br>
>> > instruction: `vmovss 0(%r13), %xmm1'<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:83:no such<br>
>> > instruction: `vmovss %xmm1, 24(%rdi,%r9,4)'<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:99:no such<br>
>> > instruction: `vmovss 0(%r13), %xmm2'<br>
>> > /var/folders/ys/rh9lzqpj7854h34d2__mznph0000gn/T//ccPxJmjg.s:102:no such<br>
>> > instruction: `vmovss %xmm2, 24(%rdi,%r9,4)'<br>
>><br>
>> What is GMX_ACCELERATION set to? Make sure it isn't set to AVX, or,<br>
>> if it is, that your cflags contain -mavx.<br>
>><br>
>> > On a gcc 4.4 or earlier, compiling works fine, but mdruns stops with a<br>
>> > segfault. A backtrace in gdb gives the following. Seems like something<br>
>> > goes wrong in FFTW (which was compiled with the same gcc and with<br>
>> > --enable-threads --enable-sse --enable-sse2).<br>
>> ><br>
>> > Program received signal EXC_BAD_ACCESS, Could not access memory.<br>
>> > Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000048<br>
>> > [Switching to process 44300 thread 0x1b03]<br>
>> > 0x0000000100050ebd in gomp_resolve_num_threads ()<br>
>> > (gdb) bt<br>
>> > #0 0x0000000100050ebd in gomp_resolve_num_threads ()<br>
>> > #1 0x0000000100050fc3 in GOMP_parallel_start ()<br>
>> > #2 0x00000001004c0bc2 in fft5d_plan_3d ()<br>
>> > #3 0x0000000100434a52 in gmx_parallel_3dfft_init ()<br>
>> > #4 0x000000010046b6fc in gmx_pme_init ()<br>
>> > #5 0x0000000100026ab3 in mdrunner (nthreads_requested=4, fplog=0x0,<br>
>> > cr=0x102100b40, nfile=36, fnm=0x103808200, oenv=0x101000c10, bVerbose=0,<br>
>> > bCompact=1, nstglobalcomm=-1, ddxyz=0x1013c0e04, dd_node_order=1, rdd=0,<br>
>> > rconstr=0, dddlb_opt=0x10002e26a "auto", dlb_scale=0.800000012,<br>
>> > ddcsx=0x0, ddcsy=0x0, ddcsz=0x0, nstepout=100, resetstep=-1,<br>
>> > nmultisim=0, repl_ex_nst=0, repl_ex_nex=0, repl_ex_seed=-1, pforce=-1,<br>
>> > cpt_period=15, max_hours=-1, deviceOptions=0x10002e276 "", Flags=7168)<br>
>> > at /Users/jhub/src/gmx/gromacs/src/programs/mdrun/runner.c:844<br>
>> > #6 0x0000000100024f2d in mdrunner_start_fn (arg=0x101005d60) at<br>
>> > /Users/jhub/src/gmx/gromacs/src/programs/mdrun/runner.c:173<br>
>> > #7 0x0000000100242bfb in tMPI_Thread_starter ()<br>
>> > #8 0x00007fff9785f8bf in _pthread_start ()<br>
>> > #9 0x00007fff97862b75 in thread_start ()<br>
>><br>
>> Did you try a version which included the bugfix for issue 900<br>
>> (002c4985c1d839810816b5c1ba347634b7d7cabb)?<br>
>> What exact compiler did you try? Is it LLVM-gcc or gcc with the gcc<br>
>> backend (not llvm)? Also, so far we have only seen OpenMP problems with<br>
>> llvm-gcc 4.2, not 4.4. So more details would be useful to know.<br>
>><br>
>> > Can anyone give me a hint how to fix this? Or is the master so<br>
>> > experimental that it is not intended to be used at all right now?<br>
>> No, it should work pretty well, and the test suite is run before any<br>
>> commit. And the Jenkins configuration does include gcc 4.2 and 4.6 on<br>
><br>
><br>
</div></div><div class="im">> It does, but it uses only the auto-detected GMX_ACCELERATION which gets set<br>
> to SSE4.1 (as the CPUs in the machine don't support AVX).<br>
><br>
> This does suggest to me that we might want to have more thorough (probably<br>
> nightly) builds with virtually all-vs-all important settings (compilers,<br>
> platforms, mandatory libraries, etc.).<br>
<br>
</div>Anywhere close to all-vs-all will never be possible unless we<br>
drastically reduce the number of compile-time options.<br>
10 compilers (including different versions) * 3 parallelization * gsl<br>
on/off * xml on/off * 3 different FFT * double on/off * 5<br>
accelerations * openmp on/off * 5 os (including version)</blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
is over 30,000 possible configurations, and this doesn't yet include<br>
(GPU, different library versions, more exotic OSes, older OS<br>
versions, cmake versions, ...).<br>
<br>
Of course, that doesn't mean it wouldn't be useful to have more<br>
different options (or better combinations), and nightly tests<br>
might help with that.</blockquote><div><br></div><div><div>Note the *important* adjective above ;) </div><div><br></div><div>XML and GSL are irrelevant, and I think we can be a bit smart and prune some configs that we can be fairly confident are more or less equivalent. </div>
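One textbook way to be "smart" about such pruning is pairwise (combinatorial) coverage: keep only enough configs that every pair of option values still appears together at least once. A toy sketch below, with made-up dimension names and values (not an actual proposal for our matrix), using a greedy cover:

```python
from itertools import combinations, product

# Toy build matrix; the dimension names/values are illustrative only.
dims = {
    "compiler": ["gcc-4.4", "gcc-4.6", "icc", "clang"],
    "fft": ["fftw3", "mkl", "fftpack"],
    "precision": ["single", "double"],
    "openmp": ["on", "off"],
}
keys = list(dims)
all_cfgs = [dict(zip(keys, vals)) for vals in product(*dims.values())]

def pairs(cfg):
    """All pairs of (option, value) settings that this config covers."""
    return {frozenset([(a, cfg[a]), (b, cfg[b])])
            for a, b in combinations(keys, 2)}

# Greedy set cover: repeatedly pick the config covering the most
# still-uncovered pairs (not optimal, but good enough for a sketch).
uncovered = set().union(*(pairs(c) for c in all_cfgs))
chosen = []
while uncovered:
    best = max(all_cfgs, key=lambda c: len(pairs(c) & uncovered))
    chosen.append(best)
    uncovered -= pairs(best)

print(len(all_cfgs), "full configs,", len(chosen), "after pairwise pruning")
```

Even this crude greedy pass shrinks the full product substantially while still exercising every compiler+FFT, compiler+precision, etc. combination at least once.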
<div><br></div><div>Not sure what you mean by "3 parallelization"; if it's MPI, tMPI, and OpenMP, then it's two or four (depending on whether we want to test both MPI/tMPI + OpenMP on/off). The number of (CPU) acceleration types can be reduced slightly, as I wouldn't consider anything but SSE/AVX that important. Also, I'd consider only three OSes, and on some of them not all compilers are available.</div>
<div><br></div><div>So we're left with ~10x2x3x2x4 = 480 configs per OS (and fewer on Windows). Assuming each build takes ~3 min, which is the case on 2-3 cores (note that I mean the build only!), all builds would take 24h per OS. Now, we'd have to dedicate a few cores on a Mac to the task (as it can't easily be virtualized), but the rest, Linux + Windows, we can easily run on half of a build server (8 out of 16 cores) in 24h or so. So I don't think it's really infeasible, even if we ran such thorough tests for two branches. </div>
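The arithmetic above can be checked with a quick sketch (the dimension sizes are the ones assumed in this thread, and the ~3 min/build figure is my estimate above):

```python
from math import prod

# Pruned matrix assumed above: 10 compilers x double on/off x 3 FFT libs
# x OpenMP on/off x 4 accelerations (dimension labels from this thread).
pruned = [10, 2, 3, 2, 4]
configs = prod(pruned)
hours = configs * 3 / 60        # ~3 min per build on 2-3 cores
print(configs, hours)           # 480 configs -> 24.0 h of building per OS

# Roland's full matrix for comparison: compilers x parallelization x gsl
# x xml x FFT x double x accelerations x openmp x OS.
full = prod([10, 3, 2, 2, 3, 2, 5, 2, 5])
print(full)                     # 36000, i.e. "over 30,000"
```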
</div><div><br></div><div>Even if we multiply the above 24h/4 cores/OS by 3-4, it's still a rather manageable number, especially since, AFAIK, hardware resources are not the biggest issue.</div><div><br></div><div>Of course, if we want to be strict about all-vs-all, we'll quickly lose against the sheer number of combinations. The above exercise was only meant to show that even a *very* extensive weekly build can be feasible. Also note that, IMO, we should separate builds and tests for everything but the gerrit auto-triggered stuff, which should be kept at the necessary minimum.</div>
<div><br></div><div>Cheers,</div><div>Sz.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="HOEnZb"><font color="#888888"><br>
Roland<br>
<br>
<br>
<br>
><br>
> --<br>
> Szilárd<br>
</font></span><div class="HOEnZb"><div class="h5">><br>
>><br>
>> Mac. So it is somewhat surprising that you have 2 independent<br>
>> problems.<br>
>><br>
>> Roland<br>
>><br>
>> ><br>
>> > Many thanks,<br>
>> > Jochen<br>
>> ><br>
>> ><br>
>> > --<br>
>> > ---------------------------------------------------<br>
>> > Dr. Jochen Hub<br>
>> > Computational Molecular Biophysics Group<br>
>> > Institute for Microbiology and Genetics<br>
>> > Georg-August-University of Göttingen<br>
>> > Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.<br>
>> > Phone: <a href="tel:%2B49-551-39-14189" value="+495513914189">+49-551-39-14189</a><br>
>> > <a href="http://cmb.bio.uni-goettingen.de/" target="_blank">http://cmb.bio.uni-goettingen.de/</a><br>
>> > ---------------------------------------------------<br>
>> > --<br>
>> > gmx-developers mailing list<br>
>> > <a href="mailto:gmx-developers@gromacs.org">gmx-developers@gromacs.org</a><br>
>> > <a href="http://lists.gromacs.org/mailman/listinfo/gmx-developers" target="_blank">http://lists.gromacs.org/mailman/listinfo/gmx-developers</a><br>
>> > Please don't post (un)subscribe requests to the list. Use the<br>
>> > www interface or send it to <a href="mailto:gmx-developers-request@gromacs.org">gmx-developers-request@gromacs.org</a>.<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>><br>
>><br>
>><br>
>> --<br>
>> ORNL/UT Center for Molecular Biophysics <a href="http://cmb.ornl.gov" target="_blank">cmb.ornl.gov</a><br>
>> 865-241-1537, ORNL PO BOX 2008 MS6309<br>
><br>
><br>
<br>
<br>
<br>
--<br>
ORNL/UT Center for Molecular Biophysics <a href="http://cmb.ornl.gov" target="_blank">cmb.ornl.gov</a><br>
865-241-1537, ORNL PO BOX 2008 MS6309<br>
</div></div></blockquote></div><br>