[gmx-users] GROMACS API?

Mark Abraham Mark.Abraham at anu.edu.au
Wed Feb 7 01:30:31 CET 2007


Ben FrantzDale wrote:
> On 2/6/07, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
> 
>     Ben FrantzDale wrote:
>      > I am interested in using GROMACS as a library. I get the
>      > impression from its use in Folding@Home that this is possible,
>      > but I don't see any documentation on the subject. Is there a
>      > C/C++ API for GROMACS so that I could initialize GROMACS with
>      > an MPI communicator then send it jobs to process? If so, did I
>      > just not find the documentation?
> 
>     Well, from an engineering point of view, of course libgmx can be
>     used "as a library". You just have to work out how to call it
>     intelligently. The only available documentation for this is the
>     source code itself: you read it to see what the data types are
>     and to understand the algorithms. You would have to write a bunch
>     of stuff to be able to "initialize GROMACS with an MPI
>     communicator then send it jobs", and you'd have to ask yourself
>     why you'd bother doing that when you could just start a new mdrun
>     process each time you have something new to do...
> 
> 
> Thanks for the response. Any suggestions as to which source file(s) to 
> start with to do this?

Unfortunately the organization of src into subdirectories is a bit 
haphazard... I can't make much sense of why things end up in mdlib, 
gmxlib or kernel, for example. gmxlib/nonbonded is rational, though :-)

Thus I can only suggest what I did... start with main() in 
kernel/mdrun.c and construct your own picture of what is going on.

> In response to the last question, I am doing multiscale modeling, 
> linking atomistic to continuum. In that context, the cost of 
> initializing a parallel run, especially using files, can become 
> prohibitive. For example, I want to ask for the forces on all atoms, 
> then programmatically move the atoms a little, then ask for the forces 
> again.

Well, that's fair enough... but make sure you get it working in serial, 
without the complexities of MPI and the parallel code, before you 
attempt the parallel version!
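
To make the calling pattern concrete, what you are after is something 
like the sketch below: set everything up once, then alternate between 
asking for the forces and nudging the positions, with no file I/O or 
process start-up inside the loop. The gmx_* functions are again 
made-up stand-ins for whatever you end up exposing from libgmx, not an 
existing API, and the sketch is deliberately serial:

  /* Sketch of the "initialize once, then re-evaluate forces as the
   * continuum code moves atoms" loop.  All gmx_* functions are
   * hypothetical stand-ins; only the calling pattern matters. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NATOMS 4

  /* stand-in for whatever state libgmx needs to keep between calls */
  typedef struct { double x[NATOMS][3]; double f[NATOMS][3]; } gmx_state_t;

  static gmx_state_t *gmx_setup(const char *tpr_file)
  {
      /* real code: read the run input and build topology/forcerec ONCE */
      printf("setting up from %s\n", tpr_file);
      return calloc(1, sizeof(gmx_state_t));
  }

  static void gmx_calc_forces(gmx_state_t *s)
  {
      /* real code: one pass through the bonded/non-bonded force
       * routines; here, dummy harmonic forces so the sketch runs */
      int i, d;
      for (i = 0; i < NATOMS; i++)
          for (d = 0; d < 3; d++)
              s->f[i][d] = -s->x[i][d];
  }

  int main(void)
  {
      gmx_state_t *s = gmx_setup("topol.tpr");
      int step, i, d;

      for (step = 0; step < 10; step++)
      {
          gmx_calc_forces(s);               /* ask GROMACS for the forces */
          for (i = 0; i < NATOMS; i++)      /* continuum code moves the   */
              for (d = 0; d < 3; d++)       /* atoms a little...          */
                  s->x[i][d] += 0.01 * s->f[i][d];
      }                                     /* ...then ask again          */

      free(s);
      return 0;
  }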

Mark


