[gmx-users] large scaling required to achieve optimal mesh load

Jennifer Williams Jennifer.Williams at ed.ac.uk
Thu Sep 10 16:35:49 CEST 2009


Hello users,

I am simulating a unit cell with dimensions 70x70x38 Å (about
7.0 x 7.0 x 3.8 nm) using PME. I started out with cut-offs of
rvdw = rcoulomb = rlist = 0.9 nm and a spacing for the PME/PPPM FFT
grid (fourierspacing) of 0.12 nm, with optimize_fft = yes.

I get the following output when I generate the .tpr file with grompp:

Using a fourier grid of 60x60x33, spacing 0.117 0.117 0.117
Estimate for the relative computational load of the PME mesh part: 0.97

NOTE 1 [file SMO_CO2.top, line 2159]:
   The optimal PME mesh load for parallel simulations is below 0.5
   and for highly parallel simulations between 0.25 and 0.33,
   for higher performance, increase the cut-off and the PME grid spacing
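
For what it's worth, the grid dimensions above seem to follow
directly from the box and the requested spacing. A minimal Python
sketch, assuming grompp takes at least box_length/fourierspacing
points per axis and then rounds up to an FFT-friendly size (the box
lengths are back-calculated from the reported grid, not quoted from
my files):

import math

# At least box_length / fourierspacing grid points along each axis;
# grompp additionally rounds up to FFT-friendly sizes (e.g. 59 -> 60).
def fourier_grid(box_nm, spacing_nm):
    return [math.ceil(length / spacing_nm) for length in box_nm]

box = (7.02, 7.02, 3.86)        # nm; back-calculated as 60*0.117, 33*0.117
print(fourier_grid(box, 0.12))  # [59, 59, 33]; grompp reports 60x60x33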

I did a number of test runs, increasing the cut-offs and the grid
spacing together by the same factor. However, I had to nearly double
the cut-off and grid spacing in order to get the PME mesh load below
0.5. From the forum notes on the topic I got the impression that only
a small scaling factor would be needed.
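
Concretely, the scan looked like the following minimal Python sketch
(the intermediate factors are just examples; scaling the cut-off and
the spacing by the same factor keeps the direct/reciprocal balance of
the Ewald sum at the same accuracy while moving work off the mesh):

start_rc, start_spacing = 0.9, 0.12   # nm, the starting values above
for scale in (1.0, 1.25, 1.5, 1.85):
    print(f"scale {scale:.2f}: rcoulomb = rvdw = rlist = "
          f"{start_rc * scale:.3f} nm, fourierspacing = "
          f"{start_spacing * scale:.3f} nm")
# scale 1.85 gives 1.665 nm and 0.222 nm, the values tried below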

My question is: are the values I have ended up with reasonable?

Cut-off: 1.665 nm and grid spacing: 0.222 nm

This is the output using these values:

Checking consistency between energy and charge groups...
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 32x32x18, spacing 0.219 0.219 0.215
Estimate for the relative computational load of the PME mesh part: 0.38
This run will generate roughly 63 Mb of data
writing run input file...

Does changing these values have any effect on the results of mdrun,
or only on the speed?
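
In case it helps frame the question: as I understand it, the Ewald
splitting parameter beta is chosen so that erfc(beta * rcoulomb)
equals ewald_rtol (1e-5 by default), so scaling the cut-off and the
spacing together should only shift work between direct and reciprocal
space rather than change the accuracy. A minimal Python sketch of
that relationship (the bisection bounds are arbitrary):

from scipy.special import erfc

# Find beta such that erfc(beta * rc) = rtol, i.e. the direct-space
# term has decayed to rtol at the cut-off.
def ewald_beta(rc, rtol=1e-5, iters=100):
    lo, hi = 0.0, 100.0              # nm^-1; erfc(x * rc) is monotone
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if erfc(mid * rc) > rtol:
            lo = mid                 # beta too small, tail too large
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(ewald_beta(0.9))    # ~3.47 nm^-1 for the original cut-off
print(ewald_beta(1.665))  # ~1.88 nm^-1 for the scaled cut-off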

Thanks in advance,

Jenny







