GROMACS

Description
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.
Modulefile
gromacs
Installed Versions
2018.1, 2016.5, 5.1.4, 5.0.7, 4.6.7, 4.5.5, 4.0.7
Documentation
GROMACS homepage
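
To see which of these versions are currently available and to load a specific one, the standard module commands can be used; the version below is just an example:

 $ module avail gromacs
 $ module load gromacs/2018.1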

Running GROMACS

  • In order to run GROMACS in parallel you first have to use the preprocessor grompp to read in your configuration (*.gro), your run parameters (*.mdp) and your topology (*.top), and to produce the run input file (*.tpr); a fuller example is shown after this list.
  • The basic command is:
 # for Gromacs 4.6.7 and older:
 $ grompp 
 # for Gromacs 5.0 and newer:
 $ gmx grompp
  • If you wish to use ngmx for graphical display you will need to use X11 forwarding (e.g. log in with ssh -X).
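
A fuller grompp invocation showing how the input files map onto the run input file might look like this (the file names are only placeholders):

 # Gromacs 5.0 and newer; for 4.6.7 and older drop the leading "gmx"
 $ gmx grompp -f run.mdp -c conf.gro -p topol.top -o run.tpr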

About suffixes

  • The double precision binaries have the suffix _d.
  • The parallel single and double precision mdrun binaries are:
      • mdrun_mpi
      • mdrun_mpi_d
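
For example, a double precision parallel run is started with the _d binary just like its single precision counterpart; the -deffnm run name used here is only a placeholder:

 mpirun mdrun_mpi_d -deffnm md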

GROMACS 5 and newer

Gromacs 5 and newer releases consist of only four binaries that together provide the full functionality; the former standalone tools are now subcommands of gmx, as shown in the example after this list. Please refer to the GROMACS Manual for your version of choice.

  • gmx - serial version containing all Gromacs commands in single precision.
  • mdrun_mpi - parallel mdrun in single precision.
  • gmx_d - serial version containing all Gromacs commands in double precision.
  • mdrun_mpi_d - parallel mdrun in double precision.
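
As an illustration, here is a Gromacs 4.x command line and its Gromacs 5 equivalent (the file names are placeholders; gmx_d is used the same way for double precision):

 # Gromacs 4.x standalone tool
 $ trjconv -f traj.xtc -s topol.tpr -o out.gro
 # Gromacs 5 and newer: the same tool as a gmx subcommand
 $ gmx trjconv -f traj.xtc -s topol.tpr -o out.gro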

Here is an example of a submission script for Gromacs 5.1.4 in parallel:

 # Grid Engine directives: run in the submission directory, request
 # one hour of walltime and 8 slots in an Open MPI parallel environment
 #$ -cwd
 #$ -l h_rt=01:00:00
 #$ -pe ompi* 8

 module purge
 module load intel openmpi/intel/1.4 gromacs/5.1.4
 mpirun mdrun_mpi
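
Assuming the script above is saved as, for example, gromacs-parallel.sh, it is submitted to Grid Engine with:

 $ qsub gromacs-parallel.sh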

Here is an example of a submission script for Gromacs 5.1.4 in serial:

 #$ -cwd
 #$ -l h_rt=01:00:00
 
 module purge
 module load intel openmpi/intel/1.4  gromacs/5.1.4
 gmx mdrun

Gromacs 5.0.7 and 5.1.2 are configured and installed in exactly the same way as 5.1.4, so you only need to adjust the version on the module load line, e.g. for 5.0.7:

 module purge
 module load intel openmpi/intel/1.4  gromacs/5.0.7

GROMACS 4.6.7

Gromacs 4.6.7 has been compiled with Intel compilers and is linked against a more recent version of Open MPI than the older Gromacs builds. Therefore, it requires a different set of modules to be loaded in order to run properly. Here is an example of a submission script for Gromacs 4.6.7:

 #$ -cwd
 #$ -l h_rt=01:00:00
 #$ -pe ompi* 8

 module purge
 module load intel openmpi/intel/1.4  gromacs/4.6.7
 mpirun mdrun_mpi

Here is how to run Gromacs 4.6.7 in serial. Please note that older versions of Gromacs might only be available as serial applications.

 #$ -cwd
 #$ -l h_rt=01:00:00
 
 module purge
 module load intel openmpi/intel/1.4  gromacs/4.6.7
 mdrun 

GROMACS 4.6.1

Gromacs 4.6.1 has been compiled with Intel compilers and is linked against a more recent version of Open MPI than the older Gromacs builds. Therefore, it requires a different set of modules to be loaded in order to run properly. Here is an example of a submission script for Gromacs 4.6.1:

 #$ -cwd
 #$ -l h_rt=01:00:00
 #$ -pe ompi* 8

 module purge
 module load intel openmpi/intel/1.4 gromacs/4.6.1
 mpirun mdrun_mpi

Here is how to run a threaded version of Gromacs 4.6.1 on four cores; Grid Engine sets $NSLOTS to the number of slots requested with -pe openmp, so mdrun -nt $NSLOTS starts one thread per slot. Please note that older versions of Gromacs might only be available as serial applications.

 #$ -cwd
 #$ -l h_rt=01:00:00
 #$ -pe openmp 4

 module purge
 module load intel openmpi/intel/1.4 gromacs/4.6.1
 mdrun -nt $NSLOTS

GROMACS 4.5.5 and older

Older Gromacs versions can be run either as MPI or serial jobs. Here is an MPI example for version 4.5.5:

 #$ -cwd
 #$ -l h_rt=01:00:00
 #$ -pe ompi* 8

 module load gromacs 
 mpirun mdrun_mpi

and here is a serial example:

 #$ -cwd
 #$ -l h_rt=01:00:00

 module load gromacs 
 mdrun