
GROMACS

GROMACS is an engine to perform molecular dynamics simulations and energy minimization.

Usage

See the user manuals at http://manual.gromacs.org/documentation/current/index.html

Selecting a version

To select the desired version, use the environment modules. For example:

module load gromacs/2019
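
To list the versions installed on the cluster, the standard module command works:

module avail gromacs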

A single module file exists; the name of the executable to run changes depending on the desired mode of operation.
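
As a quick check after loading a module, here is a minimal sketch, assuming the 2019 module provides both the gmx wrapper and an MPI build named mdrun_mpi (the two names used in the batch examples below):

gmx --version   # the gmx wrapper bundles grompp, convert-tpr, trjconv, ...
which mdrun_mpi # MPI-enabled mdrun, launched through srun on the CPU partition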

Performance

Prefer the most recent version (2019 or later): its performance is much better than that of older releases.

I have run many benchmarks, and they yield some interesting lessons. The best starting point is the presentation of these benchmark results; write to gabin.fabre@unilim.fr to discuss them.

gromacs_2019_benchmarks.pdf

Example Slurm batch files

On CPU

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --mem-per-cpu=1000
#SBATCH --time=2-00:00:00
#SBATCH --nodes=1-4

# the --nodes option sets the minimum and maximum number of nodes (not cores).
# Capping the maximum at about ntasks/16 keeps the job from being spread
# across too many nodes.

module load gromacs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mdrun="srun --kill-on-bad-exit=1 mdrun_mpi" #without the option, the job hangs and is not terminated when it fails.
grompp="gmx grompp" # for gromacs version 5 and later

if [ -r "state.cpt" ] ; then
  # if state.cpt exists in the current directory, we are already in scratch
  # and the calculation has run before
  restart="TRUE"
else
  restart="FALSE"
fi
# restart="FALSE" # uncomment this line to override the automatic detection

if [ "$restart" == "TRUE" ] ; then
MIN=false # we assume min step is done the first day
#WORKDIR has to be where the data is, i.e. in scratch / inside the results.* folder. So submit this batch file from there.
WORKDIR="$PWD"
sleep 30 # to make sure all processes from the previous job are killed
restartoptions="-cpi state.cpt -append"
else # options for the beginning of the run
MIN=true
simname=`basename $PWD`
WORKDIR="$HOME/scratch/gromacs/run.$SLURM_JOB_ID.gromacs.$simname"
#
# Directory used to store the results
#
mkdir -p $WORKDIR || {
echo "ERROR Creating the working directory"
exit 1
}
cp * $WORKDIR
ln -sfn $WORKDIR "results.$SLURM_JOB_ID" # create symbolic link to scratch
cd $WORKDIR
restartoptions=""
fi

cd "$WORKDIR"

if [ "$MIN" = "true" ]; then

  # energy minimization
  $grompp -f em.mdp -c md.gro -n md.ndx -p md.top -o em.tpr
  $mdrun -s em.tpr -o em.trr -c em.gro -e em.edr -g em.log

  # position-restrained equilibration
  $grompp -f pr.mdp -c em.gro -n md.ndx -p md.top -r em.gro -o pr.tpr
  $mdrun -s pr.tpr -o pr.trr -c pr.gro -e pr.edr -g pr.log

  mv pr.gro 0.gro # the equilibrated structure becomes the production starting point

fi

# Use the following line to extend a simulation that crashed or finished
# normally. If you just want to finish it after a crash, leave it commented.
# Adjust the -until option to the total simulated time you want, in ps.
#gmx convert-tpr -s md.tpr -o md.tpr -until 1000000

# tricky part: resubmit this same script now; thanks to the afternotok
# dependency it will only start if the current job fails.
sbatch -d afternotok:$SLURM_JOB_ID $0

if [ "$restart" == "FALSE" ] ; then
$grompp -f md.mdp -c 0.gro -n md.ndx -p md.top -o md.tpr
fi

# the -cpi option will use your checkpoint to restart the calculation and continue writing to your files.
$mdrun -s md.tpr -o md.trr -x md.xtc -c md_out.gro -e md.edr -g md.log $restartoptions

rm -f md.trr # remove the big uncompressed trajectory (>1 GB) after the calculation
rm -f \#* # remove the backup files (the # must be escaped, or the shell treats the rest of the line as a comment)
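
A typical way to use this script (the file name md_cpu.slurm is only an example, not a cluster convention): submit it once from the directory containing the input files; after that, the job chain resubmits itself and restarts from state.cpt whenever a run fails.

sbatch md_cpu.slurm # first submission, from the directory holding the .mdp/.gro/.top/.ndx files
squeue -u $USER     # the resubmitted copy shows as pending on its afternotok dependency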

Example Slurm batch files on GPU

On 1 GPU

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --threads-per-core=1
#SBATCH --mem-per-cpu=1000
#SBATCH --time=7-00:00:00

module load gromacs/5.0.2-gpu
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

WORKDIR="$HOME/scratch/$SLURM_JOB_ID"

MIN=true      # run minimization and equilibration before the production run
grompp=grompp # standalone executable names provided by this module
mdrun="mdrun"

# Directory used to store the results
mkdir "$WORKDIR" || {
        echo "ERROR Creating the working directory"
        exit 1
}

cp * "$WORKDIR"
cd "$WORKDIR"

if [ "$MIN" = "true" ]; then

  $grompp -f em.mdp -c md.gro -n md.ndx -p md.top -o em.tpr
  $mdrun -s em.tpr -o em.trr -c em.gro -e em.edr -g em.log

  $grompp -f pr.mdp -c em.gro -n md.ndx -p md.top -r em.gro -o pr.tpr
  $mdrun -s pr.tpr -o pr.trr -c pr.gro -e pr.edr -g pr.log

  mv pr.gro 0.gro

fi

$grompp -f md.mdp -c 0.gro -n md.ndx -p md.top -o md.tpr
$mdrun -s md.tpr -o md.trr -x md.xtc -c md_out.gro -e md.edr -g md.log
rm -f md.trr # remove the uncompressed trajectory after the calculation
rm -f \#* # remove backup files
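
To confirm that mdrun actually used the GPU, the hardware detection report near the top of md.log is worth a look (plain grep, nothing GROMACS-specific):

grep -i gpu md.log | head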

On 2 GPUs

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --gres=gpu:2
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=8
#SBATCH --threads-per-core=1
#SBATCH --mem-per-cpu=1000
#SBATCH --time=7-00:00:00

module load gromacs/5.0.2-gpu
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

WORKDIR="$HOME/scratch/$SLURM_JOB_ID"

MIN=true      # run minimization and equilibration before the production run
grompp=grompp # standalone executable names provided by this module
mdrun="mdrun"

# Directory used to store the results
mkdir "$WORKDIR" || {
        echo "ERROR Creating the working directory"
        exit 1
}

cp * "$WORKDIR"
cd "$WORKDIR"

if [ "$MIN" = "true" ]; then

  $grompp -f em.mdp -c md.gro -n md.ndx -p md.top -o em.tpr
  $mdrun -s em.tpr -o em.trr -c em.gro -e em.edr -g em.log

  $grompp -f pr.mdp -c em.gro -n md.ndx -p md.top -r em.gro -o pr.tpr
  $mdrun -s pr.tpr -o pr.trr -c pr.gro -e pr.edr -g pr.log

  mv pr.gro 0.gro

fi

$grompp -f md.mdp -c 0.gro -n md.ndx -p md.top -o md.tpr
$mdrun -s md.tpr -o md.trr -x md.xtc -c md_out.gro -e md.edr -g md.log
rm -f md.trr # remove the uncompressed trajectory after the calculation
rm -f \#* # remove backup files
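
With --ntasks=2 the script still launches a single mdrun, which in a thread-MPI build normally detects both GPUs and starts one rank per device by itself. If you want to make the mapping explicit, GROMACS 5.x mdrun accepts -ntmpi and -gpu_id; this is a sketch, assuming a thread-MPI (non-MPI) build:

mdrun -ntmpi 2 -gpu_id 01 -s md.tpr -o md.trr -x md.xtc -c md_out.gro -e md.edr -g md.log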