
Amber

Assisted Model Building with Energy Refinement: "Amber" refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules, and a package of molecular simulation programs. AmberTools is the free (public domain) part of the suite, while Amber itself is the licensed package.

  • Installed versions:
    • AmberTools (free): v14
    • Amber (licensed): v12 with AmberTools v13

The build includes OpenMP support (used by some programs such as paramfit or NAB) and MPI support. The MPI versions of the programs are suffixed with .MPI, for example sander.MPI. This is the standard Amber installation, so the Amber documentation applies; a minimal launch example is sketched below.
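For illustration only, an MPI-suffixed program such as sander.MPI is started through the batch system's MPI launcher; the file names used here (md.in, system.prmtop, system.inpcrd, md.rst) are placeholders, not files shipped with the installation:

# Inside a Slurm allocation, srun starts as many MPI ranks as --ntasks
srun sander.MPI -O -i md.in -o md.out -p system.prmtop -c system.inpcrd -r md.rst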

Citation

If you use the GPU support, you must cite the corresponding articles in your publications. See http://ambermd.org/gpus12/, section "Citing the GPU Code".

Usage

Selecting a version

To select the version you want, use the module system.

To use Amber 12 with AmberTools 13:

module load amber/12

To use Amber 12 with AmberTools 14:

module load amber/12-tools14
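To check which Amber modules are actually installed (module names may evolve over time), you can query the module system directly, for example:

# List the Amber modules available on the cluster
module avail amber
# Show what a given module sets up (paths, dependent modules)
module show amber/12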

Working with Slurm

If you use one of the Amber programs parallelized with OpenMP, you must follow the Slurm example for OpenMP to benefit from it; a minimal sketch is given below.
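As a minimal sketch only (the thread count, partition and paramfit input file are illustrative assumptions), an OpenMP job reserves its cores with --cpus-per-task and exports OMP_NUM_THREADS accordingly:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # one task, 8 OpenMP threads
#SBATCH --time=02:00:00
#SBATCH --partition=normal
module load amber/12-tools14
# Match the number of OpenMP threads to the cores reserved by Slurm
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# paramfit is one of the OpenMP-enabled AmberTools programs; the input file name is a placeholder
paramfit -i paramfit_job.in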

GPU version

A GPU build is available (http://ambermd.org/gpus12/). The executable is called pmemd.cuda and must be run on a GPU node; see the instructions specific to CALI on the GPU page.

You must additionally load the CUDA module, version 5.0:

module load nvidia/cuda/5.0
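Before launching pmemd.cuda, you can check from a GPU node that the CUDA toolkit and a GPU are visible; these are generic checks, not part of the Amber installation itself:

module load nvidia/cuda/5.0
# Check the CUDA toolkit provided by the module
nvcc --version
# Check that the node exposes at least one GPU
nvidia-smi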

Examples

Serial

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=02:00:00
#SBATCH --partition=normal
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2000 
topname=del
crdname=del
CURR=$(pwd)
RESTART=YES
STEP=0
FINAL=1
module load amber/16-patched-10062016
# MD runs are performed in the scratch space
# Export the run directory
export RUN_DIR="${HOME}/scratch/Amber/run.${SLURM_JOB_ID}.amber16.${topname}"
# Link the scratch directory to the current directory 
# The link name is based on the job ID, not the job name, to avoid clashes between MD runs sharing the same name
ln -sfn ${RUN_DIR} ${SLURM_JOB_ID}.results
echo ${RUN_DIR} 
# Create Scratch directory
mkdir -p ${RUN_DIR}
# Copy the topology and input files into the scratch directory
cp ${topname}.prmtop *.in ${RUN_DIR}
# Copy the starting coordinate file, depending on whether this is a restart
if [ $RESTART = 'YES' ]; then
    cp ${topname}_md${STEP}.rst ${RUN_DIR}/
else
    STEP=0
    # The loop below reads ${crdname}_md${STEP}.rst, so provide the initial
    # coordinates under that name (the inpcrd file is a valid -c input)
    cp ${topname}.inpcrd ${RUN_DIR}/${crdname}_md${STEP}.rst
fi
# Enter into Scratch directory
cd ${RUN_DIR}
# Run the calculation
# Keep in mind: never run a long MD as a single trajectory file; split it into several trajectories
for i in `seq $STEP 1 $FINAL`
do
    j=$(( $i + 1 ))
    # Run MD
    pmemd -O -i md.in -o ${crdname}_md${j}.out -p $topname.prmtop -c ${crdname}_md${i}.rst -r ${crdname}_md${j}.rst -x ${crdname}_${j}.mdcrd
    # Compress mdcrd file to prevent storage issues and remove the uncompressed trajectory
    tar -zcvf ${crdname}_${j}.mdcrd.tar.gz ${crdname}_${j}.mdcrd
    rm -rf ${crdname}_${j}.mdcrd
done
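The scripts on this page all read an input file named md.in, which is not shown here. Purely as an illustration (every value below is an assumption to adapt to your own system), a restartable NPT production input could look like:

Production MD, NPT, restart
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=500000, dt=0.002,
  ntc=2, ntf=2, cut=10.0,
  ntb=2, ntp=1, taup=2.0,
  ntt=3, gamma_ln=2.0, temp0=300.0,
  ntpr=5000, ntwx=5000, ntwr=50000,
 /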

MPI

#!/bin/bash
#SBATCH --nodes=1-X
#SBATCH --ntasks=X
#SBATCH --time=XX:00:00
#SBATCH --partition=normal
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=XXX 
topname=del
crdname=del
CURR=$(pwd)
RESTART=YES
STEP=0
FINAL=1
module load amber/16-patched-10062016
# MD runs are performed in the scratch space
# Export the run directory
export RUN_DIR="${HOME}/scratch/Amber/run.${SLURM_JOB_ID}.amber16.${topname}"
# Link the scratch directory to the current directory 
# The link name is based on the job ID, not the job name, to avoid clashes between MD runs sharing the same name
ln -sfn ${RUN_DIR} ${SLURM_JOB_ID}.results
echo ${RUN_DIR} 
# Create Scratch directory
mkdir -p ${RUN_DIR}
# Copy the topology and input files into the scratch directory
cp ${topname}.prmtop *.in ${RUN_DIR}
# Copy the starting coordinate file, depending on whether this is a restart
if [ $RESTART = 'YES' ]; then
    cp ${topname}_md${STEP}.rst ${RUN_DIR}/
else
    STEP=0
    # The loop below reads ${crdname}_md${STEP}.rst, so provide the initial
    # coordinates under that name (the inpcrd file is a valid -c input)
    cp ${topname}.inpcrd ${RUN_DIR}/${crdname}_md${STEP}.rst
fi
# Enter into Scratch directory
cd ${RUN_DIR}
# Run the calculation
# Keep in mind: never run a long MD as a single trajectory file; split it into several trajectories
for i in `seq $STEP 1 $FINAL`
do
    j=$(( $i + 1 ))
    # Run MD
    srun pmemd.MPI -O -i md.in -o ${crdname}_md${j}.out -p $topname.prmtop -c ${crdname}_md${i}.rst -r ${crdname}_md${j}.rst -x ${crdname}_${j}.mdcrd
    # Compress mdcrd file to prevent storage issues and remove the uncompressed trajectory
    tar -zcvf ${crdname}_${j}.mdcrd.tar.gz ${crdname}_${j}.mdcrd
    rm -rf ${crdname}_${j}.mdcrd
done

GPU

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=02:00:00
#SBATCH --partition=gpu
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu
topname=del
crdname=del
CURR=$(pwd)
RESTART=YES
STEP=0
FINAL=1
module load amber/16-patched-10062016
module load nvidia/cuda/7.5
# MD runs are performed in the scratch space
# Export the run directory
export RUN_DIR="${HOME}/scratch/Amber/run.${SLURM_JOB_ID}.amber16.${topname}"
# Link the scratch directory to the current directory 
# The link name is based on the job ID, not the job name, to avoid clashes between MD runs sharing the same name
ln -sfn ${RUN_DIR} ${SLURM_JOB_ID}.results
echo ${RUN_DIR} 
# Create Scratch directory
mkdir -p ${RUN_DIR}
# Copy the topology and input files into the scratch directory
cp ${topname}.prmtop *.in ${RUN_DIR}
# Copy the starting coordinate file, depending on whether this is a restart
if [ $RESTART = 'YES' ]; then
    cp ${topname}_md${STEP}.rst ${RUN_DIR}/
else
    STEP=0
    # The loop below reads ${crdname}_md${STEP}.rst, so provide the initial
    # coordinates under that name (the inpcrd file is a valid -c input)
    cp ${topname}.inpcrd ${RUN_DIR}/${crdname}_md${STEP}.rst
fi
# Enter into Scratch directory
cd ${RUN_DIR}
# Run the calculation
# Keep in mind: never run a long MD as a single trajectory file; split it into several trajectories
for i in `seq $STEP 1 $FINAL`
do
    j=$(( $i + 1 ))
    # Run MD
    pmemd.cuda -O -i md.in -o ${crdname}_md${j}.out -p $topname.prmtop -c ${crdname}_md${i}.rst -r ${crdname}_md${j}.rst -x ${crdname}_${j}.mdcrd
    # Compress mdcrd file to prevent storage issues and remove the uncompressed trajectory
    tar -zcvf ${crdname}_${j}.mdcrd.tar.gz ${crdname}_${j}.mdcrd
    rm -rf ${crdname}_${j}.mdcrd
done