Wednesday, 28 March 2007

AMBER on clustervision

This is a brief introduction to AMBER for clustervision users (clustervision1.shef.ac.uk).

1. set up amber on clustervision

In your home directory, add the following lines to your .bashrc file:

export AMBERHOME=/usr/local/bin/amber

export PATH=$PATH:$AMBERHOME/exe
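After sourcing the file (or logging in again) you can check that the AMBER executables are found; sander, the main MD engine, lives in $AMBERHOME/exe:

source ~/.bashrc
which sander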

2. run amber (serial and parallel)

You will need at least three files to run a simulation with AMBER:

  1. coordinate file (inpcrd), which contains the Cartesian coordinates of all atoms

  2. topology file (prmtop), which specifies the connectivity of the atoms and the details of the interaction terms

  3. input file (mdin), which specifies how the simulation is to be run

For instance, to run an MD simulation of ethane, the three files can be found here:

  1. ethane.inpcrd: http://docs.google.com/Doc?id=dfvdq4zs_104gqz2np
  2. ethane.prmtop: http://docs.google.com/Doc?id=dfvdq4zs_105c99fcx
  3. ethane.mdin: http://docs.google.com/Doc?id=dfvdq4zs_106c2nmkt
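The coordinate and topology files are normally produced by LEaP (section 3 below); the mdin file is plain text. Purely as an illustration of the format (this is not the linked ethane.mdin, and the values here are arbitrary), such a file can be created like this:

cat > example.mdin << 'EOF'
short MD run (format illustration only)
 &cntrl
  imin=0, nstlim=1000, dt=0.001,
  ntt=1, tempi=300.0, temp0=300.0,
  ntb=0, cut=12.0,
  ntpr=100, ntwx=100,
 &end
EOF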

There are two scripts for running serial and parallel AMBER jobs on clustervision, respectively:

  1. serial: (serial.sh): http://docs.google.com/Doc?id=dfvdq4zs_108drh4qw
  2. parallel: (parallel.sh): http://docs.google.com/Doc?id=dfvdq4zs_109zzsd5s

Submit the job to clustervision1 with:

qsub serial.sh or qsub parallel.sh
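If you cannot open the linked documents, a serial script along these lines should behave similarly (a minimal sketch only, assuming the ethane files above; the job and file names are arbitrary, and the linked serial.sh/parallel.sh are the reference versions). The parallel version additionally requests a parallel environment and launches sander through mpirun, much like the DLPOLY script further down.

#!/bin/bash
#$ -cwd
#$ -N ethane_md
# -O overwrites old output; -i/-p/-c are the input, topology and coordinate files
sander -O -i ethane.mdin -p ethane.prmtop -c ethane.inpcrd \
       -o ethane.mdout -r ethane.restrt -x ethane.mdcrd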

3. make input files (with leap)

For example, the following xleap session builds a short oligosaccharide with the GLYCAM04 force field, saves the gas-phase topology and coordinates, and then solvates the model in a TIP3P water box:

xleap -s -f leaprc.glycam04

Then, at the xleap prompt:

# build the oligosaccharide from GLYCAM04 residues and report its total charge
model = sequence { OME 4MA 4MA 0MA }
charge model

# save the gas-phase topology, coordinates and a PDB file
saveamberparm model admanp3.top admanp3.crd
savePDB model admanp3.pdb

# keep an unsolvated copy, then solvate the model in a 10 Å TIP3P water buffer
model2 = copy model
loadoff solvents.lib
solvatebox model TIP3PBOX 10.0

# save the solvated topology, coordinates and PDB file
saveamberparm model admanp3water.top admanp3water.crd
savePDB model admanp3water.pdb

quit
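As a quick sanity check, the topology/coordinate pair written by leap can be converted back to a PDB file with the ambpdb utility (the output file name below is arbitrary):

ambpdb -p admanp3water.top < admanp3water.crd > check.pdb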

4. work with dlpoly

The idea is to build AMBER topology and coordinate files for the molecule and then convert them to DLPOLY format. First generate a mol2 file with AM1-BCC charges and a frcmod file listing any missing GAFF parameters:

antechamber -i cn.pdb -fi pdb -o cn.mol2 -fo mol2 -c bcc

parmchk -i cn.mol2 -f mol2 -o frcmod

Then, with the tleap (or xleap) program:

tleap -s -f leaprc.gaff

# load the extra parameters written by parmchk, then the molecule itself
mods = loadamberparams frcmod
model = loadmol2 cn.mol2

saveamberparm model cn.top cn.crd
quit

After that, run:

amb2dl.pl cn

You will get the CONFIG and FIELD files (written as cn.cfg and cn.fld) for DLPOLY.
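DLPOLY itself expects its input files to be called exactly CONFIG and FIELD in the run directory, and you still have to write a CONTROL file (run length, timestep, ensemble, cutoffs) yourself, for example:

cp cn.cfg CONFIG
cp cn.fld FIELD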

Want more?

http://amber.scripps.edu/tutorials/

http://amber.scripps.edu/doc9/amber9.pdf

Monday, 19 March 2007

how to run a parallel dlpoly job on clustervision

#!/bin/bash
# run the job from the directory it was submitted from and pass MPI_HOME through
#$ -cwd
#$ -v MPI_HOME
# job name and parallel environment: request 4 slots from the mpich_mx PE
#$ -N jobname
#$ -pe mpich_mx 4
export MYAPP=/usr/local/bin/DLPOLY.X
export MPICH_PROCESS_GROUP=no
# set up environment modules and load the Myrinet MX, PGI and MPICH-MX modules
. /etc/profile.d/modules.sh
module add mx pgi mpich/mx/pgi
# $NSLOTS and $TMPDIR/machines are filled in by SGE at run time
/usr/local/Cluster-Apps/mpich/mx/pgi/64/1.2.6..0.94/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
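Save the script under any name (dlpoly.sh below is only an example) and submit it with qsub as before; because of the -cwd flag the job runs in the directory it was submitted from, which is where DLPOLY looks for its CONFIG, FIELD and CONTROL files:

qsub dlpoly.sh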

Tuesday, 13 March 2007

amber v8 compile on mott2:

1. download files from http://www.pgroup.com/resources/amber/amber8_pgi60.htm

2. run: ./configure -mpich -opteron -acml -lapack pgf90

3. build: make -e YACC="/usr/bin/bison -y" X11LIBDIR="lib64" parallel
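It is worth running the AMBER test suite after the build, since that is what catches the compiler problems discussed in the next section (a sketch of the usual serial invocation; see the AMBER documentation for how to drive the parallel tests):

cd $AMBERHOME/test
make test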

for amber v9 on the cluster and mott2:

Whenever you see this type of message:

/usr/bin/ld: skipping incompatible /home/ytang/gdata/whli/openmpi/lib/libmpi_f90.a when searching for -lmpi_f90
/usr/bin/ld: cannot find -lmpi_f90

It means that the type of executable you are building (32-bit vs 64-bit)
does not match the way the library you are linking against was built. I
suspect that you built openmpi using pgf90 in 64-bit mode. However,
at the time of Amber9's release the latest pgf90 version was pgf90 6.1-3
(64-bit target on x86-64 Linux), which has numerous bugs in its 64-bit
implementation; as a result the amber executables fail a large number of
the test cases. We managed to work around this problem by forcing 32-bit
compilation with the Portland group compilers on x86_64 architectures. This
means that you need to build MPI in 32-bit mode as well (-tp p7). However, my
advice to you on Opteron is, surprisingly enough, to download the Intel
compilers (the em64t ones) and use these. They will build amber and pmemd
fine in 64-bit mode and will probably run faster (get version 9.0.033 from
premier.intel.com after you set up an account).

For pmemd the problem is that the configure script is looking for mpich and
not openmpi - the installations have very different library names. I don't
think there is a configure option available for openmpi - perhaps Bob Duke
can comment further. For the moment I would edit the config.h and replace
the MPI_LIBS line with:

-L$(MPI_LIBDIR) -lmpi_f90 -lmpi -lorte -lopal -lutil -lnsl -lpthread -ldl
-Wl,--export-dynamic -lm -lutil -lnsl -lpthread -ldl
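In other words, after the edit the MPI_LIBS line in pmemd's config.h would read something like this (assuming MPI_LIBDIR already points at your openmpi lib directory):

MPI_LIBS = -L$(MPI_LIBDIR) -lmpi_f90 -lmpi -lorte -lopal -lutil -lnsl -lpthread -ldl -Wl,--export-dynamic -lm -lutil -lnsl -lpthread -ldl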

This should probably work. Note that pmemd does work with the Portland group
compiler in 64-bit mode, although I have not rigorously tested it to be sure
of this. This means that if you want to build both sander and pmemd with the
Portland group compilers you will need either to build both a 32-bit and a
64-bit version of openmpi, or to edit pmemd's config.h and add -tp p7 to the
F90_OPT_* lines to force 32-bit compilation.

Note that if you have a Portland group compiler other than 6.1-3 you could
always try compiling sander in 64-bit mode: edit the
$AMBERHOME/src/config.h file, delete the -tp p7 flag and change all
occurrences of -m32 to -m64. Just make sure you run ALL of the test cases and
carefully check the output.
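Those config.h edits can also be scripted if you prefer (a sketch only; keep a backup and inspect the result before rebuilding):

cp $AMBERHOME/src/config.h $AMBERHOME/src/config.h.bak
sed -i -e 's/-tp p7//g' -e 's/-m32/-m64/g' $AMBERHOME/src/config.h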