Tuesday, 15 February 2011

About clustervision

The cluster (known as clustervision.shef.ac.uk) is only accessible from within the University of Sheffield firewall.

  • Hardware

18 nodes with dual-core AMD Opteron processors (12 nodes with Opteron 275, 6 with Opteron 285), 4 CPU cores per node
Dual-core AMD Opteron 275 front end
1TB file server
16 GB memory per node
Myrinet switch
Gigabit Ethernet

  • Submitting jobs to clustervision

Before you submit any job to the cluster, you need to load the sge module with the command

module load sge
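
If you do not want to type this every time you log in, you can add the same line to your ~/.bashrc (assuming the module command is set up in your login shell, as it is on clustervision):

# in ~/.bashrc
module load sge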

1. Serial job submission
A sample script “serial.sh” for a serial job is shown below.
#!/bin/bash
#$ -cwd
#$ -pe seq 1
MYAPP

Replace MYAPP with the full path to your program's executable, then submit it to the cluster with

qsub serial.sh
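
For illustration, a filled-in serial.sh might look like the following (the program path and the job name are just placeholders; #$ -N gives the job a name shown by qstat):

#!/bin/bash
#$ -cwd
#$ -pe seq 1
#$ -N mytest
/home/username/bin/myprogram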

2. Parallel job submission
A sample script “parallel.sh” for a parallel job is shown below.
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines MYAPP

Replace MYAPP with the full path to your program's executable and set the number of CPUs you want (4 in the script above), then submit it to the cluster with
qsub parallel.sh

3. Check your job
You can check the status of jobs running on the cluster with the command
qstat
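
For example, to list only your own jobs, or to delete a job (standard SGE commands; 12345 is a placeholder job id):

qstat -u $USER
qdel 12345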

  • Scientific applications

1. DL_POLY (2.18 and 3.08)
Serial (DL_POLY 2.18):
#!/bin/bash
#$ -cwd
#$ -pe seq 1
/usr/local/bin/DLPOLY.seq
Parallel (DL_POLY 2.18)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/DLPOLY.X
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
Parallel (DL_POLY 3.08)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/DLPOLY.Y
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP

2. AMBER 9.0
Serial (sander)
#!/bin/bash
#$ -cwd
#$ -pe seq 1
source /etc/profile.d/modules.sh
export AMBERHOME=/usr/local/bin/amber
export PATH=$PATH:$AMBERHOME/exe
$AMBERHOME/exe/sander
Parallel (sander.MPI)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/amber/exe/sander.MPI
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP

3. CASTEP 3.1
Serial (castep.seq)
#!/bin/bash
#$ -cwd
#$ -pe seq 1
/usr/local/bin/castep.seq
Parallel (castep.mpi)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/castep.mpi
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP

  • Compilers

GCC 4.0 (gcc)
Portland Group Compilers (pgcc, pgf90, mpif90)
In order to use the compilers, you need to load the corresponding modules first. For example, if you want to use the pgf90 compiler, you first need to load the module (pgi/6.1.0) with the command
module load pgi/6.1.0
You can use the command “module list” to see the modules already loaded, and “module av” to check which modules are available.
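
For example, to compile a serial Fortran program with pgf90, or an MPI program with mpif90 (hello.f90 and hello_mpi.f90 are placeholder file names; the mpich module is the one used in the job scripts above):

module load pgi/6.1.0
pgf90 -o hello hello.f90
module load mpich/mx/pgi
mpif90 -o hello_mpi hello_mpi.f90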

Tuesday, 8 January 2008

From amber to gromacs

There is a useful tool for using the AMBER force fields in GROMACS:

http://chemistry.csulb.edu/ffamber/

Some changes might need to be applied before using a pdb file with the pdb2gmx program. For example, we build a peptide with the sequence (using the amber03 force field):

NARG ARG GLU GLU TRP TRP ASP ASP ARG ARG GLU GLU TRP TRP ASP CASP
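
The starting pdb can be built, for example, with AMBER's tleap (a rough sketch; the leaprc.ff03 force-field file and the output name peptide.pdb are assumptions):

tleap -s -f leaprc.ff03
model = sequence { NARG ARG GLU GLU TRP TRP ASP ASP ARG ARG GLU GLU TRP TRP ASP CASP }
savePDB model peptide.pdb
quit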

The atom names in the pdb file are slightly different from those listed in ffamber (ffamber03.rtp):

AMBER --- ffamber

...

N --- N
H --- H
CA --- CA
HA --- HA
CB --- CB
2HB --- HB1
3HB --- HB2
CG --- CG
OD1 --- OD1
OD2 --- OD2
C --- C
O --- O

...

So we need to rename "2HB" and "3HB" to "HB1" and "HB2". Similarly, we need to make the following changes:

2HB -> HB1

3HB -> HB2

2HG -> HG1

3HG -> HG2

1HH1 -> HH11

2HH1 -> HH12

1HH2 -> HH21

2HH2 -> HH22

2HD -> HD1

3HD -> HD2

...

The following bash script can be used to process such a pdb file.

http://docs.google.com/Doc?id=dfvdq4zs_161ctvzp2gx

We also have to rename the first residue name from "ARG" to "NARG", and the last residue name from "ASP" to "CASP".
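
A minimal sed sketch of the atom renaming is given below (input.pdb and output.pdb are placeholders; only the renamings listed above are covered, and each replacement keeps the same length so the PDB columns stay aligned). The terminal-residue renaming is easiest to do by hand, since it only affects the first and last residues.

#!/bin/bash
# rename AMBER-style hydrogen names to the ffamber convention
sed -e 's/2HB/HB1/g' -e 's/3HB/HB2/g' \
    -e 's/2HG/HG1/g' -e 's/3HG/HG2/g' \
    -e 's/1HH1/HH11/g' -e 's/2HH1/HH12/g' \
    -e 's/1HH2/HH21/g' -e 's/2HH2/HH22/g' \
    -e 's/2HD/HD1/g' -e 's/3HD/HD2/g' \
    input.pdb > output.pdb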

Friday, 29 June 2007

Compiling Quantum ESPRESSO on clustervision

The Quantum ESPRESSO package (http://www.quantum-espresso.org/) is composed of several electronic-structure codes.
Installation is completed with the following steps. (Note: ESPRESSO uses fftw2 -- fftw3 might crash the program -- so we need to unload any fftw3 module first and use ESPRESSO's built-in fftw2 library instead.)
Modules loaded on clustervision:
module add pgi/6.1.0
module add mpich/mx/pgi/64


First, we generate the makefile with the following command:
./configure --prefix=/home/mingjun LIBDIRS=/home/mingjun/acml/pgi64/lib FFLAGS="-fastsse" CFLAGS=-fast LDFLAGS=-static
It will generate the makefile for parallel executables (since mpif90 is found); then run
make all
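
Once the build finishes, the parallel executables (for example pw.x) can be run with a job script of the same form as the parallel scripts above; a rough sketch (the pw.x path and the scf.in/scf.out file names are assumptions):

#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge pgi/6.1.0 mpich/mx/pgi/64
MYAPP=/home/mingjun/bin/pw.x    # adjust to where pw.x was actually installed
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP < scf.in > scf.out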

Tuesday, 19 June 2007

ACML library on clustervision

If there are any problems compiling LAPACK and BLAS on clustervision (AMD Opteron CPUs), a simple alternative is to use the ACML library provided by AMD, available from the following website:
http://developer.amd.com/acml.aspx
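
For example, once the library is installed (the path below matches the install location used elsewhere in these notes; myprog.f90 is a placeholder), a program can be linked against ACML with pgf90 like this:

pgf90 -o myprog myprog.f90 -L/home/mingjun/acml/pgi64/lib -lacml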

Wednesday, 4 April 2007

Compiling castep 4.0.1 on mott2

The compilation involves the following steps.

1. chmod +x ./bin/arch

2. edit the Makefile and set

COMMS_ARCH :=mpi

FFT :=fftw3

MATHLIBS :=acml

3. set mathlibdir to

/home/mingjun/acml/pgi64/lib

4. set fftlibdir to

/home/mingjun/lib

5. make
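
Putting steps 2-4 together, the relevant part of the Makefile ends up looking roughly like this (the names of the library-directory variables are an assumption and may differ between CASTEP versions):

COMMS_ARCH := mpi
FFT := fftw3
MATHLIBS := acml
MATHLIBDIR := /home/mingjun/acml/pgi64/lib
FFTLIBDIR := /home/mingjun/lib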

Wednesday, 28 March 2007

AMBER on clustervision

This is a brief introduction on AMBER for clustervision users (clustervision1.shef.ac.uk).

1. set up amber on clustervision

In your home directory, put the following lines into your .bashrc file:

export AMBERHOME=/usr/local/bin/amber

export PATH=$PATH:$AMBERHOME/exe

2. run amber (serial and parallel)

One will need at least three files for running a simulation with AMBER:

  1. coordinate file, which contains the Cartesian coordinates of all atoms

  2. topology file, which specifies the connection among the atoms and details of the interaction terms

  3. control file (mdin), which specifies how to run the simulation

For instance, to run an MD simulation of ethane, the three files can be found at:

  1. ethane.inpcrd: http://docs.google.com/Doc?id=dfvdq4zs_104gqz2np
  2. ethane.prmtop: http://docs.google.com/Doc?id=dfvdq4zs_105c99fcx
  3. ethane.mdin: http://docs.google.com/Doc?id=dfvdq4zs_106c2nmkt
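
With these three files in the working directory, a typical serial sander command line is (standard sander flags; the output file names are just examples):

sander -O -i ethane.mdin -p ethane.prmtop -c ethane.inpcrd -o ethane.mdout -r ethane.rst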

There are two scripts for running serial and parallel AMBER jobs on clustervision:

  1. serial: (serial.sh): http://docs.google.com/Doc?id=dfvdq4zs_108drh4qw
  2. parallel: (parallel.sh): http://docs.google.com/Doc?id=dfvdq4zs_109zzsd5s

Submit to clustervision1 with the command:

qsub serial.sh or qsub parallel.sh

3. make input file (with leap)

xleap -s -f leaprc.glycam04

model=sequence {OME 4MA 4MA 0MA}

charge model

saveamberparm model admanp3.top admanp3.crd

savePDB model admanp3.pdb

model2=copy model

loadoff solvents.lib

solvatebox model TIP3PBOX 10.0

saveamberparm model admanp3water.top admanp3water.crd

savePDB model admanp3water.pdb

quit

4. work with dlpoly

antechamber -i cn.pdb -fi pdb -o cn.mol2 -fo mol2 -c bcc

parmchk -i cn.mol2 -f mol2 -o frcmod

with the tleap (or xleap) program:

tleap -s -f leaprc.gaff

mods=loadamberparams frcmod

model=loadmol2 cn.mol2

saveamberparm model cn.top cn.crd

quit

After that, run

amb2dl.pl cn

and you will get the CONFIG and FIELD files (cn.cfg and cn.fld) for DL_POLY.

Want more?

http://amber.scripps.edu/tutorials/

http://amber.scripps.edu/doc9/amber9.pdf

Monday, 19 March 2007

how to run a parallel dlpoly job on clustervision

#!/bin/bash
#$ -cwd
#$ -v MPI_HOME
#$ -N jobname
#$ -pe mpich_mx 4
export MYAPP=/usr/local/bin/DLPOLY.X
export MPICH_PROCESS_GROUP=no
. /etc/profile.d/modules.sh
module add mx pgi mpich/mx/pgi
/usr/local/Cluster-Apps/mpich/mx/pgi/64/1.2.6..0.94/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP