- Hardware
18 compute nodes with dual-core AMD Opteron CPUs: 12 Opteron 275 nodes and 6 Opteron 285 nodes (4 CPU cores each)
Dual-core AMD Opteron 275 front end
1 TB file server
16 GB memory per node
Myrinet switch
Gigabit Ethernet
- Submitting jobs to the cluster
Before you submit any job to the cluster, you need to load the sge module with the command
module load sge
1. Serial job submission
A sample script “serial.sh” for a serial job is shown below.
#!/bin/bash
#$ -cwd
#$ -pe seq 1
MYAPP
Replace MYAPP with the full path to your program's executable, then submit the script to the cluster with
qsub serial.sh
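As an illustration, with a hypothetical executable at /home/jsmith/bin/myprog, the complete serial script would read:

```shell
#!/bin/bash
# Run the job in the directory it was submitted from
#$ -cwd
# Request one slot in the serial parallel environment
#$ -pe seq 1
# Hypothetical executable -- replace with the full path to your own program
/home/jsmith/bin/myprog
```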
2. Parallel job submission
A sample script “parallel.sh” for a parallel job is shown below.
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines MYAPP
Replace MYAPP with the full path to your program's executable and adjust the number of CPUs to request (4 in the script, set on the “#$ -pe mpich_mx” line), then submit the script to the cluster with
qsub parallel.sh
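In the script above, $NSLOTS and $TMPDIR/machines are filled in by SGE at run time: $NSLOTS holds the number of slots requested on the “#$ -pe” line, and $TMPDIR/machines is the machine file listing the nodes allocated to the job. As an illustration, a complete script for a hypothetical program /home/jsmith/bin/mympi on 8 CPUs might look like:

```shell
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
# Request 8 slots in the mpich_mx parallel environment
#$ -pe mpich_mx 8
module add sge mx mpich/mx/pgi
# $NSLOTS (8 here) and $TMPDIR/machines are provided by SGE;
# /home/jsmith/bin/mympi is a hypothetical executable -- use your own path
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines /home/jsmith/bin/mympi
```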
3. Check your jobs
You can check the status of the jobs running on the cluster with the command
qstat
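Plain qstat lists every job on the cluster. A few commonly useful variants (standard SGE options; 12345 below is a placeholder job ID) are:

```shell
# Show only your own jobs
qstat -u $USER
# Show the full queue status, one line per queue instance
qstat -f
# Delete a job, using the job ID reported by qsub or qstat
qdel 12345
```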
- Scientific applications
1. DL_POLY (2.18 and 3.08)
Serial (DL_POLY 2.18)
#!/bin/bash
#$ -cwd
#$ -pe seq 1
/usr/local/bin/DLPOLY.seq
Parallel (DL_POLY 2.18)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/DLPOLY.X
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
Parallel (DL_POLY 3.08)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/DLPOLY.Y
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
2. AMBER 9.0
Serial (sander)
#!/bin/bash
#$ -cwd
#$ -pe seq 1
source /etc/profile.d/modules.sh
export AMBERHOME=/usr/local/bin/amber
export PATH=$PATH:$AMBERHOME/exe
$AMBERHOME/exe/sander
Parallel (sander.MPI)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/amber/exe/sander.MPI
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
3. CASTEP 3.1
Serial (castep.seq)
#!/bin/bash
#$ -cwd
#$ -pe seq 1
/usr/local/bin/castep.seq
Parallel (castep.mpi)
#!/bin/bash
source /etc/profile.d/modules.sh
#$ -cwd
#$ -v MPI_HOME
#$ -pe mpich_mx 4
module add sge mx mpich/mx/pgi
MYAPP=/usr/local/bin/castep.mpi
${MPI_HOME}/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines $MYAPP
- Compilers
GCC 4.0 (gcc)
Portland Group Compilers (pgcc, pgf90, mpif90)
In order to use the compilers, you need to load the corresponding module first. For example, to use the pgf90 compiler, first load the pgi/6.1.0 module with the command
module load pgi/6.1.0
You can use the command “module list” to see the modules already loaded, and “module avail” (abbreviated “module av”) to see which modules are available.
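A typical module workflow, using the pgi/6.1.0 module from the example above, looks like this (all four subcommands are standard Environment Modules commands):

```shell
module avail            # see which modules can be loaded
module load pgi/6.1.0   # load the PGI 6.1.0 compilers
module list             # confirm what is currently loaded
module unload pgi/6.1.0 # unload a module when finished with it
```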