Submitting jobs
PBS comes with comprehensive man pages, so for complete documentation of PBS commands you are encouraged to type man pbs and go from there. Jobs are submitted using the qsub command. Type man qsub for information on the plethora of options it offers.
Let’s say I have an executable called “myprog”. Let me try and submit it to PBS:
[username@launch ~]$ qsub myprog
qsub: file must be an ascii script
Oops… That didn’t work, because qsub expects a shell script. Any shell should work, so use your favourite one. I therefore write a simple script called “myscript.sh”:
#!/bin/bash
cd $PBS_O_WORKDIR
./myprog argument1 argument2
and then I submit it:
[username@launch ~]$ qsub myscript.sh
4681.mn01
That worked! Note the use of the $PBS_O_WORKDIR environment variable. This is important: by default, PBS on our cluster starts executing the commands in your shell script from your home directory. To go to the directory from which you executed qsub, cd to $PBS_O_WORKDIR. There are several other useful PBS environment variables that we will encounter later.
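For reference, here is a minimal illustrative script that does nothing but print a few of the standard variables PBS sets for every job:
#!/bin/bash
# Print some of the environment variables PBS sets for every job
echo "Job ID:      ${PBS_JOBID}"
echo "Job name:    ${PBS_JOBNAME}"
echo "Submit dir:  ${PBS_O_WORKDIR}"
echo "Nodefile:    ${PBS_NODEFILE}"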
Editing files
Editing files on the cluster can be done in a couple of different ways:
Native Editors
- vim – The visual editor (vi) is the traditional Unix editor, but it is not necessarily the most intuitive one. If you are unfamiliar with it, there is a vi tutorial: vimtutor.
- pico – While pico is not installed on the system, nano is installed, and is a pico work-alike.
- nano – Nano has a good bit of on-screen help to make it easier to use.
External Editors
You can also use your favourite editor on your local machine and then transfer the files over to the HPC afterwards. One caveat: text files created on Windows machines usually contain carriage-return characters, which are invisible in most Windows editors but may be misinterpreted by Linux command interpreters (shells). If this happens, there is a utility called dos2unix that you can use to convert the text file from DOS/Windows format to Linux format.
$ dos2unix script.sub
dos2unix: converting file script.sub to UNIX format ...
Specifying job parameters
By default, any script you submit will run on a single processor for a maximum of 5 minutes. The name of the job will be the name of the script, and it will not email you when it starts, finishes, or is interrupted. stdout and stderr are collected into separate files named after the job number. You can affect the default behaviour of PBS by passing it parameters. These parameters can be specified on the command line or inside the shell script itself. For example, let’s say I want to send stdout and stderr to a file that is different from the default:
[username@launch ~]$ qsub -e myprog.err -o myprog.out myscript.sh
Alternatively, I can actually edit myscript.sh to include these parameters. I can specify any PBS command line parameter I want in a line that begins with “#PBS”:
#!/bin/bash
#PBS -e myprog.err
#PBS -o myprog.out
cd $PBS_O_WORKDIR
./myprog argument1 argument2
Now I just submit my modified script with no command-line arguments:
[username@launch ~]$ qsub myscript.sh
Useful PBS parameters
Here is an example of a more involved script that requests only 1 hour of execution time, renames the job, and sends email when the job begins, ends, or aborts:
#!/bin/bash
# Name of my job:
#PBS -N My-Program
# Run for 1 hour:
#PBS -l walltime=1:00:00
# Where to write stderr:
#PBS -e myprog.err
# Where to write stdout:
#PBS -o myprog.out
# Send me email when my job aborts, begins, or ends
#PBS -m abe
# This command switches to the directory from which the "qsub" command was run:
cd $PBS_O_WORKDIR
# Now run my program
./myprog argument1 argument2
echo Done!
Some more useful PBS parameters:
- -M: Specify your email address (defaults to campus email).
- -j oe: merge standard output and standard error into the standard output file.
- -V: export all your environment variables to the batch job.
- -I: run an interactive job (see below).
Once again, you are encouraged to consult the qsub manpage for more options.
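For example, a submission combining several of these flags could look like this (the email address is a placeholder):
[username@launch ~]$ qsub -j oe -V -M username@sun.ac.za myscript.sh
An interactive session on a compute node can be requested with -I, for example:
[username@launch ~]$ qsub -I -l select=1:ncpus=1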
Special concerns for running OpenMP programs
By default, PBS assigns you 1 core on 1 node. You can, however, run your job on up to 64 cores per node. If you want to run an OpenMP program, you must therefore specify the number of cores per node. This is done with the flag -l select=1:ncpus=<cores>, where <cores> is the number of OpenMP threads you wish to use. Keep in mind that you must still set the OMP_NUM_THREADS environment variable within your script, e.g.:
#!/bin/bash
#PBS -N My-OpenMP-Script
#PBS -l select=1:ncpus=8
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8
./MyOpenMPProgram
Jobs with large output files
Instead of a job submission like this:
#!/bin/bash
#PBS -N massiveJob
cd $PBS_O_WORKDIR
myprogram -i /home/me/inputfile -o /home/me/outputfile
change it to something like this:
#!/bin/bash
#PBS -l select=1:ncpus=1:scratch=true
#PBS -N massiveJob
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory with the job ID as name in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
echo "Temporary work dir: ${TMP}"
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}/" ${TMP}/
cd ${TMP}
# write my output to my new temporary work directory
myprogram -i inputfile -o outputfile
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Any job that has to write massive amounts of data will benefit from the above. Take note of the scratch=true that was added to the node request line. If you do not add that feature request to the script, your job may be assigned to a node without scratch space.
Using the PBS_NODEFILE for multi-threaded jobs
Until now, we have only dealt with serial jobs. In a serial job, your PBS script will automatically be executed on the target node assigned by the scheduler. If you asked for more than one node, however, your script will only execute on the first node of the set of nodes allocated to you. To access the remainder of the nodes, you must either use MPI or manually launch threads. But which nodes to run on? PBS gives you a list of nodes in a file at the location pointed to by the PBS_NODEFILE
environment variable. In your shell script, you may thereby ascertain the nodes on which your job can run by looking at the file in the location specified by this variable:
#!/bin/bash
#PBS -l select=2:mpiprocs=8
echo "The nodefile for this job is stored at ${PBS_NODEFILE}"
cat ${PBS_NODEFILE}
When you run this job, you should then get output similar to:
The nodefile for this job is stored at /var/spool/PBS/aux/33.pbsserver.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp001.hpc
comp002.hpc
comp002.hpc
comp002.hpc
comp002.hpc
comp002.hpc
comp002.hpc
comp002.hpc
comp002.hpc
If you have an application that manually forks processes onto the nodes of your job, you are responsible for parsing the PBS_NODEFILE to determine which nodes those are. Some MPI implementations require you to feed the PBS_NODEFILE to mpirun, e.g. for Open MPI one may pass -hostfile my_nodefile.txt.
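As a minimal sketch (assuming an Open MPI installation, the module name used in the CPMD example below, and a hypothetical executable my_mpi_program), a two-node MPI job could look like this:
#!/bin/bash
#PBS -l select=2:ncpus=8:mpiprocs=8
cd ${PBS_O_WORKDIR}
module load openmpi-x86_64
# one MPI rank per nodefile entry
np=$(cat ${PBS_NODEFILE} | wc -l)
mpirun -np ${np} -hostfile ${PBS_NODEFILE} ./my_mpi_program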
Selecting different nodes in one job
Using the above information, one may allocate multiple nodes of the same type, e.g. multiple 48-core nodes. In order to mix different resources in one PBS job, one may use PBS’ “+” notation. For example, to combine one 48-core node and two 8-core nodes in one job, one may pass:
[username@launch ~]$ qsub -l select=1:ncpus=48:mpiprocs=48+2:ncpus=8:mpiprocs=8 myscript.sh
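The same resource request may also be placed inside the submit script as a directive:
#PBS -l select=1:ncpus=48:mpiprocs=48+2:ncpus=8:mpiprocs=8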
Guidelines / Rules
- Create a temporary working directory in /scratch, not /tmp
- /tmp is reserved for use by the operating system, and is only 5GB in size.
- Preferably specify /scratch/$PBS_JOBID in your submit script so that it’s easy to associate scratch directories with their jobs.
- Copy your input files to your scratch space and work on the data there. Avoid using your home directory as much as possible.
- If you need more than about 500GB of scratch space, you can also use /scratch2. It’s a lot slower than /scratch, so try to avoid that too.
- Copy only your results back to your home directory. Input files that haven’t changed don’t need to be copied.
- Erase your temporary working directory when you’re done.
- Secure your work from accidental deletion or contamination by disallowing other users access to your scratch directories: umask 0077 disallows access by all other users.
Examples
ADF
ADF generates run files, which are scripts that contain your data. Make sure to convert the run file to a UNIX file first using dos2unix, and remember to make it executable with chmod +x.
The following ADF script requests 4 cores on 1 node and 1 week of walltime; -m be sends mail when the job begins and ends, and -M gives the email address to send to.
#!/bin/bash
#PBS -N JobName
#PBS -l select=1:ncpus=4:scratch=true
#PBS -l walltime=168:00:00
#PBS -m be
#PBS -M username@sun.ac.za
INPUT=inputfile.run
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
cd ${TMP}
. /apps/adf/2014.04/adfrc.sh
# override ADF's scratch directory
export SCM_TMPDIR=${TMP}
# override log file
export SCM_LOGFILE="${TMP}/${PBS_JOBID}.logfile"
# Submit job
${PBS_O_WORKDIR}/${INPUT}
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
ANSYS
Fluent
The following Fluent script requests 4 cores on 1 node and 1 week of walltime; -m be sends mail when the job begins and ends, and -M gives the email address to send to.
#!/bin/bash
#PBS -N JobName
#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true
#PBS -l walltime=168:00:00
#PBS -m be
#PBS -e output.err
#PBS -o output.out
#PBS -M username@sun.ac.za
INPUT=inputfile.jou
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
# choose version of FLUENT
#module load app/ansys150
module load app/ansys162
# Automatically calculate the number of processors
np=$(cat ${PBS_NODEFILE} | wc -l)
fluent 3d -pdefault -cnf=${PBS_NODEFILE} -mpi=intel -g -t${np} -ssh -i ${INPUT}
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Fluid-Structure Interaction
You need the following 5 files:
- coupling (.sci) file
- structural data (.dat) file
- case (.cas.gz) file
- journal (.jnl) file
- submit script (.sh)
The coupling file should contain two participants. The names of these participants should not have spaces in them. In the example below, Solution 4 should be renamed to something like Solution4. Make sure to replace all instances of the name in the file.
<SystemCoupling Ver="1">
  <Participants Count="2">
    <Participant Ver="1" Type="0">
      <Name PropType="string">Solution 4</Name>
      <DisplayName PropType="string">0012 V2</DisplayName>
      <SupportsCouplingIterations PropType="bool">True</SupportsCouplingIterations>
      <UnitSystem PropType="string">MKS_STANDARD</UnitSystem>
      <Regions Count="1">
<--- snip --->
The journal file should contain (replace the filename on the ‘rc’ line with your case file):
file/start-transcript Solution.trn
file set-batch-options , yes ,
rc FFF-1.1-1-00047.cas.gz
solve/initialize/initialize-flow
(sc-solve)
wcd FluentRestart.cas.gz
exit
ok
The job script is given below. Update the COUPLING, STRUCTURALDATA, JOURNAL and NPA variables to reflect your case.
#!/bin/bash
#PBS -N fsi
#PBS -l select=1:ncpus=48:mpiprocs=48:mem=90GB:scratch=true
#PBS -l walltime=24:00:00
COUPLING=coupling.sci
STRUCTURALDATA=ds.dat
JOURNAL=fluent.journal
# number of processors for Ansys
NPA=8
# Automatically calculate the number of processors left over for Fluent
NP=$(cat ${PBS_NODEFILE} | wc -l)
NPF=$((NP-NPA))
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory with the job ID as name in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
echo "Temporary work dir: ${TMP}"
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}/" ${TMP}/
cd ${TMP}
module load app/ansys162
# Start coupling program
/apps/ansys_inc/v162/aisol/.workbench -cmd ansys.services.systemcoupling.exe -inputFile ${COUPLING} &
# Wait until scServer.scs is created
TIMEOUT=60
while [ ! -f scServer.scs -a $TIMEOUT -gt 0 ]; do
TIMEOUT=$((TIMEOUT-1))
sleep 2
done
if [ -f scServer.scs ]; then
# Parse the data in scServer.scs
readarray JOB < scServer.scs
HOSTPORT=(${JOB[0]//@/ })
# Run Fluent
fluent 3ddp -g -t${NPF} -driver null -ssh -scport=${HOSTPORT[0]} -schost=${HOSTPORT[1]} -scname="${JOB[4]}" < ${JOURNAL} > output.FLUENT &
# Run Ansys
ansys162 -b -scport=${HOSTPORT[0]} -schost=${HOSTPORT[1]} -scname="${JOB[2]}" -i ${STRUCTURALDATA} -o output.ANSYS -np ${NPA}
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
fi
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
CFX
The following CFX script requests 4 cores on 1 node and 1 week of walltime; -m be sends mail when the job begins and ends, and -M gives the email address to send to.
#!/bin/bash
#PBS -N JobName
#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true
#PBS -l walltime=168:00:00
#PBS -m be
#PBS -e output.err
#PBS -o output.out
#PBS -M username@sun.ac.za
DEF=inputfile.def
INI=inputfile.ini
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
module load app/ansys162
# build a comma-separated list of the assigned processors for -par-dist
PAR=$(sed -e '{:q;N;s/\n/,/g;t q}' ${PBS_NODEFILE})
cfx5solve -def ${DEF} -ini ${INI} -par-dist ${PAR}
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Abaqus
The following Abaqus script requests 4 cores on 1 node; -m be sends mail when the job begins and ends, and -M gives the email address to send to. It uses the system default walltime.
#!/bin/bash
#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true
#PBS -m be
#PBS -M username@sun.ac.za
# the input file without the .inp extension
JOBNAME=xyz
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
module load app/abaqus
# Automatically calculate the number of processors
np=$(cat ${PBS_NODEFILE} | wc -l)
abaqus job=${JOBNAME} input=${JOBNAME}.inp analysis cpus=${np} scratch=${TMP} interactive
wait
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
R
The following R script requests 1 core on 1 node; -m be sends mail when the job begins and ends, and -M gives the email address to send to. It uses the system default walltime.
#!/bin/bash
#PBS -l select=1:ncpus=1
#PBS -M username@sun.ac.za
#PBS -m be
cd ${PBS_O_WORKDIR}
module load app/R
R CMD BATCH script.R
CPMD
The following CPMD script requests 8 cores on 1 node; -N names the job ‘cpmd’, -m e sends mail when the job ends, and -M gives the email address to send to. CPMD runs with MPI, which needs to be told which nodes it may use; that list is given in $PBS_NODEFILE. It uses the system default walltime.
#!/bin/bash
#PBS -N cpmd
#PBS -l select=1:ncpus=8:mpiprocs=8
#PBS -m e
#PBS -M username@sun.ac.za
module load compilers/gcc-4.8.2
module load openmpi-x86_64
cd ${PBS_O_WORKDIR}
# Automatically calculate the number of processors
np=$(cat ${PBS_NODEFILE} | wc -l)
mpirun -np ${np} --hostfile ${PBS_NODEFILE} /apps/CPMD/3.17.1/cpmd.x xyz.inp > xyz.out
Gaussian
Gaussian generates massive temporary files (the .rwf file). Generally we don’t care about this file afterwards, so this script doesn’t copy it back from temporary storage after job completion. It requests 6 weeks of walltime.
#!/bin/bash
#PBS -N SomeHecticallyChemicalName
#PBS -l select=1:ncpus=8:mpiprocs=8:mem=16GB:scratch=true
#PBS -l walltime=1008:00:00
#PBS -m be
INPUT=input.cor
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
TMP2=/scratch2/${PBS_JOBID}
mkdir -p ${TMP} ${TMP2}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
if [ ! -d "${TMP2}" ]; then
echo "Cannot create overflow temporary directory. Disk probably full."
exit 1
fi
export GAUSS_SCRDIR=${TMP}
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
# make sure input file has %RWF line for specifying temporary storage
if [ -z "$(/bin/grep ^%RWF ${INPUT})" ]; then
/bin/sed -i '1s/^/%RWF\n/' ${INPUT}
fi
# assign 100GB of local temporary storage for every 4 CPUs
MAXTMP=$(( $(/bin/cat ${PBS_NODEFILE} | /usr/bin/wc -l) * 100 / 4 ))
# update input file to use local temporary storage
/bin/sed -i -E "s|%RWF(.*)|%RWF=${TMP}/,${MAXTMP}GB,${TMP2}/1.rwf,500GB,${TMP2}/2.rwf,500GB,${TMP2}/3.rwf,500GB,${TMP2}/4.rwf,500GB,${TMP2}/,-1|g" ${TMP}/${INPUT}
. /apps/Gaussian/09D/g09/bsd/g09.profile
/apps/Gaussian/09D/g09/g09 ${INPUT} > output.log
# job done, copy everything except .rwf back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax --exclude=*.rwf ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP} ${TMP2}
This script also requires that the input file contains a line starting with %RWF, so that the script can update the input file to specify that only the first part of the RWF is written to the compute node’s local scratch space; the overflow is written to the scratch space on the storage server. Unfortunately, RWF files can grow to more than 1TB in size and fill the compute node’s scratch space, starving other jobs and causing the job itself to fail.
pisoFOAM
pisoFOAM generates a lot of output, not all of which is useful. In this example we use crontab to schedule the deletion of unwanted output while the job runs. The script requests 3 weeks of walltime.
#!/bin/bash
#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true
#PBS -l walltime=504:00:00
#PBS -m be
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory in /scratch
TMP=/scratch/${PBS_JOBID}
/bin/mkdir -p ${TMP}
echo "Temporary work dir: ${TMP}"
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}/" ${TMP}/
cd ${TMP}
# start crontab, delete unwanted files every 6 hours
/bin/echo "0 */6 * * * /bin/find ${TMP} -regextype posix-egrep -regex '(${TMP}/processor[0-9]+)/([^/]*)/((uniform/.*)|ddt.*|phi.*|.*_0.*)' -exec rm {} \\;" | /usr/bin/crontab
# Automatically calculate the number of processors
np=$(cat ${PBS_NODEFILE} | wc -l)
module load compilers/gcc-4.8.2
module load openmpi/1.6.5
export MPI_BUFFER_SIZE=200000000
export FOAM_INST_DIR=/apps/OpenFOAM
foamDotFile=${FOAM_INST_DIR}/OpenFOAM-2.2.2/etc/bashrc
[ -f ${foamDotFile} ] && . ${foamDotFile}
blockMesh
decomposePar
mpirun -np ${np} pisoFoam -parallel > ${PBS_O_WORKDIR}/output.log
# remove crontab entry (assumes I only have one on this node)
/usr/bin/crontab -r
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax --exclude "*_0.gz" --exclude "phi*.gz" --exclude "ddt*.gz" ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
MSC Marc
The following Marc script requests 8 cores on 1 node, 24 hours of walltime and 8 Marc licenses; -m e sends mail when the job ends.
#!/bin/bash
#PBS -N JobName
#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true
#PBS -l walltime=24:00:00
#PBS -l license_marc=8
#PBS -m e
INPUT=inputfile
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
module load app/marc
# get number of processors assigned
NPS=$(/bin/cat ${PBS_NODEFILE} | /usr/bin/wc -l)
HOSTS=hosts.${PBS_JOBID}
[ -f ${HOSTS} ] && /bin/rm ${HOSTS}
# create hosts file
uniq -c ${PBS_NODEFILE} | while read np host; do
/bin/echo "${host} ${np}" >> ${HOSTS}
done
if [ ${NPS} -gt 1 ]; then
run_marc -j ${INPUT} -ver n -back n -ci n -cr n -nps ${NPS} -host ${HOSTS}
else
run_marc -j ${INPUT} -ver n -back n -ci n -cr n
fi
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
mothur
mothur works with massive data volumes, and therefore has to use local scratch space to avoid overloading the file server. The script requests 1 core on 1 node.
mothur’s input can either be a file listing all the commands to process, or the commands can be given on the command line prefixed with a #.
#!/bin/bash
#PBS -l select=1:ncpus=1:mpiprocs=1:scratch=true
#PBS -m e
# make sure I'm the only one that can read my output
umask 0077
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
if [ ! -d "${TMP}" ]; then
echo "Cannot create temporary directory. Disk probably full."
exit 1
fi
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}"/ ${TMP}/
cd ${TMP}
module load app/mothur
# Automatically calculate the number of processors
np=$(cat ${PBS_NODEFILE} | wc -l)
mothur inputfile
# could also put the commands on the command line
#mothur "#cluster.split(column=file.dist, name=file.names, large=T, processors=${np})"
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Hadoop
Hadoop is useful for sorting through massive amounts of data. In this example we read the input data into a distributed HDFS and run a map/reduce; upon completion the output is copied out of HDFS to central storage. Nodes with scratch=true are requested due to their large scratch space. The input and output data together should not exceed 1.5TB per node, so we request 1 node for every 750GB of input data. In this example we request 6 nodes for 4TB of input data.
Java example
#!/bin/bash
#PBS -V
#PBS -l select=6:ncpus=1:scratch=true
#PBS -N hadoopDedupe
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}/logs
JAR=dedupe.jar
CLASS=za.ac.sun.hpc.dedupe
INPUT="${PBS_O_WORKDIR}/input"
OUTPUT="${PBS_O_WORKDIR}"
HADOOP_PREFIX=/apps/hadoop/2.4.1
JAVA_HOME=/usr/lib/jvm/java
HADOOP_CONF_DIR="${PBS_O_WORKDIR}/conf"
# copy the class to ${TMP}
cp "${HADOOP_PREFIX}/common/${JAR}" ${TMP}
# create Hadoop configs
cp -a ${HADOOP_PREFIX}/conf ${HADOOP_CONF_DIR}
MASTER=$(hostname)
uniq ${PBS_NODEFILE} > ${HADOOP_CONF_DIR}/slaves
echo ${MASTER} > ${HADOOP_CONF_DIR}/masters
sed -i "s|export JAVA_HOME=.*|export JAVA_HOME=${JAVA_HOME}|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
sed -i "s|<value>/scratch/.*</value>|<value>/scratch/${PBS_JOBID}</value>|g" ${HADOOP_CONF_DIR}/{hdfs,core}-site.xml
sed -i "s|<value>.*:50090</value>|<value>${MASTER}:50090</value>|g" ${HADOOP_CONF_DIR}/{hdfs,core}-site.xml
sed -i "s|hdfs://.*:|hdfs://${MASTER}:|g" ${HADOOP_CONF_DIR}/core-site.xml
sed -i "s|.*export HADOOP_LOG_DIR.*|export HADOOP_LOG_DIR=${TMP}/logs|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
sed -i "s|.*export HADOOP_PID_DIR.*|export HADOOP_PID_DIR=${TMP}|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
# setup Hadoop services
. ${HADOOP_CONF_DIR}/hadoop-env.sh
${HADOOP_PREFIX}/bin/hdfs namenode -format
${HADOOP_PREFIX}/sbin/start-dfs.sh
# import data
${HADOOP_PREFIX}/bin/hdfs dfs -mkdir /user
${HADOOP_PREFIX}/bin/hdfs dfs -mkdir /user/${USER}
${HADOOP_PREFIX}/bin/hdfs dfs -put ${INPUT} input
cd ${TMP}
# run hadoop job
${HADOOP_PREFIX}/bin/hadoop jar ${JAR} ${CLASS} input output
# retrieve output from Hadoop
mkdir -p "${OUTPUT}"
${HADOOP_PREFIX}/bin/hdfs dfs -get output "${OUTPUT}"
# stop Hadoop services
${HADOOP_PREFIX}/sbin/stop-dfs.sh
# retrieve logs
cp -a ${TMP}/logs "${PBS_O_WORKDIR}"
# clear HDFS directories on all slaves
cat ${HADOOP_CONF_DIR}/slaves | while read slave; do
ssh -n ${slave} "rm -rf ${TMP}"
done
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Third-party script example
#!/bin/bash
#PBS -V
#PBS -l select=6:ncpus=1:scratch=true
#PBS -N hadoopDedupe
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}/logs
INPUT="${PBS_O_WORKDIR}/input"
OUTPUT="${PBS_O_WORKDIR}"
MAPPER="mapper.py"
REDUCER="reducer.py"
# copy the mapper and reducer to ${TMP}
cp "${PBS_O_WORKDIR}/${MAPPER}" "${PBS_O_WORKDIR}/${REDUCER}" ${TMP}
HADOOP_PREFIX=/apps/hadoop/2.4.1
JAVA_HOME=/usr/lib/jvm/java
HADOOP_CONF_DIR="${PBS_O_WORKDIR}/conf"
# create Hadoop configs
cp -a ${HADOOP_PREFIX}/conf ${HADOOP_CONF_DIR}
MASTER=$(hostname)
uniq ${PBS_NODEFILE} > ${HADOOP_CONF_DIR}/slaves
echo ${MASTER} > ${HADOOP_CONF_DIR}/masters
sed -i "s|export JAVA_HOME=.*|export JAVA_HOME=${JAVA_HOME}|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
sed -i "s|<value>/scratch/.*</value>|<value>/scratch/${PBS_JOBID}</value>|g" ${HADOOP_CONF_DIR}/{hdfs,core}-site.xml
sed -i "s|<value>.*:50090</value>|<value>${MASTER}:50090</value>|g" ${HADOOP_CONF_DIR}/{hdfs,core}-site.xml
sed -i "s|hdfs://.*:|hdfs://${MASTER}:|g" ${HADOOP_CONF_DIR}/core-site.xml
sed -i "s|.*export HADOOP_LOG_DIR.*|export HADOOP_LOG_DIR=${TMP}/logs|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
sed -i "s|.*export HADOOP_PID_DIR.*|export HADOOP_PID_DIR=${TMP}|g" ${HADOOP_CONF_DIR}/hadoop-env.sh
# setup Hadoop services
. ${HADOOP_CONF_DIR}/hadoop-env.sh
${HADOOP_PREFIX}/bin/hdfs namenode -format
${HADOOP_PREFIX}/sbin/start-dfs.sh
# import data
${HADOOP_PREFIX}/bin/hdfs dfs -mkdir /user
${HADOOP_PREFIX}/bin/hdfs dfs -mkdir /user/${USER}
${HADOOP_PREFIX}/bin/hdfs dfs -put ${INPUT} input
cd ${TMP}
# run hadoop job
STREAM=${HADOOP_PREFIX}/share/hadoop/tools/lib/hadoop-streaming-2.4.1.jar
${HADOOP_PREFIX}/bin/hadoop jar ${STREAM} ${OPTIONS} -files ${MAPPER},${REDUCER} -mapper ${MAPPER} -reducer ${REDUCER} -input input -output output
# retrieve output from Hadoop
mkdir -p "${OUTPUT}"
${HADOOP_PREFIX}/bin/hdfs dfs -get output "${OUTPUT}"
# stop Hadoop services
${HADOOP_PREFIX}/sbin/stop-dfs.sh
# retrieve logs
cp -a ${TMP}/logs "${PBS_O_WORKDIR}"
# clear HDFS directories on all slaves
cat ${HADOOP_CONF_DIR}/slaves | while read slave; do
ssh -n ${slave} "rm -rf ${TMP}"
done
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Numeca
This script assumes you have 4 Numeca files available for your project. If your project is named proj1, the required files are proj1.iec, proj1.igg, proj1.bcs and proj1.cgns.
Script requests 4 hours walltime, and uses 8 cores on a host with scratch space.
#!/bin/bash
#PBS -N proj1
#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true
#PBS -l walltime=04:00:00
INPUT=proj1
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory with the job id as name in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
echo "Temporary work dir: ${TMP}"
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}/" ${TMP}/
cd ${TMP}
NUMECA=/apps/numeca/bin
VERSION=90_3
# create hosts list
TMPH=$(/bin/mktemp)
/usr/bin/tail -n +2 ${PBS_NODEFILE} | /usr/bin/uniq -c | while read np host; do
/bin/echo "${host} ${np}" >> ${TMPH}
done
NHOSTS=$(/bin/cat ${TMPH} | /usr/bin/wc -l)
LHOSTS=$(while read line; do echo -n ${line}; done < ${TMPH})
/bin/rm ${TMPH}
# Create .run file
${NUMECA}/fine -niversion ${VERSION} -batch ${INPUT}.iec ${INPUT}.igg ${PBS_JOBID}.run
# Set up parallel run
${NUMECA}/fine -niversion ${VERSION} -batch -parallel ${PBS_JOBID}.run ${NHOSTS} ${LHOSTS}
# Start solver
${NUMECA}/euranusTurbo_parallel ${PBS_JOBID}.run -steering ${PBS_JOBID}.steering -niversion ${VERSION} -p4pg ${PBS_JOBID}.p4pg
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
This script assumes you have 3 Numeca files available for your project. If your project is named proj1, the required files are proj1.iec, proj1.trb and proj1.geomTurbo.
Script requests 4 hours walltime, and uses 8 cores on a host with scratch space.
#!/bin/bash
#PBS -N proj1
#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true
#PBS -l walltime=04:00:00
INPUT=proj1
# make sure I'm the only one that can read my output
umask 0077
# create a temporary directory with the job id as name in /scratch
TMP=/scratch/${PBS_JOBID}
mkdir -p ${TMP}
echo "Temporary work dir: ${TMP}"
# copy the input files to ${TMP}
echo "Copying from ${PBS_O_WORKDIR}/ to ${TMP}/"
/usr/bin/rsync -vax "${PBS_O_WORKDIR}/" ${TMP}/
# Automatically calculate the number of processors - 1 less than requested, used for master process
NP=$(( $(cat ${PBS_NODEFILE} | wc -l) - 1))
cd ${TMP}
NUMECA=/apps/numeca/bin
VERSION=101
# inputs .trb, .geomTurbo
# outputs .bcs, .cgns, .igg
/usr/bin/xvfb-run -d ${NUMECA}/igg -print -batch -niversion ${VERSION} -autogrid5 -trb ${TMP}/${INPUT}.trb -geomTurbo ${TMP}/${INPUT}.geomTurbo -mesh ${TMP}/${PBS_JOBID}.igg
# inputs .iec, .igg
# outputs .run
/usr/bin/xvfb-run -d ${NUMECA}/fine -print -batch -niversion ${VERSION} -project ${TMP}/${INPUT}.iec -mesh ${TMP}/${PBS_JOBID}.igg -computation ${TMP}/${PBS_JOBID}.run
# inputs .run
# outputs .p4pg, .batch
/usr/bin/xvfb-run -d ${NUMECA}/fine -print -batch -niversion ${VERSION} -parallel -computation ${TMP}/${PBS_JOBID}.run -nproc ${NP} -nbint 128 -nbreal 128
# inputs .run, .p4pg
${NUMECA}/euranusTurbo_parallel${VERSION} ${TMP}/${PBS_JOBID}.run -steering ${TMP}/${PBS_JOBID}.steering -p4pg ${TMP}/${PBS_JOBID}.p4pg
# job done, copy everything back
echo "Copying from ${TMP}/ to ${PBS_O_WORKDIR}/"
/usr/bin/rsync -vax ${TMP}/ "${PBS_O_WORKDIR}/"
# delete my temporary files
[ $? -eq 0 ] && /bin/rm -rf ${TMP}
Programs that handle job submission differently
MATLAB
With MATLAB’s Parallel Computing Toolbox (PCT), it’s possible to submit your MATLAB code directly from your desktop to the HPC without writing submit scripts and submitting the job manually. See MathWorks for further details.
The HPC has a license to allow the use of 16 cores by MATLAB. MATLAB R2015a and all standard toolboxes are installed on the HPC, and any MATLAB product you are licensed for will be able to run on the HPC.
To be able to use the HPC for MATLAB, you will require a Parallel Computing Toolbox license on your desktop.
Setup
- Install the required scripts for a generic PBS cluster
  - Copy all the files from MATLABROOT\toolbox\distcomp\examples\integration\pbs\nonshared to MATLABROOT\toolbox\local. MATLABROOT is the location where you installed MATLAB on your machine, most probably C:\Program Files\MATLAB\R2015a or /usr/local/MATLAB/R2015a.
  - Edit MATLABROOT\toolbox\local\independentSubmitFcn.m: change line 122 by adding -l walltime=24:00:00, so that it reads
    additionalSubmitArgs = '-l walltime=24:00:00';
  - Edit MATLABROOT\toolbox\local\communicatingSubmitFcn.m: change line 117 by adding -l walltime=24:00:00, so that it reads
    additionalSubmitArgs = sprintf('-l select=%d:ncpus=%d -l walltime=24:00:00', numberOfNodes, procsPerNode);
  - These two changes are required to increase the default walltime on the HPC.
- Force the use of FQDN or IP in hostname lookup for parpool
  - Create (or edit if it already exists) MATLABROOT\toolbox\local\startup.m, and add: pctconfig('hostname', 'IP or hostname');
    - Replace IP or hostname with your machine’s IP or hostname (ip addr list on Linux, ipconfig in a Command Prompt on Windows).
  - Restart MATLAB to apply the change, or run the command manually in the Command Window.
- Create a cluster profile
  - If your PCT is installed and licensed correctly, you should see a dropdown named Parallel in your toolbar.
  - Open the Parallel dropdown, and select Manage Cluster Profiles…
  - Add a new Generic custom 3rd party cluster profile.
  - Rename the new cluster profile to ‘HPC1’ by right-clicking on it.
  - Set the following values:
    - Description: HPC1
    - NumWorkers: 16
    - ClusterMatlabRoot: /apps/MATLAB/R2015a
    - IndependentSubmitFcn: {@independentSubmitFcn, 'hpc1.sun.ac.za', '/scratch2/user'} (replace user with your own username)
    - CommunicatingSubmitFcn: {@communicatingSubmitFcn, 'hpc1.sun.ac.za', '/scratch2/user'} (replace user with your own username)
    - OperatingSystem: unix
    - HasSharedFilesystem: false
    - GetJobStateFcn: @getJobStateFcn
    - DeleteJobFcn: @deleteJobFcn
    - All other values can be left at their default (or empty) values.
  - Select the ‘Validation Results’ tab and click Validate.
  - You will be prompted for your HPC username. When prompted for an identity file, select No if you don’t know what it is.
  - Depending on how busy the HPC is, validation should complete in 10 to 30 minutes.
  - If the last step fails, your hostname is most probably set up incorrectly.