{"id":241,"date":"2016-11-17T23:35:44","date_gmt":"2016-11-17T23:35:44","guid":{"rendered":"http:\/\/sites.sandiego.edu\/hpc\/?page_id=241"},"modified":"2017-05-17T16:21:19","modified_gmt":"2017-05-17T16:21:19","slug":"submit-jobs-hpc-saber1","status":"publish","type":"page","link":"https:\/\/sites.sandiego.edu\/hpc\/submit-jobs-hpc-saber1\/","title":{"rendered":"How to submit jobs on HPC saber1"},"content":{"rendered":"<h2><span id=\"Submitting_jobs\" class=\"mw-headline\">Submitting jobs<\/span><\/h2>\n<p>PBS comes with very complete man pages. Therefore, for complete documentation of PBS commands you are encouraged to type <code>man pbs<\/code> and go from there. Jobs are submitted using the <code>qsub<\/code> command. Type <code>man qsub<\/code> for information on the plethora of options that it offers.<\/p>\n<p>Let&#8217;s say I have an executable called &#8220;myprog&#8221;. Let me try and submit it to PBS:<\/p>\n<pre>[username@launch ~]$ qsub myprog\r\nqsub:  file must be an ascii script\r\n<\/pre>\n<p>Oops&#8230; That didn&#8217;t work because qsub expects a shell script. Any shell should work, so use your favorite one. So I write a simple script called &#8220;myscript.sh&#8221;<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"nb\">cd<\/span> <span class=\"nv\">$PBS_O_WORKDIR<\/span>\r\n.\/myprog argument1 argument2\r\n<\/pre>\n<\/div>\n<p>and then I submit it:<\/p>\n<pre>[username@launch ~]$ qsub myscript.sh\r\n4681.mn01\r\n<\/pre>\n<p>That worked! Note the use of the <code>$PBS_O_WORKDIR<\/code> environment variable. This is important, since by default PBS on our cluster will start executing the commands in your shell script from your home directory. To go to the directory in which you executed <code>qsub<\/code>, <code>cd<\/code> to <code>$PBS_O_WORKDIR<\/code>. 
There are several other useful PBS environment variables that we will encounter later.<\/p>\n<h3><span id=\"Editing_files\" class=\"mw-headline\">Editing files<\/span><\/h3>\n<p>Editing files on the cluster can be done through a couple of different methods&#8230;<\/p>\n<h4><span id=\"Native_Editors\" class=\"mw-headline\">Native Editors<\/span><\/h4>\n<ul>\n<li><code>vim<\/code> &#8211; The visual editor (vi) is the traditional Unix editor. However, it is not necessarily the most intuitive editor. That being the case, if you are unfamiliar with it, there is a vi tutorial, <code>vimtutor<\/code>.<\/li>\n<li><code>pico<\/code> &#8211; While pico is not installed on the system, nano is installed, and is a pico work-a-like.<\/li>\n<li><code>nano<\/code> &#8211; Nano has a good bit of on-screen help to make it easier to use.<\/li>\n<\/ul>\n<h4><span id=\"External_Editors\" class=\"mw-headline\">External Editors<\/span><\/h4>\n<p>You can also use your favourite editor on your local machine and then transfer the files over to the HPC afterwards. One caveat to this is that files created on Windows machines usually contain unprintable characters which may be misinterpreted by Linux command interpreters (shells). If this happens, there is a utility called <code>dos2unix<\/code> that you can use to convert the text file from DOS\/Windows formatting to Linux formatting.<\/p>\n<pre>$ dos2unix script.sub\r\ndos2unix: converting file script.sub to UNIX format ...\r\n<\/pre>\n<h3><span id=\"Specifying_job_parameters\" class=\"mw-headline\">Specifying job parameters<\/span><\/h3>\n<p>By default, any script you submit will run on a single processor for a maximum of 5 minutes. The name of the job will be the name of the script, and it will not email you when it starts, finishes, or is interrupted. stdout and stderr are collected into separate files named after the job number. You can affect the default behaviour of PBS by passing it parameters. 
These parameters can be specified on the command line or inside the shell script itself. For example, let&#8217;s say I want to send stdout and stderr to a file that is different from the default:<\/p>\n<pre>[username@launch ~]$ qsub -e myprog.err -o myprog.out myscript.sh\r\n<\/pre>\n<p>Alternatively, I can actually edit myscript.sh to include these parameters. I can specify any PBS command line parameter I want in a line that begins with &#8220;#PBS&#8221;:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -e myprog.err<\/span>\r\n<span class=\"c\">#PBS -o myprog.out<\/span>\r\n<span class=\"nb\">cd<\/span> <span class=\"nv\">$PBS_O_WORKDIR<\/span>\r\n.\/myprog argument1 argument2\r\n<\/pre>\n<\/div>\n<p>Now I just submit my modified script with no command-line arguments:<\/p>\n<pre>[username@launch ~]$ qsub myscript.sh\r\n<\/pre>\n<h3><span id=\"Useful_PBS_parameters\" class=\"mw-headline\">Useful PBS parameters<\/span><\/h3>\n<p>Here is an example of a more involved script that requests only 1 hour of execution time, renames the job, and sends email when the job begins, ends, or aborts:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n \r\n<span class=\"c\"># Name of my job:<\/span>\r\n<span class=\"c\">#PBS -N My-Program<\/span>\r\n \r\n<span class=\"c\"># Run for 1 hour:<\/span>\r\n<span class=\"c\">#PBS -l walltime=1:00:00<\/span>\r\n \r\n<span class=\"c\"># Where to write stderr:<\/span>\r\n<span class=\"c\">#PBS -e myprog.err<\/span>\r\n \r\n<span class=\"c\"># Where to write stdout: <\/span>\r\n<span class=\"c\">#PBS -o myprog.out<\/span>\r\n \r\n<span class=\"c\"># Send me email when my job aborts, begins, or ends<\/span>\r\n<span class=\"c\">#PBS -m abe<\/span>\r\n \r\n<span class=\"c\"># This command switches to the directory from which the \"qsub\" command was run:<\/span>\r\n<span class=\"nb\">cd<\/span> 
<span class=\"nv\">$PBS_O_WORKDIR<\/span>\r\n \r\n<span class=\"c\">#  Now run my program<\/span>\r\n.\/myprog argument1 argument2\r\n \r\n<span class=\"nb\">echo <\/span>Done!\r\n<\/pre>\n<\/div>\n<p>Some more useful PBS parameters:<\/p>\n<ul>\n<li>-M: Specify your email address (defaults to campus email).<\/li>\n<li>-j oe: merge standard output and standard error into standard output file.<\/li>\n<li>-V: export all your environment variables to the batch job.<\/li>\n<li>-I: run an interactive job (see below).<\/li>\n<\/ul>\n<p>Once again, you are encouraged to consult the qsub manpage for more options.<\/p>\n<h3><span id=\"Special_concerns_for_running_OpenMP_programs\" class=\"mw-headline\">Special concerns for running OpenMP programs<\/span><\/h3>\n<p>By default, PBS assigns you 1 core on 1 node. You can, however, run your job on up to 64 cores per node. Therefore, if you want to run an OpenMP program, you must specify the number of processors per node. This is done with the flag <code>-l select=1:ncpus=&lt;cores&gt;<\/code> where <code>&lt;cores&gt;<\/code> is the number of OpenMP threads you wish to use. 
Keep in mind that you still must set the OMP_NUM_THREADS environment variable within your script, e.g.:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N My-OpenMP-Script<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8<\/span>\r\n<span class=\"c\">#PBS -l walltime=1:00:00<\/span>\r\n \r\n<span class=\"nb\">cd<\/span> <span class=\"nv\">$PBS_O_WORKDIR<\/span>\r\n<span class=\"nb\">export <\/span><span class=\"nv\">OMP_NUM_THREADS<\/span><span class=\"o\">=<\/span>8\r\n.\/MyOpenMPProgram\r\n<\/pre>\n<\/div>\n<h3><span id=\"Jobs_with_large_output_files\" class=\"mw-headline\">Jobs with large output files<\/span><\/h3>\n<p>Instead of a job submission like this:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N massiveJob<\/span>\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"nv\">$PBS_O_WORKDIR<\/span>\r\nmyprogram -i \/home\/me\/inputfile -o \/home\/me\/outputfile\r\n<\/pre>\n<\/div>\n<p>change it to something like this:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=1:scratch=true<\/span>\r\n<span class=\"c\">#PBS -N massiveJob<\/span>\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"c\"># create a temporary directory with the job ID as name in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Temporary work dir: ${TMP}\"<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span 
class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># write my output to my new temporary work directory<\/span>\r\nmyprogram -i inputfile -o outputfile\r\n\r\n<span class=\"c\"># job done, copy everything back<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<p>Any job that has to write massive amounts of data will benefit from the above. Take note of the<code>:<\/code> <code>scratch=true<\/code> that was added to the node request line. If you do not add that feature request to the script, your job may be assigned to a node without scratch space.<\/p>\n<h3><span id=\"Using_the_PBS_NODEFILE_for_multi-threaded_jobs\" class=\"mw-headline\">Using the PBS_NODEFILE for multi-threaded jobs<\/span><\/h3>\n<p>Until now, we have only dealt with serial jobs. In a serial job, your PBS script will automatically be executed on the target node assigned by the scheduler. If you asked for more than one node, however, your script will only execute on the first node of the set of nodes allocated to you. To access the remainder of the nodes, you must either use MPI or manually launch threads. 
But which nodes to run on? PBS gives you a list of nodes in a file at the location pointed to by the <code>PBS_NODEFILE<\/code> environment variable. Your shell script can therefore determine which nodes the job has been assigned by reading this file:<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -l select=2:mpiprocs=8<\/span>\r\n\r\n<span class=\"nb\">echo <\/span>The nodefile <span class=\"k\">for <\/span>this job is stored at <span class=\"k\">$(<\/span><span class=\"nb\">echo<\/span> <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">})<\/span>\r\ncat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<p>When you run this job, you should then get output similar to:<\/p>\n<pre>The nodefile for this job is stored at \/var\/spool\/PBS\/aux\/33.pbsserver.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp001.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\ncomp002.hpc\r\n<\/pre>\n<p>If you have an application that manually forks processes onto the nodes of your job, you are responsible for parsing the <code>PBS_NODEFILE<\/code> to determine which nodes those are.<\/p>\n<p>Some MPI implementations require you to feed the <code>PBS_NODEFILE<\/code> to <code>mpirun<\/code>, e.g. for Open MPI one may pass <code>-hostfile ${PBS_NODEFILE}<\/code>.<\/p>\n<h3><span id=\"Selecting_different_node_in_one_job\" class=\"mw-headline\">Selecting different node types in one job<\/span><\/h3>\n<p>Using the above information, one may allocate multiple nodes of the same type, e.g. multiple 48-core nodes. In order to mix multiple different resources, one may use PBS&#8217;s &#8220;+&#8221; notation. 
For example, in order to mix one 48-core node and two 8-core nodes in one PBS job, one may pass:<\/p>\n<pre>[username@launch ~]$ qsub -lselect=1:ncpus=48:mpiprocs=48+2:ncpus=8:mpiprocs=8 myscript.sh\r\n<\/pre>\n<h2><span id=\"Guidelines_.2F_Rules\" class=\"mw-headline\">Guidelines \/ Rules<\/span><\/h2>\n<ul>\n<li>Create a temporary working directory in <b>\/scratch<\/b>, not <b>\/tmp<\/b>\n<ul>\n<li><b>\/tmp<\/b> is reserved for use by the operating system, and is only 5GB in size.<\/li>\n<li>Preferably specify <b>\/scratch\/$PBS_JOBID<\/b> in your submit script so that it&#8217;s easy to associate scratch directories with their jobs.<\/li>\n<li>Copy your input files to your scratch space and work on the data there. Avoid using your home directory as much as possible.\n<ul>\n<li>If you need more than about 500GB of scratch space, you can also use <b>\/scratch2<\/b>. It&#8217;s a lot slower than <b>\/scratch<\/b>, so try to avoid that too.<\/li>\n<\/ul>\n<\/li>\n<li>Copy only your results back to your home directory. Input files that haven&#8217;t changed don&#8217;t need to be copied.<\/li>\n<li>Erase your temporary working directory when you&#8217;re done.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<ul>\n<li>Secure your work from accidental deletion or contamination by disallowing other users access to your scratch directories\n<ul>\n<li><code>umask 0077<\/code> disallows access by all other users<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2><span id=\"Examples\" class=\"mw-headline\">Examples<\/span><\/h2>\n<h3><span id=\"ADF\" class=\"mw-headline\">ADF<\/span><\/h3>\n<p>ADF generates run files, which are scripts that contain your data. Make sure to convert the run file to UNIX format first using <b>dos2unix<\/b>, and remember to make it executable with <b>chmod +x<\/b>.<\/p>\n<p>ADF script requesting 4 cores, on 1 node, -m selects to mail <b>b<\/b>egin and <b>e<\/b>nd messages and -M is the email address to send to. 
Requests 1 week walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N JobName<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=4:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=168:00:00<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>inputfile.run\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n. 
\/apps\/adf\/2014.04\/adfrc.sh\r\n\r\n<span class=\"c\"># override ADF's scratch directory<\/span>\r\n<span class=\"nb\">export <\/span><span class=\"nv\">SCM_TMPDIR<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># override log file<\/span>\r\n<span class=\"nb\">export <\/span><span class=\"nv\">SCM_LOGFILE<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${TMP}\/${PBS_JOBID}.logfile\"<\/span>\r\n\r\n<span class=\"c\"># Submit job<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">PBS_O_WORKDIR<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"ANSYS\" class=\"mw-headline\">ANSYS<\/span><\/h3>\n<h4><span id=\"Fluent\" class=\"mw-headline\">Fluent<\/span><\/h4>\n<p>Fluent script requesting 4 cores, on 1 node, -m selects to mail <b>b<\/b>egin and <b>e<\/b>nd messages and -M is the email address to send to. 
Requests 1 week walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N JobName<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=168:00:00<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n<span class=\"c\">#PBS -e output.err<\/span>\r\n<span class=\"c\">#PBS -o output.out<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>inputfile.jou\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># choose version of FLUENT<\/span>\r\n<span class=\"c\">#module load app\/ansys150<\/span>\r\nmodule load app\/ansys162\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors<\/span>\r\n<span class=\"nv\">np<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n\r\nfluent 3d -pdefault -cnf<span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> -mpi<span class=\"o\">=<\/span>intel -g -t<span class=\"k\">${<\/span><span class=\"nv\">np<\/span><span class=\"k\">}<\/span> -ssh -i <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span 
class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h4><span id=\"Fluid-Structure_Interaction\" class=\"mw-headline\">Fluid-Structure Interaction<\/span><\/h4>\n<p>You need the following 5 files:<\/p>\n<ul>\n<li>coupling (.sci) file<\/li>\n<li>structural data (.dat) file<\/li>\n<li>case (.cas.gz) file<\/li>\n<li>journal (.jnl) file<\/li>\n<li>submit script (.sh)<\/li>\n<\/ul>\n<p>The coupling file should contain two participants. The names of these participants should not have spaces in them. In the example below, <code>Solution 4<\/code> should be renamed to something like <code>Solution4<\/code>. Make sure to replace all instances of the name in the file.<\/p>\n<pre>&lt;SystemCoupling Ver=\"1\"&gt;\r\n  &lt;Participants Count=\"2\"&gt;\r\n    &lt;Participant Ver=\"1\" Type=\"0\"&gt;\r\n      &lt;Name PropType=\"string\"&gt;Solution 4&lt;\/Name&gt;\r\n      &lt;DisplayName PropType=\"string\"&gt;0012 V2&lt;\/DisplayName&gt;\r\n      &lt;SupportsCouplingIterations PropType=\"bool\"&gt;True&lt;\/SupportsCouplingIterations&gt;\r\n      &lt;UnitSystem PropType=\"string\"&gt;MKS_STANDARD&lt;\/UnitSystem&gt;\r\n      &lt;Regions Count=\"1\"&gt;\r\n&lt;--- snip ---&gt;\r\n<\/pre>\n<p>The journal file should contain (replace the filename on the \u2018rc\u2019 line with your case file):<\/p>\n<pre>file\/start-transcript Solution.trn\r\nfile set-batch-options , yes ,\r\nrc FFF-1.1-1-00047.cas.gz\r\nsolve\/initialize\/initialize-flow\r\n(sc-solve)\r\nwcd FluentRestart.cas.gz\r\nexit\r\nok\r\n<\/pre>\n<p>The job script is given below. 
Update the <code>COUPLING<\/code>, <code>STRUCTURALDATA<\/code>, <code>JOURNAL<\/code> and <code>NPA<\/code> variables to reflect your case.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -N fsi<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=48:mpiprocs=48:mem=90GB:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=24:00:00<\/span>\r\n\r\n<span class=\"nv\">COUPLING<\/span><span class=\"o\">=<\/span>coupling.sci\r\n<span class=\"nv\">STRUCTURALDATA<\/span><span class=\"o\">=<\/span>ds.dat\r\n<span class=\"nv\">JOURNAL<\/span><span class=\"o\">=<\/span>fluent.journal\r\n\r\n<span class=\"c\"># number of processors for Ansys<\/span>\r\n<span class=\"nv\">NPA<\/span><span class=\"o\">=<\/span>8\r\n<span class=\"c\"># Automatically calculate the number of processors left over for Fluent<\/span>\r\n<span class=\"nv\">NP<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n<span class=\"nv\">NPF<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span>NP-NPA<span class=\"k\">))<\/span>\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n\r\n<span class=\"c\"># create a temporary directory with a random name in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Temporary work dir: ${TMP}\"<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! 
-d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/ansys162\r\n\r\n<span class=\"c\"># Start coupling program<\/span>\r\n\/apps\/ansys_inc\/v162\/aisol\/.workbench -cmd ansys.services.systemcoupling.exe -inputFile <span class=\"k\">${<\/span><span class=\"nv\">COUPLING<\/span><span class=\"k\">}<\/span> &amp;\r\n\r\n<span class=\"c\"># Wait until scServer.scs is created<\/span>\r\n<span class=\"nv\">TIMEOUT<\/span><span class=\"o\">=<\/span>60\r\n<span class=\"k\">while<\/span> <span class=\"o\">[<\/span> ! 
-f scServer.scs -a <span class=\"nv\">$TIMEOUT<\/span> -gt 0 <span class=\"o\">]<\/span>; <span class=\"k\">do<\/span>\r\n\t<span class=\"nv\">TIMEOUT<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span>TIMEOUT-1<span class=\"k\">))<\/span>\r\n\tsleep 2\r\n<span class=\"k\">done<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> -f scServer.scs <span class=\"o\">]<\/span>; <span class=\"k\">then<\/span>\r\n\r\n\t<span class=\"c\"># Parse the data in scServer.scs<\/span>\r\n\treadarray JOB &lt; scServer.scs\r\n\t<span class=\"nv\">HOSTPORT<\/span><span class=\"o\">=(<\/span><span class=\"k\">${<\/span><span class=\"nv\">JOB<\/span><span class=\"p\">[0]\/\/@\/ <\/span><span class=\"k\">}<\/span><span class=\"o\">)<\/span>\r\n\r\n\t<span class=\"c\"># Run Fluent<\/span>\r\n\tfluent 3ddp -g -t<span class=\"k\">${<\/span><span class=\"nv\">NPF<\/span><span class=\"k\">}<\/span> -driver null -ssh -scport<span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">HOSTPORT<\/span><span class=\"p\">[0]<\/span><span class=\"k\">}<\/span> -schost<span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">HOSTPORT<\/span><span class=\"p\">[1]<\/span><span class=\"k\">}<\/span> -scname<span class=\"o\">=<\/span><span class=\"s2\">\"${JOB[4]}\"<\/span> &lt; <span class=\"k\">${<\/span><span class=\"nv\">JOURNAL<\/span><span class=\"k\">}<\/span> &gt; output.FLUENT &amp;\r\n\r\n\t<span class=\"c\"># Run Ansys<\/span>\r\n\tansys162 -b -scport<span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">HOSTPORT<\/span><span class=\"p\">[0]<\/span><span class=\"k\">}<\/span> -schost<span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">HOSTPORT<\/span><span class=\"p\">[1]<\/span><span class=\"k\">}<\/span> -scname<span class=\"o\">=<\/span><span class=\"s2\">\"${JOB[2]}\"<\/span> -i <span class=\"k\">${<\/span><span class=\"nv\">STRUCTURALDATA<\/span><span class=\"k\">}<\/span> -o 
output.ANSYS -np <span class=\"k\">${<\/span><span class=\"nv\">NPA<\/span><span class=\"k\">}<\/span>\r\n\r\n\t<span class=\"c\"># job done, copy everything back<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\t\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h4><span id=\"CFX\" class=\"mw-headline\">CFX<\/span><\/h4>\n<p>CFX script requesting 4 cores, on 1 node, -m selects to mail <b>b<\/b>egin and <b>e<\/b>nd messages and -M is the email address to send to. 
Requests 1 week walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N JobName<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=168:00:00<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n<span class=\"c\">#PBS -e output.err<\/span>\r\n<span class=\"c\">#PBS -o output.out<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n\r\n<span class=\"nv\">DEF<\/span><span class=\"o\">=<\/span>inputfile.def\r\n<span class=\"nv\">INI<\/span><span class=\"o\">=<\/span>inputfile.ini\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/ansys162\r\n\r\n<span class=\"c\"># get list of processors<\/span>\r\n<span class=\"nv\">PAR<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>sed -e <span class=\"s1\">'{:q;N;s\/\\n\/,\/g;t q}'<\/span> <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">})<\/span>\r\n\r\ncfx5solve -def <span class=\"k\">${<\/span><span class=\"nv\">DEF<\/span><span class=\"k\">}<\/span> -ini <span class=\"k\">${<\/span><span class=\"nv\">INI<\/span><span class=\"k\">}<\/span> -par-dist <span class=\"k\">${<\/span><span class=\"nv\">PAR<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"Abaqus\" class=\"mw-headline\">Abaqus<\/span><\/h3>\n<p>Abaqus script requesting 4 cores, on 1 node, -m selects to mail <b>b<\/b>egin 
and <b>e<\/b>nd messages and -M is the email address to send to. Uses system default walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=4:mpiprocs=4:scratch=true<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n\r\n<span class=\"c\"># the input file without the .inp extension<\/span>\r\n<span class=\"nv\">JOBNAME<\/span><span class=\"o\">=<\/span>xyz\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/abaqus\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors<\/span>\r\n<span class=\"nv\">np<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n\r\nabaqus <span class=\"nv\">job<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">JOBNAME<\/span><span class=\"k\">}<\/span> <span class=\"nv\">input<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">JOBNAME<\/span><span class=\"k\">}<\/span>.inp analysis <span class=\"nv\">cpus<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">np<\/span><span class=\"k\">}<\/span> <span class=\"nv\">scratch<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span> interactive\r\n<span class=\"nb\">wait<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span 
class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"R\" class=\"mw-headline\">R<\/span><\/h3>\n<p>R script requesting 1 node, -m selects to mail <b>b<\/b>egin and <b>e<\/b>nd messages and -M is the email address to send to. Uses system default walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -l select=1:ncpus=1<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">PBS_O_WORKDIR<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/R\r\n\r\nR CMD BATCH script.R\r\n<\/pre>\n<\/div>\n<h3><span id=\"CPMD\" class=\"mw-headline\">CPMD<\/span><\/h3>\n<p>CPMD script requesting 8 cores on 1 node, -N names the job &#8216;cpmd&#8217;, -m selects to mail <b>e<\/b>nd message and -M is the email address to send to. CPMD runs with MPI which needs to be told which nodes it may use. The list of nodes it may use is given in <code>$PBS_NODEFILE<\/code>. 
Uses system default walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N cpmd<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8<\/span>\r\n<span class=\"c\">#PBS -m e<\/span>\r\n<span class=\"c\">#PBS -M username@sun.ac.za<\/span>\r\n\r\nmodule load compilers\/gcc-4.8.2\r\nmodule load openmpi-x86_64\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">PBS_O_WORKDIR<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors<\/span>\r\n<span class=\"nv\">np<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n\r\nmpirun -np <span class=\"k\">${<\/span><span class=\"nv\">np<\/span><span class=\"k\">}<\/span> --hostfile <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> \/apps\/CPMD\/3.17.1\/cpmd.x xyz.inp &gt; xyz.out\r\n<\/pre>\n<\/div>\n<h3><span id=\"Gaussian\" class=\"mw-headline\">Gaussian<\/span><\/h3>\n<p>Gaussian has massive temporary files (.rwf file). Generally we don&#8217;t care about this file afterward, so this script doesn&#8217;t copy it from temporary storage after job completion. 
Requests 6 week walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N SomeHecticallyChemicalName<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8:mem=16GB:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=1008:00:00<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>input.cor\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nv\">TMP2<\/span><span class=\"o\">=<\/span>\/scratch2\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP2<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP2}\"<\/span> <span class=\"o\">]<\/span>; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create overflow temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"nb\">export <\/span><span class=\"nv\">GAUSS_SCRDIR<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># make sure input file has %RWF line for specifying temporary storage<\/span>\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> -z <span class=\"s2\">\"$(\/bin\/grep ^%RWF ${INPUT})\"<\/span> <span class=\"o\">]<\/span>; <span class=\"k\">then<\/span>\r\n\t\/bin\/sed -i <span class=\"s1\">'1s\/^\/%RWF\\n\/'<\/span> <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># assign 100GB of local temporary storage for every 4 CPUs<\/span>\r\n<span class=\"nv\">MAXTMP<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span> <span class=\"k\">$(<\/span>\/bin\/cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | \/usr\/bin\/wc -l<span class=\"k\">)<\/span> <span class=\"o\">*<\/span> <span class=\"m\">100<\/span> <span class=\"o\">\/<\/span> <span class=\"m\">4<\/span> <span class=\"k\">))<\/span>\r\n\r\n<span class=\"c\"># update input file to use local temporary storage<\/span>\r\n\/bin\/sed -i -E <span 
class=\"s2\">\"s|%RWF(.*)|%RWF=${TMP}\/,${MAXTMP}GB,${TMP2}\/1.rwf,500GB,${TMP2}\/2.rwf,500GB,${TMP2}\/3.rwf,500GB,${TMP2}\/4.rwf,500GB,${TMP2}\/,-1|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>\r\n\r\n. \/apps\/Gaussian\/09D\/g09\/bsd\/g09.profile\r\n\r\n\/apps\/Gaussian\/09D\/g09\/g09 <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span> &gt; output.log\r\n\r\n<span class=\"c\"># job done, copy everything except .rwf back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax --exclude<span class=\"o\">=<\/span>*.rwf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP2<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<p>This script also requires that the input file contains a line starting with <b>%RWF<\/b>. This is so that the script can update the input file to specify that only the first part of the RWF be written to the compute node&#8217;s local scratch space. Overflow is written to the scratch space on the storage server. Unfortunately the RWF files can grow in size to more than 1TB, and can fill the compute node&#8217;s scratch space, choking out other jobs and dying itself.<\/p>\n<h3><span id=\"pisoFOAM\" class=\"mw-headline\">pisoFOAM<\/span><\/h3>\n<p>pisoFOAM generates a lot of output, not all of which is useful. 
In this example we use <b>crontab<\/b> to schedule the deletion of unwanted output while the job runs. Requests 3 week walltime.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=504:00:00<\/span>\r\n<span class=\"c\">#PBS -m be<\/span>\r\n \r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"c\"># create a temporary directory in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\n\/bin\/mkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Temporary work dir: ${TMP}\"<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n \r\n<span class=\"c\"># start crontab, delete unwanted files every 6 hours<\/span>\r\n\/bin\/echo <span class=\"s2\">\"0 *\/6 * * * \/bin\/find ${TMP} -regextype posix-egrep -regex '(${TMP}\/processor[0-9]+)\/([^\/]*)\/((uniform\/.*)|ddt.*|phi.*|.*_0.*)' -exec rm {} \\\\;\"<\/span> | \/usr\/bin\/crontab\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors<\/span>\r\n<span class=\"nv\">np<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n\r\nmodule load compilers\/gcc-4.8.2\r\nmodule load openmpi\/1.6.5\r\n\r\n<span class=\"nb\">export <\/span><span class=\"nv\">MPI_BUFFER_SIZE<\/span><span class=\"o\">=<\/span>200000000\r\n \r\n<span class=\"nb\">export <\/span><span class=\"nv\">FOAM_INST_DIR<\/span><span class=\"o\">=<\/span>\/apps\/OpenFOAM\r\n<span class=\"nv\">foamDotFile<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">FOAM_INST_DIR<\/span><span class=\"k\">}<\/span>\/OpenFOAM-2.2.2\/etc\/bashrc\r\n<span class=\"o\">[<\/span> -f <span class=\"k\">${<\/span><span class=\"nv\">foamDotFile<\/span><span class=\"k\">}<\/span> <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> . 
<span class=\"k\">${<\/span><span class=\"nv\">foamDotFile<\/span><span class=\"k\">}<\/span>\r\n\r\nblockMesh\r\ndecomposePar\r\n \r\nmpirun -np <span class=\"k\">${<\/span><span class=\"nv\">np<\/span><span class=\"k\">}<\/span> pisoFoam -parallel &gt; <span class=\"k\">${<\/span><span class=\"nv\">PBS_O_WORKDIR<\/span><span class=\"k\">}<\/span>\/output.log\r\n \r\n<span class=\"c\"># remove crontab entry (assumes I only have one on this node)<\/span>\r\n\/usr\/bin\/crontab -r\r\n \r\n<span class=\"c\"># job done, copy everything back<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax --exclude <span class=\"s2\">\"*_0.gz\"<\/span> --exclude <span class=\"s2\">\"phi*.gz\"<\/span> --exclude <span class=\"s2\">\"ddt*.gz\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n \r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"MSC_Marc\" class=\"mw-headline\">MSC Marc<\/span><\/h3>\n<p>Marc script requesting 8 cores, on 1 node, -m selects to mail <b>e<\/b>nd message and -M is the email address to send to.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n<span class=\"c\">#PBS -N JobName<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=24:00:00<\/span>\r\n<span class=\"c\">#PBS -l license_marc=8<\/span>\r\n<span class=\"c\">#PBS -m e<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>inputfile\r\n\r\n<span class=\"c\"># make sure I'm the only one that can 
read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/marc\r\n\r\n<span class=\"c\"># get number of processors assigned<\/span>\r\n<span class=\"nv\">NPS<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>\/bin\/cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | \/usr\/bin\/wc -l<span class=\"k\">)<\/span>\r\n<span class=\"nv\">HOSTS<\/span><span class=\"o\">=<\/span>hosts.<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"o\">[<\/span> -f <span class=\"k\">${<\/span><span class=\"nv\">HOSTS<\/span><span class=\"k\">}<\/span> <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm <span class=\"k\">${<\/span><span class=\"nv\">HOSTS<\/span><span class=\"k\">}<\/span>\r\n<span class=\"c\"># create hosts file<\/span>\r\nuniq -c 
<span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | <span class=\"k\">while <\/span><span class=\"nb\">read <\/span>np host; <span class=\"k\">do<\/span>\r\n\t\/bin\/echo <span class=\"s2\">\"${host} ${np}\"<\/span> &gt;&gt; <span class=\"k\">${<\/span><span class=\"nv\">HOSTS<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">done<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"k\">${<\/span><span class=\"nv\">NPS<\/span><span class=\"k\">}<\/span> -gt 1 <span class=\"o\">]<\/span>; <span class=\"k\">then<\/span>\r\n\trun_marc -j <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span> -ver n -back n -ci n -cr n -nps <span class=\"k\">${<\/span><span class=\"nv\">NPS<\/span><span class=\"k\">}<\/span> -host <span class=\"k\">${<\/span><span class=\"nv\">HOSTS<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">else<\/span>\r\n\trun_marc -j <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span> -ver n -back n -ci n -cr n\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"mothur\" class=\"mw-headline\">mothur<\/span><\/h3>\n<p>mothur has massive data volumes, and therefore has to use local scratch space to avoid killing the file server. 
Requests 1 core on 1 node.<\/p>\n<p>mothur&#8217;s input can either be a file listing all the commands to run, or the commands can be given on the command line prefixed with a #.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -l select=1:ncpus=1:mpiprocs=1:scratch=true<\/span>\r\n<span class=\"c\">#PBS -m e<\/span>\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> ! -d <span class=\"s2\">\"${TMP}\"<\/span> <span class=\"o\">]<\/span> ; <span class=\"k\">then<\/span>\r\n\t<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Cannot create temporary directory. 
Disk probably full.\"<\/span>\r\n\t<span class=\"nb\">exit <\/span>1\r\n<span class=\"k\">fi<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\/ <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\nmodule load app\/mothur\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors<\/span>\r\n<span class=\"nv\">np<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span>\r\n\r\nmothur inputfile\r\n\r\n<span class=\"c\"># could also put the commands on the command line<\/span>\r\n<span class=\"c\">#mothur \"#cluster.split(column=file.dist, name=file.names, large=T, processors=${np})\"<\/span>\r\n\r\n<span class=\"c\"># job done, copy everything back <\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"Hadoop\" class=\"mw-headline\">Hadoop<\/span><\/h3>\n<p>Hadoop is useful for sorting through massive amounts of data. In this example we read the input data into a distributed HDFS, and do a map\/reduce. 
Upon completion the output is copied out of the HDFS to central storage. <code>scratch<\/code> nodes are requested due to their large scratch space. The input and output data together should not exceed 1.5TB per node, so we request 1 node for every 750GB of input data. In this example we request 6 nodes for 4TB of input data.<\/p>\n<p><b>Java example<\/b><\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -V<\/span>\r\n<span class=\"c\">#PBS -l select=6:ncpus=1:scratch=true<\/span>\r\n<span class=\"c\">#PBS -N hadoopDedupe<\/span>\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n\r\n<span class=\"c\"># create a temporary directory in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/logs\r\n\r\n<span class=\"nv\">JAR<\/span><span class=\"o\">=<\/span>dedupe.jar\r\n<span class=\"nv\">CLASS<\/span><span class=\"o\">=<\/span>za.ac.sun.hpc.dedupe\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\/input\"<\/span>\r\n<span class=\"nv\">OUTPUT<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\r\n\r\n<span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"o\">=<\/span>\/apps\/hadoop\/2.4.1\r\n<span class=\"nv\">JAVA_HOME<\/span><span class=\"o\">=<\/span>\/usr\/lib\/jvm\/java\r\n<span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\/conf\"<\/span>\r\n\r\n<span class=\"c\"># copy the class to ${TMP}<\/span>\r\ncp <span class=\"s2\">\"${HADOOP_PREFIX}\/common\/${JAR}\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span 
class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># create Hadoop configs<\/span>\r\ncp -a <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/conf <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"nv\">MASTER<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span>\r\nuniq <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> &gt; <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/slaves\r\n<span class=\"nb\">echo<\/span> <span class=\"k\">${<\/span><span class=\"nv\">MASTER<\/span><span class=\"k\">}<\/span> &gt; <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/masters\r\n\r\nsed -i <span class=\"s2\">\"s|export JAVA_HOME=.*|export JAVA_HOME=${JAVA_HOME}|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\nsed -i <span class=\"s2\">\"s|&lt;value&gt;\/scratch\/.*&lt;\/value&gt;|&lt;value&gt;\/scratch\/${PBS_JOBID}&lt;\/value&gt;|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/<span class=\"o\">{<\/span>hdfs,core<span class=\"o\">}<\/span>-site.xml\r\nsed -i <span class=\"s2\">\"s|&lt;value&gt;.*:50090&lt;\/value&gt;|&lt;value&gt;${MASTER}:50090&lt;\/value&gt;|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/<span class=\"o\">{<\/span>hdfs,core<span class=\"o\">}<\/span>-site.xml\r\nsed -i <span class=\"s2\">\"s|hdfs:\/\/.*:|hdfs:\/\/${MASTER}:|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/core-site.xml\r\nsed -i <span class=\"s2\">\"s|.*export HADOOP_LOG_DIR.*|export HADOOP_LOG_DIR=${TMP}\/logs|g\"<\/span> <span 
class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\nsed -i <span class=\"s2\">\"s|.*export HADOOP_PID_DIR.*|export HADOOP_PID_DIR=${TMP}|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\n\r\n<span class=\"c\"># setup Hadoop services<\/span>\r\n. <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\n\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs namenode -format\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/sbin\/start-dfs.sh\r\n\r\n<span class=\"c\"># import data<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -mkdir \/user\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -mkdir \/user\/<span class=\"k\">${<\/span><span class=\"nv\">USER<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -put <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span> input\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># run hadoop job<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hadoop jar <span class=\"k\">${<\/span><span class=\"nv\">JAR<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">CLASS<\/span><span class=\"k\">}<\/span> input output\r\n\r\n<span class=\"c\"># retrieve output from Hadoop<\/span>\r\nmkdir -p <span class=\"s2\">\"${OUTPUT}\"<\/span>\r\n<span class=\"k\">${<\/span><span 
class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -get output <span class=\"s2\">\"${OUTPUT}\"<\/span>\r\n\r\n<span class=\"c\"># stop Hadoop services<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/sbin\/stop-dfs.sh\r\n\r\n<span class=\"c\"># retrieve logs<\/span>\r\ncp -a <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/logs <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\r\n\r\n<span class=\"c\"># clear HDFS directories on all slaves<\/span>\r\ncat <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/slaves | <span class=\"k\">while <\/span><span class=\"nb\">read <\/span>slave; <span class=\"k\">do<\/span>\r\n    ssh -n <span class=\"k\">${<\/span><span class=\"nv\">slave<\/span><span class=\"k\">}<\/span> <span class=\"s2\">\"rm -rf ${TMP}\"<\/span>\r\n<span class=\"k\">done<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<p><b>Third-party script example<\/b><\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -V<\/span>\r\n<span class=\"c\">#PBS -l select=6:ncpus=1:scratch=true<\/span>\r\n<span class=\"c\">#PBS -N hadoopDedupe<\/span>\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n\r\n<span class=\"c\"># create a temporary directory in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span 
class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/logs\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\/input\"<\/span>\r\n<span class=\"nv\">OUTPUT<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\r\n<span class=\"nv\">MAPPER<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"mapper.py\"<\/span>\r\n<span class=\"nv\">REDUCER<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"reducer.py\"<\/span>\r\n\r\n<span class=\"c\"># copy the mapper and reducer to ${TMP}<\/span>\r\ncp <span class=\"s2\">\"${PBS_O_WORKDIR}\/${MAPPER}\"<\/span> <span class=\"s2\">\"${PBS_O_WORKDIR}\/${REDUCER}\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"o\">=<\/span>\/apps\/hadoop\/2.4.1\r\n<span class=\"nv\">JAVA_HOME<\/span><span class=\"o\">=<\/span>\/usr\/lib\/jvm\/java\r\n<span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"${PBS_O_WORKDIR}\/conf\"<\/span>\r\n\r\n<span class=\"c\"># create Hadoop configs<\/span>\r\ncp -a <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/conf <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"nv\">MASTER<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span>\r\nuniq <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> &gt; <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/slaves\r\n<span class=\"nb\">echo<\/span> <span class=\"k\">${<\/span><span class=\"nv\">MASTER<\/span><span class=\"k\">}<\/span> &gt; <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/masters\r\n\r\nsed -i <span 
class=\"s2\">\"s|export JAVA_HOME=.*|export JAVA_HOME=${JAVA_HOME}|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\nsed -i <span class=\"s2\">\"s|&lt;value&gt;\/scratch\/.*&lt;\/value&gt;|&lt;value&gt;\/scratch\/${PBS_JOBID}&lt;\/value&gt;|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/<span class=\"o\">{<\/span>hdfs,core<span class=\"o\">}<\/span>-site.xml\r\nsed -i <span class=\"s2\">\"s|&lt;value&gt;.*:50090&lt;\/value&gt;|&lt;value&gt;${MASTER}:50090&lt;\/value&gt;|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/<span class=\"o\">{<\/span>hdfs,core<span class=\"o\">}<\/span>-site.xml\r\nsed -i <span class=\"s2\">\"s|hdfs:\/\/.*:|hdfs:\/\/${MASTER}:|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/core-site.xml\r\nsed -i <span class=\"s2\">\"s|.*export HADOOP_LOG_DIR.*|export HADOOP_LOG_DIR=${TMP}\/logs|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\nsed -i <span class=\"s2\">\"s|.*export HADOOP_PID_DIR.*|export HADOOP_PID_DIR=${TMP}|g\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\n\r\n<span class=\"c\"># setup Hadoop services<\/span>\r\n. 
<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/hadoop-env.sh\r\n\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs namenode -format\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/sbin\/start-dfs.sh\r\n\r\n<span class=\"c\"># import data<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -mkdir \/user\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -mkdir \/user\/<span class=\"k\">${<\/span><span class=\"nv\">USER<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -put <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span> input\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># run hadoop job<\/span>\r\n<span class=\"nv\">STREAM<\/span><span class=\"o\">=<\/span><span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/share\/hadoop\/tools\/lib\/hadoop-streaming-2.4.1.jar\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hadoop jar <span class=\"k\">${<\/span><span class=\"nv\">STREAM<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">OPTIONS<\/span><span class=\"k\">}<\/span> -files <span class=\"k\">${<\/span><span class=\"nv\">MAPPER<\/span><span class=\"k\">}<\/span>,<span class=\"k\">${<\/span><span class=\"nv\">REDUCER<\/span><span class=\"k\">}<\/span> -mapper <span class=\"k\">${<\/span><span class=\"nv\">MAPPER<\/span><span class=\"k\">}<\/span> -reducer <span class=\"k\">${<\/span><span 
class=\"nv\">REDUCER<\/span><span class=\"k\">}<\/span> -input input -output output\r\n\r\n<span class=\"c\"># retrieve output from Hadoop<\/span>\r\nmkdir -p <span class=\"s2\">\"${OUTPUT}\"<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/bin\/hdfs dfs -get output <span class=\"s2\">\"${OUTPUT}\"<\/span>\r\n\r\n<span class=\"c\"># stop Hadoop services<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">HADOOP_PREFIX<\/span><span class=\"k\">}<\/span>\/sbin\/stop-dfs.sh\r\n\r\n<span class=\"c\"># retrieve logs<\/span>\r\ncp -a <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/logs <span class=\"s2\">\"${PBS_O_WORKDIR}\"<\/span>\r\n\r\n<span class=\"c\"># clear HDFS directories on all slaves<\/span>\r\ncat <span class=\"k\">${<\/span><span class=\"nv\">HADOOP_CONF_DIR<\/span><span class=\"k\">}<\/span>\/slaves | <span class=\"k\">while <\/span><span class=\"nb\">read <\/span>slave; <span class=\"k\">do<\/span>\r\n    ssh -n <span class=\"k\">${<\/span><span class=\"nv\">slave<\/span><span class=\"k\">}<\/span> <span class=\"s2\">\"rm -rf ${TMP}\"<\/span>\r\n<span class=\"k\">done<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h3><span id=\"Numeca\" class=\"mw-headline\">Numeca<\/span><\/h3>\n<p>This script assumes you have 4 Numeca files available for your project. 
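All of the submit scripts on this page share the same scratch-space pattern: create a per-job directory under /scratch named after ${PBS_JOBID}, restrict permissions with umask, do the work there, and delete the directory only if the preceding step succeeded. The following standalone sketch illustrates that pattern; mktemp -d stands in for /scratch and the job id is fabricated, since PBS normally provides PBS_JOBID.

```shell
# Sketch of the per-job scratch pattern used by the PBS scripts above.
# PBS normally sets PBS_JOBID; the value below is fabricated.
PBS_JOBID="4681.mn01"
SCRATCH=$(mktemp -d)          # stand-in for /scratch
TMP="${SCRATCH}/${PBS_JOBID}"

umask 0077                    # make sure I'm the only one that can read my output
mkdir -p "${TMP}"

# ... the real job would run here and rsync its results back ...
touch "${TMP}/result.txt"     # placeholder for real output

# delete the scratch directory only if the previous step succeeded
[ $? -eq 0 ] && rm -rf "${TMP}"
```

Guarding the rm -rf on $? means that if the copy-back fails, the scratch directory is left in place for manual recovery.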
If your project is named <i>proj1<\/i>, the required files are <i>proj1.iec<\/i>, <i>proj1.igg<\/i>, <i>proj1.bcs<\/i> and <i>proj1.cgns<\/i>.<\/p>\n<p>The script requests 4 hours of walltime and uses 8 cores on a host with scratch space.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -N proj1<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=04:00:00<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>proj1\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"c\"># create a temporary directory with the job id as name in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Temporary work dir: ${TMP}\"<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"nv\">NUMECA<\/span><span class=\"o\">=<\/span>\/apps\/numeca\/bin\r\n<span class=\"nv\">VERSION<\/span><span class=\"o\">=<\/span>90_3\r\n\r\n<span class=\"c\"># create hosts list<\/span>\r\n<span class=\"nv\">TMPH<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>\/bin\/mktemp<span class=\"k\">)<\/span>\r\n\/usr\/bin\/tail -n +2 
<span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | \/usr\/bin\/uniq -c | <span class=\"k\">while <\/span><span class=\"nb\">read <\/span>np host; <span class=\"k\">do<\/span>\r\n    \/bin\/echo <span class=\"s2\">\"${host} ${np}\"<\/span> &gt;&gt; <span class=\"k\">${<\/span><span class=\"nv\">TMPH<\/span><span class=\"k\">}<\/span>\r\n<span class=\"k\">done<\/span>\r\n<span class=\"nv\">NHOSTS<\/span><span class=\"o\">=<\/span><span class=\"k\">$(<\/span>\/bin\/cat <span class=\"k\">${<\/span><span class=\"nv\">TMPH<\/span><span class=\"k\">}<\/span> | \/usr\/bin\/wc -l<span class=\"k\">)<\/span>\r\n<span class=\"nv\">LHOSTS<\/span><span class=\"o\">=<\/span><span class=\"k\">$(while <\/span><span class=\"nb\">read <\/span>line; <span class=\"k\">do <\/span><span class=\"nb\">echo<\/span> -n <span class=\"s2\">\"${line} \"<\/span>; <span class=\"k\">done<\/span> &lt; <span class=\"k\">${<\/span><span class=\"nv\">TMPH<\/span><span class=\"k\">})<\/span>\r\n\/bin\/rm <span class=\"k\">${<\/span><span class=\"nv\">TMPH<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># Create .run file<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/fine -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -batch <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>.iec <span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>.igg <span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run\r\n\r\n<span class=\"c\"># Set up parallel run<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/fine -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -batch -parallel <span 
class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run <span class=\"k\">${<\/span><span class=\"nv\">NHOSTS<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">LHOSTS<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"c\"># Start solver<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/euranusTurbo_parallel <span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run -steering <span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.steering -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -p4pg <span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.p4pg\r\n\r\n<span class=\"c\"># job done, copy everything back<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<p>This script assumes you have 3 Numeca files available for your project. 
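The host-list construction in the Numeca script above can be exercised on its own. In this sketch, PBS_NODEFILE is a fabricated file (PBS normally provides it, one line per allocated core), and the final join uses tr rather than the script's read loop, so the "host ncpus" entries stay space-separated.

```shell
# Sketch of building a "host ncpus" list from a PBS node file.
# PBS normally provides PBS_NODEFILE; we fabricate one for illustration.
PBS_NODEFILE=$(mktemp)
printf 'node01\nnode01\nnode01\nnode02\nnode02\n' > "${PBS_NODEFILE}"

TMPH=$(mktemp)
# skip the first entry (reserved for the master), then count cores per host
tail -n +2 "${PBS_NODEFILE}" | uniq -c | while read np host; do
    echo "${host} ${np}" >> "${TMPH}"
done

NHOSTS=$(wc -l < "${TMPH}")        # number of distinct hosts
LHOSTS=$(tr '\n' ' ' < "${TMPH}")  # flat "host np host np ..." list
rm -f "${PBS_NODEFILE}" "${TMPH}"
```

With the fabricated node file, NHOSTS comes out as 2 and LHOSTS contains "node01 2 node02 2", which is the shape the solver's -parallel setup expects.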
If your project is named <i>proj1<\/i>, the required files are <i>proj1.iec<\/i>, <i>proj1.trb<\/i> and <i>proj1.geomTurbo<\/i>.<\/p>\n<p>The script requests 4 hours of walltime and uses 8 cores on a host with scratch space.<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\">\n<pre><span class=\"c\">#!\/bin\/bash<\/span>\r\n\r\n<span class=\"c\">#PBS -N proj1<\/span>\r\n<span class=\"c\">#PBS -l select=1:ncpus=8:mpiprocs=8:scratch=true<\/span>\r\n<span class=\"c\">#PBS -l walltime=04:00:00<\/span>\r\n\r\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span>proj1\r\n\r\n<span class=\"c\"># make sure I'm the only one that can read my output<\/span>\r\n<span class=\"nb\">umask <\/span>0077\r\n<span class=\"c\"># create a temporary directory with the job id as name in \/scratch<\/span>\r\n<span class=\"nv\">TMP<\/span><span class=\"o\">=<\/span>\/scratch\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>\r\nmkdir -p <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Temporary work dir: ${TMP}\"<\/span>\r\n\r\n<span class=\"c\"># copy the input files to ${TMP}<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${PBS_O_WORKDIR}\/ to ${TMP}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/\r\n\r\n<span class=\"c\"># Automatically calculate the number of processors: one less than requested, since one core is reserved for the master process<\/span>\r\n<span class=\"nv\">NP<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span> <span class=\"k\">$(<\/span>cat <span class=\"k\">${<\/span><span class=\"nv\">PBS_NODEFILE<\/span><span class=\"k\">}<\/span> | wc -l<span class=\"k\">)<\/span> <span class=\"o\">-<\/span> <span class=\"m\">1<\/span><span class=\"k\">))<\/span>\r\n\r\n<span 
class=\"nb\">cd<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n\r\n<span class=\"nv\">NUMECA<\/span><span class=\"o\">=<\/span>\/apps\/numeca\/bin\r\n<span class=\"nv\">VERSION<\/span><span class=\"o\">=<\/span>101\r\n\r\n<span class=\"c\"># inputs .trb, .geomTurbo<\/span>\r\n<span class=\"c\"># outputs .bcs, .cgns, .igg<\/span>\r\n\/usr\/bin\/xvfb-run -d <span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/igg -print -batch -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -autogrid5 -trb <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>.trb -geomTurbo <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>.geomTurbo -mesh <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.igg\r\n\r\n<span class=\"c\"># inputs .iec, .igg<\/span>\r\n<span class=\"c\"># outputs .run<\/span>\r\n\/usr\/bin\/xvfb-run -d <span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/fine -print -batch -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -project <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">INPUT<\/span><span class=\"k\">}<\/span>.iec -mesh <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.igg -computation <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span 
class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run\r\n\r\n<span class=\"c\"># inputs .run<\/span>\r\n<span class=\"c\"># outputs .p4pg, .batch<\/span>\r\n\/usr\/bin\/xvfb-run -d <span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/fine -print -batch -niversion <span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> -parallel -computation <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run -nproc <span class=\"k\">${<\/span><span class=\"nv\">NP<\/span><span class=\"k\">}<\/span> -nbint 128 -nbreal 128\r\n\r\n<span class=\"c\"># inputs .run, .p4pg<\/span>\r\n<span class=\"k\">${<\/span><span class=\"nv\">NUMECA<\/span><span class=\"k\">}<\/span>\/euranusTurbo_parallel<span class=\"k\">${<\/span><span class=\"nv\">VERSION<\/span><span class=\"k\">}<\/span> <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.run -steering <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.steering -p4pg <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/<span class=\"k\">${<\/span><span class=\"nv\">PBS_JOBID<\/span><span class=\"k\">}<\/span>.p4pg\r\n\r\n<span class=\"c\"># job done, copy everything back<\/span>\r\n<span class=\"nb\">echo<\/span> <span class=\"s2\">\"Copying from ${TMP}\/ to ${PBS_O_WORKDIR}\/\"<\/span>\r\n\/usr\/bin\/rsync -vax <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\/ <span class=\"s2\">\"${PBS_O_WORKDIR}\/\"<\/span>\r\n\r\n<span class=\"c\"># delete my temporary 
files<\/span>\r\n<span class=\"o\">[<\/span> <span class=\"nv\">$?<\/span> -eq 0 <span class=\"o\">]<\/span> <span class=\"o\">&amp;&amp;<\/span> \/bin\/rm -rf <span class=\"k\">${<\/span><span class=\"nv\">TMP<\/span><span class=\"k\">}<\/span>\r\n<\/pre>\n<\/div>\n<h2><span id=\"Programs_that_handle_job_submission_differently\" class=\"mw-headline\">Programs that handle job submission differently<\/span><\/h2>\n<h3><span id=\"MATLAB\" class=\"mw-headline\">MATLAB<\/span><\/h3>\n<p>With MATLAB&#8217;s Parallel Computing Toolbox (PCT), it&#8217;s possible to submit your MATLAB code directly from your desktop to the HPC without writing submit scripts and submitting the job manually. See <a class=\"external text\" href=\"http:\/\/www.mathworks.com\/products\/parallel-computing\/\" rel=\"nofollow\">MathWorks<\/a> for further details.<\/p>\n<p>The HPC has a license to allow the use of 16 cores by MATLAB. MATLAB R2015a and all standard toolboxes are installed on the HPC, and any MATLAB product you are licensed for will be able to run on the HPC.<\/p>\n<p>To be able to use the HPC for MATLAB, you will require a Parallel Computing Toolbox license on your desktop.<\/p>\n<h4><span id=\"Setup\" class=\"mw-headline\">Setup<\/span><\/h4>\n<ol>\n<li>Install the required scripts for a generic PBS cluster\n<ol>\n<li>Copy all the files from <code><i>MATLABROOT<\/i>\\toolbox\\distcomp\\examples\\integration\\pbs\\nonshared<\/code> to <code><i>MATLABROOT<\/i>\\toolbox\\local<\/code>. 
<i>MATLABROOT<\/i> is the location where you installed MATLAB on your machine, most probably in <code>C:\\Program Files\\MATLAB\\R2015a<\/code> or <code>\/usr\/local\/MATLAB\/R2015a<\/code>.<\/li>\n<li>Edit <code><i>MATLABROOT<\/i>\\toolbox\\local\\independentSubmitFcn.m<\/code>\n<ul>\n<li>Change line 122 by adding <code>-l walltime=24:00:00<\/code><br \/>\n<code>additionalSubmitArgs = '-l walltime=24:00:00';<\/code><\/li>\n<\/ul>\n<\/li>\n<li>Edit <code><i>MATLABROOT<\/i>\\toolbox\\local\\communicatingSubmitFcn.m<\/code>\n<ul>\n<li>Change line 117 by adding <code>-l walltime=24:00:00<\/code><br \/>\n<code>additionalSubmitArgs = sprintf('-l select=%d:ncpus=%d -l walltime=24:00:00', numberOfNodes, procsPerNode);<\/code><\/li>\n<\/ul>\n<dl>\n<dd>The two changes are required to increase the default <a title=\"Main Page\" href=\"https:\/\/www0.sun.ac.za\/hpc\/index.php?title=Main_Page#Job_priorities\">walltime<\/a> on the HPC.<\/dd>\n<\/dl>\n<\/li>\n<\/ol>\n<\/li>\n<li>Force the use of FQDN or IP in hostname lookup for <i>parpool<\/i>\n<ol>\n<li>Create (or edit if it already exists) <code><i>MATLABROOT<\/i>\\toolbox\\local\\startup.m<\/code>, and add\n<dl>\n<dd><code>pctconfig('hostname', '<b>IP or hostname<\/b>');<\/code>\n<dl>\n<dd>replace <b>IP or hostname<\/b> with your machine&#8217;s IP or hostname (<code>ip addr list<\/code> on Linux, <code>ipconfig<\/code> in a <i>Command Prompt<\/i> on Windows)<\/dd>\n<\/dl>\n<\/dd>\n<\/dl>\n<\/li>\n<li>Restart MATLAB to apply the change, or run it manually in the <i>Command Window<\/i><\/li>\n<\/ol>\n<\/li>\n<li>Create a cluster profile\n<dl>\n<dd>If your PCT is installed and licensed correctly, you should see a dropdown named Parallel in your toolbar.<\/dd>\n<\/dl>\n<ol>\n<li>Open the <i>Parallel<\/i> dropdown, and select <i>Manage Cluster Profiles&#8230;<\/i> (<a class=\"internal\" title=\"MATLABPCT1.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/3\/39\/MATLABPCT1.png\">screenshot<\/a>)<\/li>\n<li>Add a new 
<b>Generic<\/b> custom 3rd party cluster profile (<a class=\"internal\" title=\"MATLABPCT2.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/2\/24\/MATLABPCT2.png\">screenshot<\/a>)<\/li>\n<li>Rename the new cluster profile to &#8216;HPC1&#8217; by right-clicking on it<\/li>\n<li>Set the following values (<a class=\"internal\" title=\"MATLABPCT3.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/0\/00\/MATLABPCT3.png\">screenshot<\/a>, <a class=\"internal\" title=\"MATLABPCT4.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/e\/e0\/MATLABPCT4.png\">screenshot<\/a>, <a class=\"internal\" title=\"MATLABPCT5.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/f\/f1\/MATLABPCT5.png\">screenshot<\/a>)\n<ul>\n<li><b>Description<\/b>: HPC1<\/li>\n<li><b>NumWorkers<\/b>: 16<\/li>\n<li><b>ClusterMatlabRoot<\/b>: \/apps\/MATLAB\/R2015a<\/li>\n<li><b>IndependentSubmitFcn<\/b>: {@independentSubmitFcn, &#8216;hpc1.sun.ac.za&#8217;, &#8216;\/scratch2\/<b>user<\/b>&#8217;}\n<dl>\n<dd>replace <b>user<\/b> with your own username<\/dd>\n<\/dl>\n<\/li>\n<li><b>CommunicatingSubmitFcn<\/b>: {@communicatingSubmitFcn, &#8216;hpc1.sun.ac.za&#8217;, &#8216;\/scratch2\/<b>user<\/b>&#8217;}\n<dl>\n<dd>replace <b>user<\/b> with your own username<\/dd>\n<\/dl>\n<\/li>\n<li><b>OperatingSystem<\/b>: unix<\/li>\n<li><b>HasSharedFilesystem<\/b>: false<\/li>\n<li><b>GetJobStateFcn<\/b>: @getJobStateFcn<\/li>\n<li><b>DeleteJobFcn<\/b>: @deleteJobFcn<\/li>\n<\/ul>\n<dl>\n<dt>All other values can be left at their default (or empty) values.<\/dt>\n<\/dl>\n<\/li>\n<li>Select the &#8216;Validation Results&#8217; tab and click <i>Validate<\/i> (<a class=\"internal\" title=\"MATLABPCT6.png\" href=\"https:\/\/www0.sun.ac.za\/hpc\/images\/6\/61\/MATLABPCT6.png\">screenshot<\/a>)\n<dl>\n<dd>You will be prompted for your HPC username. 
When prompted for an identity file, select <b>No<\/b> if you don&#8217;t know what it is.<\/dd>\n<dd>Depending on how busy the HPC is, validation should complete in 10 to 30 minutes.<\/dd>\n<dt>If the last step fails, your hostname is most probably incorrectly set up.<\/dt>\n<\/dl>\n<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n","protected":false}}