- Generate an SSH key
- Run Maestro
- Schrodinger – Launching tasks
- Schrodinger Glide Batch Submission
- Batch Glide Jobs
- Batch LigPrep Jobs
Set up environment
Since Schrodinger working files are large and user home directories are small, we need to set Schrodinger environment variables that relocate Schrodinger's job database and temporary files.
Login to hpcmn submit node
$ ssh -X userid@hpcmn.sandiego.edu
Set up your .bash_profile. You should add these lines to your .bash_profile to automatically load what's in your .bashrc.
Go to your home directory
$ cd
Show working directory
$ pwd
Edit your .bash_profile. If you don't already have one, this will create it. nano is a basic editor. Hit Ctrl-o to save from within nano. Hit Ctrl-x to exit the nano editor.
Add the lines to your .bash_profile
$ nano .bash_profile
if [ -e "${HOME}/.bashrc" ]; then
    source "${HOME}/.bashrc"
fi
Set up your .bashrc for Schrodinger.
$ cd
Add these lines to your .bashrc. Replace USER in lines below with your vega user directory name.
$ nano .bashrc
if [ -d "/vega" ]; then
    # Yeti
    SCHRODINGER_JOBDB2=/vega/stock/users/USER/.jobdb2
    SCHRODINGER_TMPDIR=/yeti/tmp
fi
export SCHRODINGER_JOBDB2
export SCHRODINGER_TMPDIR
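After editing, you can reload your .bashrc and print the two variables to confirm they point where you expect; this is just a sanity check using the paths configured above:
$ source ~/.bashrc
$ echo "$SCHRODINGER_JOBDB2"
$ echo "$SCHRODINGER_TMPDIR"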
Generate an SSH key
Creating a passwordless SSH key is necessary; it does the following:
- Enables you to submit jobs to the Saber1 scheduler from within an interactive job.
- Enables multi-node jobs to communicate across compute nodes
1. Generate a public/private RSA key pair on a host of your choice, whose home directory is shared with the remote hosts that you want to run jobs on. Type the following commands while connected to yeti:
$ cd ~/.ssh
$ ssh-keygen -t rsa
Note: When asked for a passphrase do not enter one; just press ENTER. If you specify a passphrase it defeats the purpose of configuring passwordless ssh.
2. Now add your public key to the list of keys allowed to log in to your account:
$ cat id_rsa.pub >> authorized_keys
$ cat id_rsa.pub >> authorized_keys2
3. Remove your known_hosts file:
$ rm known_hosts*
This is necessary so that the new RSA key-pair mechanism is used for every host. Otherwise, hosts to which you previously connected using passwords might not use the new system automatically.
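To confirm that passwordless login now works, you can SSH to the login node and run a simple command; it should complete without asking for a password (this assumes your home directory, and therefore ~/.ssh, is shared with that node as described above):
$ ssh hpcmn.sandiego.edu hostname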
Run Maestro
Login
$ ssh -X userid@hpcmn.sandiego.edu
Start an interactive job
Note: for typical use, you can use the default of 1 processor on 1 node. From within this job you can launch completely separate, new jobs to Saber1. You may, if you wish, exit your interactive job after submitting tasks to Saber1. To see the results of jobs submitted interactively, just launch another interactive job.
$ qsub -I -W group_list=saber1stock -l walltime=04:00:00,mem=4000mb -X
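If you do need more than the single default processor for interactive work, you can ask for it in the resource list. This is only a sketch; the ppn value and memory below are example numbers, not site requirements:
$ qsub -I -W group_list=saber1stock -l nodes=1:ppn=4,walltime=04:00:00,mem=4000mb -X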
Load schrodinger into environment
$ module load schrodinger
Change to your vega user directory. This will set your default working directory for Schrodinger.
$ cd /vega/stock/users/USERNAME
Launch maestro
$ maestro
Schrodinger – Launching tasks
When ready to submit a job from within maestro, for example a Desmond MD job:
Click on the gear icon next to the Run button to select a host entry
If you select a “local” option, the job will run with the resources you requested for your interactive job via the qsub command. You should only select the local option for shorter tasks that can complete within a few minutes, such as minimization.
For longer tasks that you don’t need to attend to immediately, you should submit a batch job back to Saber1. To do that, choose a “Saber1” entry such as:
- Select one of the following saber1 entries.
- Remember to specify the total # of processors.
- saber1-16 (16) Total: 16 processors
- saber2-16 (32) Total: 32 processors
- saber4-16 (64) Total: 64 processors
- saber1-gpu1 (1,1) Total: 1 GPU
For example, choosing the saber2-16 entry will submit a completely new and unrelated job to the Saber1 scheduler, requesting 2 servers with 16 processors per server, a total of 32 processors.
Note: the number of processors you can request may be limited by the number of licenses that are available.
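If you suspect jobs are waiting on licenses, you can query the license server directly. The licadmin utility ships with the Schrodinger suite, and its STAT report lists how many tokens of each feature are checked out; this assumes the cluster installation is configured to reach the site license server:
$ "${SCHRODINGER}/licadmin" STAT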
Also note: when requesting more resources, such as 4 nodes versus 2, your job may have to spend more time waiting in the queue until sufficient resources become available.
Using a GPU with Desmond can yield a 30- to 80-fold speedup over a single CPU core, so it is worth trying by selecting saber1-gpu1.
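To confirm that a GPU is actually visible from a GPU job before starting Desmond, you can run nvidia-smi from within that job; this assumes the NVIDIA driver utilities are installed on the GPU nodes, which is typical:
$ nvidia-smi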
After you submit the Saber1 job, open up another terminal window and run qstat to search for the new job, replacing userid below with your actual userid.
$ qstat -u USERID
If your job is in the Q state, meaning it’s queued, you can check the status of the job for more information. Replace JOBID below with your actual jobid #.
$ checkjob -v JOBID
Schrodinger Glide Batch Submission
Note: text after $ denotes what is typed into the terminal command line
Batch Glide Jobs
$ ssh -X 'your userid'@hpcmn.sandiego.edu
$ cd /vega/stock/users/'you'
$ qsub -q interactive -I -W group_list=saber1stock -l walltime=04:00:00,mem=4000mb -X
$ module load schrodinger/2015-3
$ maestro -SGL
- Set up the Glide docking job (choose grid, ligands, force field, etc.)
- Click on the cog to change the job name and choose the number of subjobs, then click OK (not Run)
- Click on the arrow next to the cog and click WRITE
- In Fetch, make sure a folder with the name of your job has been written and contains 'job name'.in and 'job name'.sh
- Launch another terminal window and ssh into Saber1
- cd into the job directory (e.g., $ cd /vega/stock/users/'you'/'job name')
$ nano 'job name'.sh
- In this window, delete the line of script and paste in the following, modified for your job:
- Modify the amount of walltime and/or memory (the maximum is shown below)
- Change the INPUT file path to your directory
#!/bin/sh
# Torque directives
#PBS -W group_list=saber1stock
#PBS -l nodes=1:ppn=16,walltime=48:00:00,mem=60gb
#PBS -M 'your userid'@hpcmn.sandiego.edu
#PBS -m abe
#PBS -V
# Set output and error directories
#PBS -o localhost:$PBS_O_WORKDIR/
#PBS -e localhost:$PBS_O_WORKDIR/

module load schrodinger/2015-3
export SCHRODINGER_TMPDIR=/hpcmn/tmp
INPUT=/vega/stock/users/'you'/'job name'/'job name'.in
"${SCHRODINGER}/glide" -WAIT -OVERWRITE -HOST "localhost:16" -NJOBS 16 "${INPUT}"
- Hit Ctrl-x to exit nano
- Hit y to save the changes, then hit Enter to write the file
$ qsub 'job name'.sh
- a job number should be generated
- To monitor the job, use $ qstat -u 'your userid'
- If you need to delete this job, use $ qdel 'job number'
- To make sure the shell script has been modified correctly, use $ cat 'job name'.sh
- To make sure the input file has the correct information, use $ cat 'job name'.in (an illustrative example of such a file is shown below)
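For reference, a Glide docking input file written by Maestro is a short list of keyword lines. The sketch below is only illustrative: the grid and ligand file names are placeholders and SP precision is just one possible setting; use whatever Maestro actually wrote for your job.
GRIDFILE    /vega/stock/users/'you'/'job name'/glide-grid.zip
LIGANDFILE  /vega/stock/users/'you'/'job name'/ligands.maegz
PRECISION   SP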
Batch LigPrep Jobs
These are set up the same way as Glide jobs: write the input and shell script files from Maestro, then modify the shell script as below:
#!/bin/sh
# Torque directives
#PBS -W group_list=saber1stock
#PBS -l nodes=1:ppn=16,walltime=48:00:00,mem=60gb
#PBS -M 'your userid'@hpcmn.sandiego.edu
#PBS -m abe
#PBS -V
# Set output and error directories
#PBS -o localhost:$PBS_O_WORKDIR/
#PBS -e localhost:$PBS_O_WORKDIR/

module load schrodinger/2015-3
export SCHRODINGER_TMPDIR=/hpcmn/tmp
INPUT=/vega/stock/users/'you'/'job name'/'job name'.in
"${SCHRODINGER}/ligprep" -WAIT -OVERWRITE -HOST "localhost:16" -NJOBS 16 "${INPUT}"
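Submit and monitor the LigPrep job exactly as for Glide:
$ qsub 'job name'.sh
$ qstat -u 'your userid'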