The Schrödinger Suite is a collection of software for chemical and biochemical use. It offers tools that facilitate the investigation of the structures, reactivity, and properties of chemical systems. There is a campus site license for this software, supported by UITS. More information is available at http://software.uconn.edu/schrodinger/.

Load Modules

Code Block
module load schrodinger/2022-4
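
After loading the module, the Schrödinger command-line tools should be available on your PATH (this assumes the module prepends the install directory to PATH, as environment modules typically do); a quick check:

Code Block
module list
which maestro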

The executables that ship with this release can be listed as follows (names longer than eight characters are truncated by the column formatting):

Code Block
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots    desmond   gfxinfo   jsc       material  phase_hy  qiksim    ska
biolumin  elements  glide     jws       mxmd      phase_qs  qpld      ssp
blast     epik      hppmap    knime     oned_scr  phase_sc  qsite     sta
bmin      epikx     ifd       licadmin  para_tes  pipeline  run       structur
confgen   fep_abso  ifd-md    ligand_s  pfam      prime     schrodin  testapp
confgenx  fep_plus  impact    ligprep   phase_bu  prime_mm  shape_sc  vsw
consensu  fep_solu  installa  machid    phase_da  primex    shape_sc  watermap
constant  ffbuilde  jaguar    macromod  phase_fi  qikfit    shape_sc  wscore
covalent  generate  jobcontr  maestro   phase_fq  qikprop   sitemap
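
Most of these executables print a usage summary when invoked with -h (the usual Schrödinger command-line convention; the exact options vary by program and release), for example:

Code Block
glide -h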

Host Configuration

The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can run Schrödinger commands directly from a login node. Schrödinger jobs are submitted to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's hi-core parallel partition, requesting the number of cores given by the number at the end of its name.

Below is a table listing the available Schrödinger hosts on the HPC cluster, the partition each host submits jobs to, and how many cores are allocated for each host/job; an example command using one of these hosts follows the table.

Host                Partition   Cores allocated to job
slurm-single        general     24
slurm-parallel-24   hi-core     24
slurm-parallel-48   hi-core     48
slurm-parallel-96   hi-core     96
slurm-parallel-192  hi-core     192
slurm-parallel-384  hi-core     384
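
For example, a LigPrep run could be distributed over 48 cores through the slurm-parallel-48 host. This is only a sketch: ligprep is one of the executables listed above, the input and output file names are placeholders, and -NJOBS (which splits the input into subjobs) should be tuned to your data:

Code Block
ligprep -ismi ligands.smi -omae ligands_prepared.mae -HOST slurm-parallel-48 -NJOBS 48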

Example Application Usage

qsite

Code Block
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in 
Launching JAGUAR under jobcontrol.
Exec: /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/jaguar-v11.8/bin/Linux-x86_64
JobId: job60-login5-1674022

The corresponding SLURM job can be monitored with sacct:

Code Block
sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
39148       j3IIS_Per1   hi-core   abc12345         24    RUNNING      0:0 
39148.0        hostname              abc12345         24  COMPLETED      0:0
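
Because the job runs under Schrödinger's job control, its status can also be queried with the jobcontrol utility from the listing above:

Code Block
jobcontrol -list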

Run Test Suite

Code Block
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG

Installation Oddities

Schrödinger comes pre-packaged with an outdated version of MPI (< 1.8.1), which means an old bug in the MPI-to-SLURM interface has to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:

Code Block
plm_slurm_args = --cpu_bind=boards
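
Assuming the bundled MPI is Open MPI, whose default MCA parameter file is etc/openmpi-mca-params.conf, the patch amounts to appending the line to that file inside the install tree; the path below is a placeholder, not the actual location on the cluster:

Code Block
echo "plm_slurm_args = --cpu_bind=boards" >> <schrodinger install>/<bundled mpi>/etc/openmpi-mca-params.conf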

Example Submission Script

Code Block
#!/bin/bash
#SBATCH --partition=general-gpu                # Name of Partition
#SBATCH --ntasks=20                            # Maximum CPU cores for job
#SBATCH --nodes=1                              # Ensure all cores are from the same node
#SBATCH --mem=128G                             # Request 128 GB of available RAM
#SBATCH --gres=gpu:2                           # Request 2 GPU cards for the job
#SBATCH --mail-type=END                        # Event(s) that triggers email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu   # Destination email address

module load schrodinger/2022-4

host=$(srun hostname | head -n 1)   # first node allocated to the job
nproc=$(srun hostname | wc -l)      # one hostname is printed per task, so this counts the allocated tasks
<schrodinger program> -HOST ${host}:${nproc} <other options>
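
The script can then be handed to SLURM with sbatch; the file name below is arbitrary:

Code Block
sbatch schrodinger_job.sh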
