The Schrödinger Suite is a collection of software for chemical and biochemical use. It offers various tools that facilitate the investigation of the structures, reactivity and properties of chemical systems. There is a campus site license for this software, supported by UITS. More information is available here: http://software.uconn.edu/schrodinger/ .
Load Modules
module load schrodinger/2022-4
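To confirm the module loaded correctly, you can list your loaded modules and check the SCHRODINGER environment variable, which Schrödinger installations conventionally set to the installation directory (it is assumed here that this module follows that convention):
module list
echo $SCHRODINGER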
You can then see a list of executable programs:
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots    diagnost  gfxinfo   jsc       material  phase_fq  qikprop   ska
biolumin  elements  glide     jws       mxmd      phase_hy  qiksim    ssp
blast     epik      HPC1.0Or  knime     oned_scr  phase_qs  qpld      sta
bmin      epikx     hppmap    licadmin  Original  phase_sc  qsite     structur
confgen   fep_abso  ifd       ligand_s  para_tes  pipeline  run       testapp
confgenx  fep_plus  ifd-md    ligprep   pfam      prime     schrodin  version.
consensu  fep_solu  impact    machid    phase_bu  prime_mm  shape_sc  vsw
constant  ffb_fep_  installa  macromod  phase_da  primex    shape_sc  watermap
covalent  ffbuilde  jaguar    maestro   phase_fi  qikfit    sitemap   wscore
desmond   generate  jobcontr
Host Configuration
The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. When you submit Schrödinger jobs, you submit them to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's hi-core parallel partition, requesting the number of cores given by the number at the end of the host name.
Below is a table listing the available Schrödinger hosts on the HPC, the partition each host submits the Schrödinger job to, and how many cores are allocated for each host/job.
Host | Partition | Cores allocated to the job |
---|---|---|
slurm-single | general | 24 |
slurm-parallel-24 | hi-core | 24 |
slurm-parallel-48 | hi-core | 48 |
slurm-parallel-96 | hi-core | 96 |
slurm-parallel-192 | hi-core | 192 |
slurm-parallel-384 | hi-core | 384 |
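If you want to see exactly how each of these hosts is defined (partition, processor count, and so on), the definitions live in the installation's schrodinger.hosts file, the standard file Schrödinger reads host entries from; the path below is assumed from the installation prefix shown earlier:
cat /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/schrodinger.hosts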
Example Application Usage
qsite
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
Exec: /apps2/schrodinger/2016-2/jaguar-v9.2/bin/Linux-x86_64
JobId: cn01-0-57b33646
Note that the numeric value of -PARALLEL should match the numeric value of the -HOST that you specified.
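For example, a 96-core run of the same input would pair the matching host and core count:
qsite -SAVE -PARALLEL 96 -HOST slurm-parallel-96 3IIS_Per1.in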
You can then view the status of your running job with sacct.
sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
39148        j3IIS_Per1   parallel   abc12345         24    RUNNING      0:0
391148.0       hostname              abc12345         24  COMPLETED      0:0
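Jobs that are still pending or running can also be checked with the standard SLURM queue command:
squeue -u $USER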
Run Test Suite
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG
Installation Oddities
Schrödinger comes pre-packaged with an outdated version of MPI (< 1.8.1), which means an old bug in the MPI-to-SLURM interface needs to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:
plm_slurm_args = --cpu_bind=boards
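A minimal sketch of applying this patch, assuming the bundled MPI is Open MPI (which is where the plm_slurm_args parameter belongs) and that it keeps its defaults in the standard openmpi-mca-params.conf file somewhere under the install tree; locate the actual file first, since its path varies between releases, and substitute the placeholder path below with the one that find returns:
find $SCHRODINGER -name openmpi-mca-params.conf
echo "plm_slurm_args = --cpu_bind=boards" >> /path/returned/by/find/openmpi-mca-params.conf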
Quantum Espresso
Quantum Espresso can be used to run various Schrödinger suite programs.
QE is the leading high-performance, open-source quantum-mechanical software package for nanoscale modeling of materials.
Before loading and running Quantum Espresso, it is recommended to load a global OpenMPI installation available through the SPACK package manager so that MPI communication works correctly.
A section on how to load openmpi through SPACK is available at the bottom of the following openmpi knowledge base article:
https://kb.uconn.edu/space/SH/26033783855/OpenMPI+Guide
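As a rough illustration, loading a SPACK-provided OpenMPI usually looks like the line below; the exact package spec is an assumption, so follow the linked guide for the site-specific commands:
spack load openmpi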
To load Quantum Espresso on the HPC for use with the Schrödinger suite after OpenMPI has been loaded, the following module load line can be called.
module load quantumespresso/7.1
The quantumespresso/7.1 module will automatically load the needed schrodinger/2022-4 module.
Quantum Espresso provides a run_qe executable that takes several command-line arguments to run the needed calculations.
Running run_qe without arguments prints its usage:
run_qe
Provide EXE_NAME
Usage: run_qe EXE_NAME TPP OPENMP INPUT_FILE
The options are:
EXE_NAME = name of the Schrödinger executable to run (maestro, desmond, etc.)
TPP = a numeric value (1, 2, 3, etc.)
OPENMP = MPI launch command (mpirun, mpiexec, etc.)
INPUT_FILE = the input file to be run
Example:
run_qe maestro 2 mpirun code.in
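If you would rather run this inside a batch job instead of from a login node, a minimal sketch of a submission script is shown below; the partition, task count, and SPACK OpenMPI spec are assumptions and should be adjusted to your allocation and to the linked OpenMPI guide:
#!/bin/bash
# Assumed partition and task count; adjust to the size of your calculation.
#SBATCH --partition=general
#SBATCH --ntasks=2

# Load a SPACK-provided OpenMPI first (assumed spec; see the OpenMPI guide above),
# then the Quantum Espresso module, which also pulls in schrodinger/2022-4.
spack load openmpi
module load quantumespresso/7.1

run_qe maestro 2 mpirun code.in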