The Schrödinger Suite is a collection of software for chemical and biochemical research. It offers a variety of tools for investigating the structure, reactivity, and properties of chemical systems. The software is covered by a campus site license supported by UITS. More information is available at http://software.uconn.edu/schrodinger/ .
Load Modules
module load schrodinger/2022-4
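If you want to see which Schrödinger versions are installed, or to confirm that the module loaded, the standard module commands below should work; they are generic environment-module commands, not Schrödinger-specific tools:
module avail schrodinger    # list the Schrodinger versions installed on the cluster
module list                 # confirm that schrodinger/2022-4 is now loaded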
You can then see a list of executable programs:
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots     desmond    gfxinfo   jsc       material  phase_hy  qiksim    ska
biolumin   elements   glide     jws       mxmd      phase_qs  qpld      ssp
blast      epik       hppmap    knime     oned_scr  phase_sc  qsite     sta
bmin       epikx      ifd       licadmin  para_tes  pipeline  run       structur
confgen    fep_abso   ifd-md    ligand_s  pfam      prime     schrodin  testapp
confgenx   fep_plus   impact    ligprep   phase_bu  prime_mm  shape_sc  vsw
consensu   fep_solu   installa  machid    phase_da  primex    shape_sc  watermap
constant   ffbuilde   jaguar    macromod  phase_fi  qikfit    shape_sc  wscore
covalent   generate   jobcontr  maestro   phase_fq  qikprop   sitemap
Host Configuration
The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. When you submit Schrödinger jobs, you submit them to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's hi-core parallel partition, requesting the number of cores given by the number at the end of its name.
Below is a table listing the available Schrödinger hosts on the HPC cluster, the partition each host submits the Schrödinger job to, and how many cores are allocated for each host/job.
Host | Partition | Cores allocated to job |
---|---|---|
slurm-single | general | 24 |
slurm-parallel-24 | hi-core | 24 |
slurm-parallel-48 | hi-core | 48 |
slurm-parallel-96 | hi-core | 96 |
slurm-parallel-192 | hi-core | 192 |
slurm-parallel-384 | hi-core | 384 |
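Any Schrödinger command that accepts the -HOST option can be pointed at one of these hosts; the job then lands on the corresponding partition and can be monitored with the standard SLURM tools. In the sketch below, some_schrodinger_command and input.in are placeholders rather than a specific program:
some_schrodinger_command input.in -HOST slurm-parallel-48   # placeholder command; requests 48 cores on hi-core
squeue -u $USER -p hi-core                                  # list your jobs currently on the hi-core partition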
Example Application Usage
qsite
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
Exec: /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/jaguar-v11.8/bin/Linux-x86_64
JobId: job60-login5-1674022
Note that the numeric value of -PARALLEL should match the numeric value of the -HOST that you specified.
You can then view the status of your running job with sacct.
sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
       39148 j3IIS_Per1    hi-core   abc12345         24    RUNNING      0:0
     39148.0   hostname              abc12345         24  COMPLETED      0:0
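If you want more detail on a single job, sacct can also be queried by job ID with a custom field list; the job ID below is the one from the example output and the fields are just one reasonable selection:
sacct -j 39148 --format=JobID,JobName,Partition,AllocCPUS,State,Elapsed,ExitCode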
Run Test Suite
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG
Installation Oddities
Schrödinger comes pre-packaged with an outdated version of Open MPI (< 1.8.1), so an old bug in the MPI-to-SLURM interface has to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:
plm_slurm_args = --cpu_bind=boards
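As a sketch only: for a stock Open MPI installation the system-wide MCA parameter file is etc/openmpi-mca-params.conf under the MPI install prefix, so the patch amounts to appending the line there. OPENMPI_ETC below is a placeholder for the bundled MPI's etc/ directory, whose exact location varies between Schrödinger releases:
# OPENMPI_ETC is a placeholder; point it at the bundled Open MPI's etc/ directory
echo "plm_slurm_args = --cpu_bind=boards" >> "$OPENMPI_ETC/openmpi-mca-params.conf"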
Quantum Espresso
Quantum Espresso (QE) can be used to run various Schrödinger suites.
QE is the leading high-performance, open-source quantum mechanical software package for nanoscale modeling of materials.
Before loading and running Quantum Espresso, it is recommended to load a global openmpi installation available through the SPACK package manager so that MPI communication is available.
A section on how to load openmpi through SPACK is available at the bottom of the following openmpi knowledge base article:
https://kb.uconn.edu/space/SH/26033783855/OpenMPI+Guide
Once openmpi has been loaded, Quantum Espresso can be loaded on the HPC cluster for use with the Schrödinger suite with the following module load command:
module load quantumespresso/7.1
The quantumespresso/7.1 module will automatically load the needed schrodinger/2022-4 module.
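Before submitting anything, you can confirm that the modules are loaded and that the run_qe wrapper is on your PATH with standard commands:
module list     # should show quantumespresso/7.1 and schrodinger/2022-4 (plus openmpi if you loaded it)
which run_qe    # prints the location of the run_qe wrapper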
Quantum Espresso provides a run_qe executable that takes several command line arguments to run the needed calculations. Running run_qe without arguments prints its usage:
run_qe
Provide EXE_NAME
Usage: run_qe EXE_NAME TPP OPENMP INPUT_FILE
The available options are:
EXE_NAME=name of the Schrödinger executable (maestro, desmond, etc.)
TPP=numeric value (1, 2, 3, etc.)
OPENMP=MPI command (mpirun, mpiexec, etc.)
INPUT_FILE=input file to be run
Example of running a maestro job with MPI:
module load openmpi/4.1.4 quantumespresso/7.1
run_qe maestro 2 mpirun code.in
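If you would rather run this through the scheduler than from a login node, the same two lines can be wrapped in a minimal SLURM batch script; the partition, task count, job name, and input file below are placeholders to adapt to your own allocation:
#!/bin/bash
#SBATCH --partition=general      # placeholder: pick a partition you have access to
#SBATCH --ntasks=24              # placeholder: match the MPI ranks you intend to use
#SBATCH --job-name=qe_maestro    # placeholder job name

module load openmpi/4.1.4 quantumespresso/7.1
run_qe maestro 2 mpirun code.in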