The Schrödinger Suite is a collection of software for chemical and biochemical research. It offers tools for investigating the structures, reactivity, and properties of chemical systems. A campus site license for this software is available and is supported by UITS. More information is available at http://software.uconn.edu/schrodinger/.
Load Modules
module load schrodinger/2022-4
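A quick sanity check after loading the module; this is a minimal sketch that assumes the module sets the SCHRODINGER environment variable, which the multisim example further below relies on:
module list
echo "${SCHRODINGER}"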
You can then see a list of executable programs:
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots    desmond   gfxinfo   jsc       material  phase_hy  qiksim    ska
biolumin  elements  glide     jws       mxmd      phase_qs  qpld      ssp
blast     epik      hppmap    knime     oned_scr  phase_sc  qsite     sta
bmin      epikx     ifd       licadmin  para_tes  pipeline  run       structur
confgen   fep_abso  ifd-md    ligand_s  pfam      prime     schrodin  testapp
confgenx  fep_plus  impact    ligprep   phase_bu  prime_mm  shape_sc  vsw
consensu  fep_solu  installa  machid    phase_da  primex    shape_sc  watermap
constant  ffbuilde  jaguar    macromod  phase_fi  qikfit    shape_sc  wscore
covalent  generate  jobcontr  maestro   phase_fq  qikprop   sitemap
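Because the eight-column pr layout truncates longer names, you can re-run the same find without the column formatting and filter for a single program family; phase is just an illustrative pattern:
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | grep -i phase | sort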
You can list the utilities the same way by pointing the find command above at the utilities directory:
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/utilities/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
2d_sketc  canvasHC  ccp42cns  create_x  jnanny    neutrali  project_  store_re
abs       canvasHC  cg_chsr   create_x  jobcontr  numfreqc  project_  structal
align_bi  canvasJo  check_jo  create_x  jserver   obabel    propfilt  structca
align_hy  canvasKM  check_th  custom_p  jserver_  para_bmi  proplist  structco
align_li  canvasKP  ch_isost  desalter  lictest   para_epi  protassi  structsh
anharmon  canvasLC  ch_ligan  elim.sch  licutil   para_lig  py.test   structur
apbs      canvasLi  ch_water  epharmac  ligand_i  path_fin  query_gp  stu_add
applyhtr  canvas_m  cluster_  extract_  ligfilte  pbs_lic_  queue_bm  stu_dele
autoqsar  canvasMC  combinat  feature_  ligparse  pdbconve  randsub   stu_exec
AutoTSRe  canvasMD  combinat  feature_  lp_filte  phase_al  refconve  stu_extr
AutoTSRe  canvasML  combinat  ffld_ser  lp_label  phase_cl  render_k  stu_modi
AutoTSRe  canvasMo  combinat  flex_ali  lp_nored  phase_co  r_group_  stu_work
babel     canvasNn  compare_  flexlm_s  macro_pk  phase_co  r_group_  system_b
bandshap  canvasPC  configur  fragment  maegears  phase_de  ring_con  tautomer
buildloo  canvasPC  conf_tem  generate  maesubse  phase_hy  ring_tem  thermoch
canvas_a  canvasPC  convert_  generate  maetopqr  phase_hy  rmsdcalc  timestam
canvasBa  canvasPC  convert_  getpdb    mae_to_s  phase_hy  rsync_pd  uffmin
canvasCo  canvasPh  convert_  glide_en  makejbas  phase_mm  sdconver  unique_n
canvasCS  canvasPL  convert_  glide_me  make_lin  phase_pr  sdsubset  uniquesm
canvasCS  canvasPr  corefind  glide_so  make_r_l  phase_qs  secstruc  update_B
canvasCS  canvasPW  create_h  guardian  md5diges  phase_vo  seqconve  vis2gc
canvasDB  canvasRP  create_h  hetgrp_f  merge_du  postmort  serial_s  visdump
canvasFP  canvasSc  create_h  hit_expa  micro_pk  premin    shape_sc  watermap
canvasFP  canvasSD  create_h  impref    mol2conv  prepwiza  show_joi  wscore_m
canvasFP  canvasSe  create_i  ionizer   moldescr  profile_  smiles_t  wscore_r
canvasFP  canvasSO  create_m  ionizer_  mtzprint  profile_  spectrum  zip_temp
canvasFP  canvasSO  create_s  jagconve  multisim  project_  stereoiz  ziputil
canvasHC  canvasTr  create_w  jaguar_p
Example Application Usage
qsite
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
Exec: /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/jaguar-v11.8/bin/Linux-x86_64
JobId: job60-login5-1674022
Note that the numeric value passed to -PARALLEL should match the core count in the -HOST entry you specify (here, 24 for slurm-parallel-24).
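For example, moving the same job to the 48-core host entry (the slurm-parallel-48 entry used in the test-suite section below) changes both values together; this is an illustrative sketch, not output from an actual run:
qsite -SAVE -PARALLEL 48 -HOST slurm-parallel-48 3IIS_Per1.in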
You can then view the status of your running job with sacct.
sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
39148        j3IIS_Per1    hi-core   abc12345         24    RUNNING      0:0
39148.0        hostname              abc12345         24  COMPLETED      0:0
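To limit sacct to a single job and choose the columns explicitly, pass the JobID from the listing above (39148 here) with -j and --format; a minimal sketch:
sacct -j 39148 --format=JobID,JobName,Partition,Account,AllocCPUS,State,ExitCode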
Run Test Suite
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG
Installation Oddities
Schrödinger comes pre-packaged with an outdated version of MPI (< 1.8.1), so an old bug in the MPI-to-SLURM interface has to be patched manually by appending the following line to the bundled MPI's default configuration file:
plm_slurm_args = --cpu_bind=boards
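A sketch of applying the patch, assuming the bundled MPI is Open MPI (whose default settings live in an openmpi-mca-params.conf file); locate the file inside the installation tree rather than guessing its exact path:
# Find the bundled MPI's default config file under the install tree (path varies by release)
find "${SCHRODINGER}" -name 'openmpi-mca-params.conf'
# Then append the setting to the file that find reports, e.g.:
# echo 'plm_slurm_args = --cpu_bind=boards' >> <path reported by find>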
Command to call a Schrödinger utility
"${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_md_job_TREK1model_1ms < restOfCommandOptions >
Example Submission Script CPU
#!/bin/bash
#SBATCH --partition=general                     # Name of partition
#SBATCH --ntasks=126                            # Maximum CPU cores for job
#SBATCH --nodes=1                               # Ensure all cores are from the same node
#SBATCH --mem=492G                              # Request 492 GB of available RAM
#SBATCH --constraint='epyc128'                  # Request an AMD EPYC node for the job
#SBATCH --mail-type=END                         # Event(s) that trigger email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu    # Destination email address

module load schrodinger/2022-4

host=$(srun hostname | head -1)
nproc=$(srun hostname | wc -l)

<schrodinger program> -HOST ${host}:${nproc} <other options>
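Save the script (submit_cpu.sh is a placeholder name) and submit it with sbatch; squeue then shows it in the queue:
sbatch submit_cpu.sh
squeue -u $USER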
Example Submission Script GPU
#!/bin/bash
#SBATCH --partition=general-gpu                 # Name of partition
#SBATCH --ntasks=20                             # Maximum CPU cores for job
#SBATCH --nodes=1                               # Ensure all cores are from the same node
#SBATCH --mem=128G                              # Request 128 GB of available RAM
#SBATCH --gres=gpu:2                            # Request 2 GPU cards for the job
#SBATCH --mail-type=END                         # Event(s) that trigger email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu    # Destination email address

module load schrodinger/2022-4

host=$(srun hostname | head -1)
nproc=$(srun hostname | wc -l)

<schrodinger program> -HOST ${host}:${nproc} <other options>
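Before picking --gres, you can check which GPU resources the general-gpu partition advertises; a minimal sketch using sinfo's node-name and GRES output fields:
sinfo -p general-gpu -o "%N %G"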