The Schrödinger Suite is a collection of software for chemical and biochemical modeling. It offers various tools that facilitate the investigation of the structures, reactivity, and properties of chemical systems. There is a campus site license for this software, supported by UITS. More information is available at http://software.uconn.edu/schrodinger/.
Info |
---|
It is currently recommended to run Schrödinger through an interactive session because of issues encountered when submitting jobs through submission scripts. |
Start an interactive session:
Code Block |
---|
srun --x11 -N 1 -n 126 -p general --constraint=epyc128 --pty bash |
Info |
---|
Make sure to include the “--x11” flag for the GUI |
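To verify that X11 forwarding is active inside the session, a quick sanity check is to confirm that SLURM has set the DISPLAY variable (sketch; any small X11 client would also work):
Code Block |
---|
echo $DISPLAY |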
Load Modules
Once a node is assigned to the interactive srun job from the previous section, Schrödinger can be loaded from one of the modules available on the HPC cluster.
Code Block |
---|
module load schrodinger/2022-4 |
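If a different release is needed, the installed Schrödinger modules can be listed first (the versions shown depend on what is currently deployed on the cluster):
Code Block |
---|
module avail schrodinger |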
You can then see a list of executable programs:
Code Block |
---|
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
# (prints a multi-column listing of programs such as autots, bmin, confgen, desmond, epik, glide, jaguar, knime, licadmin, ligprep, macromodel, maestro, the phase_* tools, prime, qikprop, qsite, sitemap, testapp, watermap, wscore, ...) |
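Because the interactive session was started with the --x11 flag, the Maestro GUI (listed among the executables above) can also be launched from the allocated node. This is a minimal sketch, assuming the module places maestro on your PATH:
Code Block |
---|
maestro & |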
Example Application Usage
qsite
Code Block |
---|
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
JobId: job60-login5-1674022 |
You can also see a list of utilities with the same find command above:
Code Block |
---|
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/utilities/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
# (prints a multi-column listing of utilities such as desalter, getpdb, hetgrp_ffgen, ionizer, jobcontrol, ligfilter, ligparse, pdbconvert, prepwizard, sdconvert, structconvert, ...) |
Note that the numeric value of -PARALLEL should match the numeric value of the -HOST that you specified.
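For example, a 48-core run would pair the slurm-parallel-48 host entry (also used by the test suite below) with -PARALLEL 48; the input file name here is only a placeholder:
Code Block |
---|
qsite -SAVE -PARALLEL 48 -HOST slurm-parallel-48 myinput.in |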
You can then view the status of your running job with sacct.
Code Block |
---|
sacct
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
39148 j3IIS_Per1 hi-core abc12345 24 RUNNING 0:0
39148.0 hostname abc12345 24 COMPLETED 0:0 |
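If the default sacct listing becomes hard to read, the output can be narrowed to a single job and a chosen set of columns; the job ID below is just the example from above:
Code Block |
---|
sacct -j 39148 --format=JobID,JobName,Partition,AllocCPUS,State,ExitCode |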
Run Test Suite
Code Block |
---|
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG |
Installation Oddities
Schrödinger comes pre-packaged with an outdated version of MPI (< 1.8.1), which means an old bug in the MPI->SLURM interface needs to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:
Code Block |
---|
plm_slurm_args = --cpu_bind=boards |
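As a sketch of how the patch might be applied, assuming the bundled MPI is Open MPI and that you have write access to the installation (the config file location varies by release, so locate it first rather than trusting any hard-coded path):
Code Block |
---|
# Locate the bundled MPI's default config file under the installation
find "$SCHRODINGER" -name "openmpi-mca-params.conf"
# Append the workaround to the file reported by the find command above
echo "plm_slurm_args = --cpu_bind=boards" >> <path reported by find> |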
Example Submission Script CPU
Code Block |
---|
#!/bin/bash
#SBATCH --partition=general # Name of Partition
#SBATCH --ntasks=126 # Maximum CPU cores for job
#SBATCH --nodes=1 # Ensure all cores are from the same node
#SBATCH --mem=492G # Request 492 GB of available RAM
#SBATCH --constraint='epyc128' # Request AMD EPYC node for the job
#SBATCH --mail-type=END # Event(s) that triggers email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu # Destination email address
module load schrodinger/2022-4
host=`srun hostname|head -1`
nproc=`srun hostname|wc -l`
<schrodinger program> -HOST ${host}:${nproc} <other options> |
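Assuming the script above is saved as, say, schrodinger_cpu.sh (the file name is arbitrary), it can be submitted and monitored with the standard SLURM commands:
Code Block |
---|
sbatch schrodinger_cpu.sh
squeue -u $USER |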
Example Submission Script GPU
Code Block |
---|
#!/bin/bash
#SBATCH --partition=general-gpu # Name of Partition
#SBATCH --ntasks=20 # Maximum CPU cores for job
#SBATCH --nodes=1 # Ensure all cores are from the same node
#SBATCH --mem=128G # Request 128 GB of available RAM
#SBATCH --gres=gpu:2 # Request 2 GPU cards for the job
#SBATCH --mail-type=END # Event(s) that triggers email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu # Destination email address
module load schrodinger/2022-4
host=`srun hostname|head -1`
nproc=`srun hostname|wc -l`
<schrodinger program> -HOST ${host}:${nproc} <other options> |
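As an illustration of the generic last line in these scripts, a Jaguar input could be run across the allocated resources by combining the jaguar command shown below with the -HOST syntax from the scripts (sketch only; the input file name is a placeholder):
Code Block |
---|
jaguar run nameofinput.in -HOST ${host}:${nproc} |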
Example Application Usage
qsite
Code Block |
---|
qsite -SAVE -PARALLEL 24 3IIS_Per1.in |
Note that the numeric value of -PARALLEL should match the numeric value of the -n declaration that you specified in the previous srun command.
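For example, a smaller interactive allocation and a matching qsite run might look like this (sketch; same example input as above):
Code Block |
---|
srun --x11 -N 1 -n 24 -p general --pty bash
qsite -SAVE -PARALLEL 24 3IIS_Per1.in |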
Jaguar
Code Block |
---|
jaguar run nameofinput.in |
There is also a way to target a specific Schrödinger application or utility directly; see the "Command to call a Schrodinger utility" example below.
Run Test Suite
Code Block |
---|
testapp -DEBUG
para_testapp -DEBUG |
Command to call a Schrodinger utility
Code Block |
---|
"${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_md_job_TREK1model_1ms < restOfCommandOptions > |
Launching and disconnecting from an interactive fisbatch Schrodinger job
Schrödinger can be run interactively through srun or fisbatch.
The srun approach above works well for a single interactive calculation that can be left running without any disconnections.
If there is a network or power interruption while the interactive Schrödinger srun job is running, the srun job will end and progress will be lost.
An alternative that avoids losing an interactive job to network or power interruptions is to submit an interactive fisbatch job to the HPC cluster.
Fisbatch is older and does have some bugs.
Fisbatch allocates a compute node to the job session and lets users spawn calculations interactively through a screen session that launches on the assigned compute node.
Users can also disconnect from the fisbatch job and later reattach to it to track the progress of their calculations.
Here is an example that allocates an AMD EPYC compute node with 126 cores through fisbatch under the general partition:
Code Block |
---|
fisbatch -N 1 -n 126 -p general --constraint='epyc128'
FISBATCH -- waiting for JOBID jobidhere to start on cluster=slurm and partition=general
.........................!
Warning: Permanently added 'cXX,137.99.x.x' (ECDSA) to the list of known hosts.
FISBATCH -- Connecting to head node (cnXX) |
Once a compute node is assigned and the fisbatch job is running, Schrödinger can be loaded through its module as usual.
Code Block |
---|
module load schrodinger/2023-3 |
Once the module is loaded, the Schrödinger commands become available and calculations can be launched from any of the Schrödinger programs.
To disconnect from a fisbatch job, enter the following key strokes:
“Ctrl-a then Ctrl-d”
The screen session that fisbatch spawns on the compute node should detach and the fisbatch job will continue running.
To confirm that the job is still running, the following SLURM command can be entered:
Code Block |
---|
shist |
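Standard SLURM commands work as well; for example, squeue can confirm that the detached fisbatch job is still in the RUNNING state:
Code Block |
---|
squeue -u $USER |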
To reattach to the fisbatch job, the following command can be entered:
Code Block |
---|
fisattach jobidhere |
The screen session for that fisbatch job should reattach, and the Schrödinger calculation should still be running.
If a network or power interruption happens while you are attached to a running fisbatch job, the session could potentially end. Such interruptions will not affect a job that is detached and running, unless the assigned node itself runs into hardware or network problems.