The Schrödinger Suite is a collection of software for chemical and biochemical research. It offers various tools that facilitate the investigation of the structures, reactivity, and properties of chemical systems. A campus site license for this software is supported by UITS. More information is available here: http://software.uconn.edu/schrodinger/
Info |
---|
It is currently recommended to run Schrodinger through an interactive session because of issues encountered when submitting jobs through submission scripts. |
Start an interactive session:
Code Block |
---|
srun --x11 -N 1 -n 126 -p general --constraint=epyc128 --pty bash |
Info |
---|
Make sure to include the “--x11” flag for the GUI |
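The --x11 flag only helps if your SSH connection itself forwards X11. A minimal sketch of connecting with forwarding enabled (the login hostname shown is illustrative, not confirmed by this article):
Code Block |
---|
ssh -Y netid@login.storrs.hpc.uconn.edu |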
Load Modules
Once a node is assigned to the interactive srun job from the previous section, Schrodinger can be loaded from one of the modules available on the HPC cluster.
Code Block |
---|
module load schrodinger/2023-3 |
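If you are unsure which Schrodinger versions are installed, you can list the available modules first (a standard module-system query, not specific to this version):
Code Block |
---|
module avail schrodinger |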
You can then see a list of executable programs:
Code Block |
---|
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2023-3/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
(This prints the executable names in columns; entries include autots, desmond, epik, glide, jaguar, knime, ligprep, maestro, phase, prime, qikprop, qsite, sitemap, watermap, and xtb, among others.) |
You can also see a list of utilities with a similar find command:
Code Block |
---|
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2023-3/utilities/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
(This prints the utility names in columns; entries include the canvas* tools, ffbuilder, jobcontrol, ligfilter, mol2convert, pdbconvert, prepwizard, sdconvert, seqconvert, and structconvert, among others.) |
Host Configuration
The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. When you submit Schrödinger jobs, you do so to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's hi-core parallel partition, requesting the number of cores given by the number at the end of the host's name.
Below is a table listing the available Schrodinger hosts on the HPC cluster, the partition each host submits the Schrodinger job to, and the number of cores allocated to each host/job.
Host | Partition | Cores allocated to job
---|---|---
slurm-single | general | 24
slurm-parallel-24 | hi-core | 24
slurm-parallel-48 | hi-core | 48
slurm-parallel-96 | hi-core | 96
slurm-parallel-192 | hi-core | 192
slurm-parallel-384 | hi-core | 384
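The host definitions themselves normally live in the suite's schrodinger.hosts file. As a hedged check (this assumes the standard Schrödinger installation layout), you can inspect the configured hosts after loading the module:
Code Block |
---|
cat $SCHRODINGER/schrodinger.hosts |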
Example Application Usage
qsite
Code Block |
---|
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
Exec: /apps2/schrodinger/2016-2/jaguar-v9.2/bin/Linux-x86_64
JobId: cn01-0-57b33646 |
Note that the numeric value of -PARALLEL should match the core count at the end of the -HOST name you specify in the qsite command (here, slurm-parallel-24 pairs with -PARALLEL 24).
Jaguar
Code Block |
---|
jaguar run nameofinput.in |
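To send the same Jaguar input through one of the SLURM hosts from the table above, a sketch using the standard Schrödinger job control options (the input file name is a placeholder):
Code Block |
---|
jaguar run -HOST slurm-parallel-24 -PARALLEL 24 nameofinput.in |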
You can also target a specific Schrodinger application or utility directly through the ${SCHRODINGER} path; see "Command to call a Schrodinger utility" below.
You can then view the status of your running job with sacct.
Code Block |
---|
sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
39148        j3IIS_Per1    hi-core   abc12345         24    RUNNING      0:0
39148.0        hostname              abc12345         24  COMPLETED      0:0 |
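You can also watch the job in the queue with SLURM's squeue (a generic SLURM command, not Schrodinger-specific):
Code Block |
---|
squeue -u $USER |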
Run Test Suite
Code Block |
---|
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG |
Installation Oddities
Schrödinger comes pre-packaged with an outdated version of MPI (older than 1.8.1), which means an old bug in the MPI-to-SLURM interface needs to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:
Code Block |
---|
plm_slurm_args = --cpu_bind=boards |
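A minimal sketch of applying the patch, assuming the bundled Open MPI keeps its configuration in the standard openmpi-mca-params.conf file somewhere under $SCHRODINGER (the exact path varies by release, so locate it first):
Code Block |
---|
# Locate the bundled Open MPI MCA parameter file (path assumed; varies by release)
CONF=$(find "$SCHRODINGER" -name openmpi-mca-params.conf | head -n 1)
# Append the SLURM binding workaround
echo "plm_slurm_args = --cpu_bind=boards" >> "$CONF" |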
Quantum Espresso
Quantum Espresso can be used to run various Schrödinger suites. QE is the leading high-performance, open-source quantum mechanical software package for nanoscale modeling of materials.
It is recommended to load a global OpenMPI version, available through the SPACK package manager, to allow for MPI communication before loading and running Quantum Espresso.
A section on how to load openmpi through SPACK is available at the bottom of the following openmpi knowledge base article:
https://kb.uconn.edu/space/SH/26033783855/OpenMPI+Guide
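A minimal sketch of loading OpenMPI through SPACK (the exact package spec depends on what is installed on the cluster, so list the candidates first):
Code Block |
---|
spack find openmpi
spack load openmpi |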
Command to call a Schrodinger utility
Code Block |
---|
"${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_md_job_TREK1model_1ms <rest_of_command_options> |
Launching and disconnecting from an interactive fisbatch Schrodinger job
Schrodinger can be run interactively through srun or fisbatch.
The srun approach above works well for a single interactive calculation that can be left up and running without any disconnections. However, if a network or power interruption occurs while the interactive Schrodinger srun job is running, the srun job will end and progress will be lost. An alternative that avoids this risk is to submit an interactive fisbatch job to the HPC cluster.
Fisbatch is older and does have some bugs. It allocates a compute node to the job session, which allows users to spawn a calculation interactively through a screen session launched on the assigned compute node.
Users can also disconnect from the fisbatch job, and reattach to the job to track the progress of various calculations.
Here is an example to allocate an AMD EPYC compute node with 126 cores through fisbatch under the general partition:
Code Block |
---|
fisbatch -N 1 -n 126 -p general --constraint='epyc128'
FISBATCH -- waiting for JOBID jobidhere to start on cluster=slurm and partition=general
.........................!
Warning: Permanently added 'cXX,137.99.x.x' (ECDSA) to the list of known hosts.
FISBATCH -- Connecting to head node (cnXX)
|
Once a compute node is assigned and the fisbatch job is running, Schrodinger can be loaded normally through the module system. For Quantum Espresso work, load the quantumespresso module:
Code Block |
---|
module load quantumespresso/7.1 |
The quantumespresso/7.1 module will automatically load the needed schrodinger/2022-4 module.
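To confirm that both modules are loaded, a standard module-system check:
Code Block |
---|
module list |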
Quantum Espresso provides a run_qe executable that takes various command line options to run the needed calculations. Running it without arguments prints its expected usage:
Code Block |
---|
run_qe
Provide EXE_NAME
Usage: run_qe EXE_NAME TPP OPENMP INPUT_FILE
|
Info |
---|
The options are:
EXE_NAME = name of the Schrödinger EXE (maestro, desmond, etc.)
TPP = # value (1, 2, 3, etc.)
OPENMP = MPI command (mpirun, mpiexec, etc.)
INPUT_FILE = input file to be run |
Example:
Code Block |
---|
run_qe maestro 2 mpirun code.in |
Once Schrodinger is loaded, the Schrodinger commands become available and calculations can be launched through any of the many Schrodinger suites.
To disconnect from a fisbatch job, enter the following key strokes:
“Ctrl-a then Ctrl-d”
The screen session that fisbatch spawns on the compute node should detach and the fisbatch job will continue running.
To confirm that the job is still running, the following SLURM command can be entered:
Code Block |
---|
shist |
To reattach to the fisbatch job, the following command can be entered:
Code Block |
---|
fisattach jobidhere |
The fisbatch screen session for the specified job should reattach, and the Schrodinger calculation should still be running.
If a network/power interruption happens while you are attached to a running fisbatch job, the job could potentially end. Such interruptions will not affect a job that is detached and running, unless the assigned node itself runs into hardware or network problems.