...

Code Block
find /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/ -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
(output: a multi-column listing of the suite's command-line executables, including autots, bmin, confgen, confgenx, desmond, elements, epik, glide, ifd, jaguar, jobcontrol, knime, licadmin, ligprep, macromodel, maestro, prime, primex, qikprop, qiksim, qsite, sitemap, watermap, and wscore, among others)

Host Configuration

The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. Schrödinger jobs are submitted to named hosts, and we have created the following: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's hi-core parallel partition, requesting the number of cores given by the number at the end of its name.
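As a minimal sketch, a Glide docking run could be launched as shown below. The input file name (dock.in) is a placeholder, and setting SCHRODINGER explicitly is only one way to make the suite's executables reachable (your site may provide a module instead); only the install path and host names come from this page.

Code Block
# Minimal sketch: point SCHRODINGER at the 2022-4 install (path from this page)
export SCHRODINGER=/gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4
# Launch a Glide job on the 48-core host; Schrödinger hands the work to SLURM.
# "dock.in" is a hypothetical input file, not one provided on this page.
$SCHRODINGER/glide -HOST slurm-parallel-48 dock.in
# Check on running jobs with the bundled job control utility
$SCHRODINGER/jobcontrol -list

The other hosts work the same way; choosing slurm-parallel-96 instead, for example, simply requests 96 cores from the hi-core partition.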

...