...
Info: Make sure to include the “--x11” flag for the GUI.
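For reference, an interactive request with X11 forwarding might look like the sketch below; the partition name and resource counts are placeholders and should match whatever was requested in the previous section.

srun --ntasks=1 --partition=general --x11 --pty bash   # placeholder resources; --x11 enables GUI forwarding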
Load Modules
Once a node has been assigned to the interactive srun job from the previous section, Schrodinger can be loaded from one of the several modules available on the HPC cluster.
module load schrodinger/2023-3
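If you are unsure which Schrodinger versions are installed, the modules system can list the matching modulefiles (a standard module subcommand, not specific to Schrodinger):

module avail schrodinger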
...
qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in
Launching JAGUAR under jobcontrol.
Exec: /gpfs/sharedfs1/admin/hpc2.0/apps/schrodinger/2022-4/jaguar-v11.8/bin/Linux-x86_64
JobId: job60-login5-1674022
Note that the numeric value passed to -PARALLEL should match the core count in the name of the -HOST entry that you specify (24 in the example above).
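For example, assuming the slurm-parallel-48 host entry shown in the test-suite commands below, a 48-core run of the same job would pair the two values like this (reusing the input file from the example above):

qsite -SAVE -PARALLEL 48 -HOST slurm-parallel-48 3IIS_Per1.in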
...
Run Test Suite
testapp -HOST slurm-parallel-24 -DEBUG
para_testapp -HOST slurm-parallel-48 -DEBUG
Installation Oddities
...
"${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_md_job_TREK1model_1ms <restOfCommandOptions>
Example Submission Script CPU
#!/bin/bash
#SBATCH --partition=general                  # Name of Partition
#SBATCH --ntasks=126                         # Maximum CPU cores for job
#SBATCH --nodes=1                            # Ensure all cores are from the same node
#SBATCH --mem=492G                           # Request 492 GB of available RAM
#SBATCH --constraint='epyc128'               # Request AMD EPYC node for the job
#SBATCH --mail-type=END                      # Event(s) that triggers email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu # Destination email address
module load schrodinger/2022-4
<schrodinger program> <other options>
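Assuming the script above is saved as, say, schrodinger_cpu.sh (a hypothetical filename), it can be submitted and monitored with the standard Slurm commands:

sbatch schrodinger_cpu.sh
squeue -u $USER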
Example Submission Script GPU
#!/bin/bash
#SBATCH --partition=general-gpu              # Name of Partition
#SBATCH --ntasks=20                          # Maximum CPU cores for job
#SBATCH --nodes=1                            # Ensure all cores are from the same node
#SBATCH --mem=128G                           # Request 128 GB of available RAM
#SBATCH --gres=gpu:2                         # Request 2 GPU cards for the job
#SBATCH --mail-type=END                      # Event(s) that triggers email notification (BEGIN,END,FAIL,ALL)
#SBATCH --mail-user=first.lastname@uconn.edu # Destination email address
module load schrodinger/2022-4
<schrodinger program> <other options>
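To confirm that the requested GPUs were actually allocated, a line such as the following can be added to the script right after the module load (this assumes nvidia-smi is on the default path of the GPU nodes):

nvidia-smi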