The Storrs HPC cluster is a shared resource available to researchers on all UConn campuses. Use of the cluster is subject to all applicable university policies, including the Information Technology Acceptable Use policy. The Storrs HPC cluster cannot be used to generate or store data classified as Sensitive University Data or data covered by the university's Export Control Policy. All data stored on the cluster is subject to these restrictions, and data that is not in compliance may be removed. Please familiarize yourself with the data storage guidelines described in the Data Storage Guide.
Additionally, before using the cluster, please familiarize yourself with the procedures listed below.
📘 Instructions
Scheduled Jobs
All computational jobs must be submitted to the cluster through the job scheduler.
Please read the SLURM Guide for helpful information on using the scheduler.
The run time and resource limits for scheduled jobs are listed below; an example submission script follows the table.
Job property | Default partition | | | |
---|---|---|---|---|
Run time | 12 hours | 7 days | 6 hours | 30 minutes |
Nodes | 8 | 4 | 16 | 1 |
Concurrent jobs | 8 | | | |
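As a minimal sketch of a scheduled job, the batch script below stays within the default-partition limits above. The partition name `general` and the executable `my_program` are assumptions for illustration, not confirmed Storrs settings; see the SLURM Guide for the authoritative options.

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=general       # assumption: default partition name
#SBATCH --nodes=1                 # within the 8-node limit above
#SBATCH --ntasks=4                # request 4 cores for this job
#SBATCH --time=02:00:00           # within the 12-hour run time limit
#SBATCH --output=example_%j.out   # %j expands to the job ID

# "my_program" is a hypothetical executable; replace with your own.
srun ./my_program
```

Submit the script with `sbatch example.sh` and check its status with `squeue -u $USER`.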
Unscheduled Programs
Programs running on a login node (login.storrs.hpc.uconn.edu) without using the job scheduler are subject to the restrictions listed below. Any program that violates these restrictions may be throttled or terminated without notice. For interactive work that needs more than these limits, request an interactive session through the scheduler instead (see the example after the table).
Run time limit (minutes) | CPU limit | Memory limit
---|---|---
20 | 5% | 5%
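One hedged sketch of requesting such an interactive session is shown below: it asks SLURM for an interactive shell on a compute node, where the login-node restrictions do not apply. The partition name `debug` and the 30-minute limit are assumptions drawn from the scheduled-job limits table above:

```bash
# Request a single-core interactive shell on a compute node; the
# partition name "debug" is an assumption based on the
# 30-minute/1-node column in the limits table above.
srun --partition=debug --nodes=1 --ntasks=1 --time=00:30:00 --pty bash
```

When the shell starts you are on a compute node; exit the shell to release the allocation.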
Below is a list of programs that are allowed on the login node without restrictions:
awk
basemount
bash
bzip
chgrp
chmod
cmake
comsollauncher
cp
du
emacs
find
fort
gcc
gfortran
grep
gunzip
gzip
icc
ifort
jservergo
less
ls
make
more
mv
nano
ncftp
nvcc
perl
rm
rsync
ruby
setfacl
sftp
smbclient
ssh
tail
tar
ukbfetch
vim
wget
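For illustration, light file-management work such as archiving and transferring results uses only programs from the list above and is appropriate on the login node. The paths and the destination host below are hypothetical:

```bash
# Archive and compress a results directory (tar and gzip are both allowed).
tar -czf results.tar.gz results/

# Copy the archive off the cluster with rsync over ssh (both allowed).
rsync -av results.tar.gz user@example-host.uconn.edu:/home/user/
```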