
Getting Started

How do I get an account?


If you don’t already have an account, please fill out the cluster application form.

Students and postdoctoral research associates who are requesting an account will need their advisor’s (PI’s) NetID so that we can verify their membership in the advisor’s research group. If you don’t know your advisor’s NetID, you can look it up on the UConn PhoneBook.

...


Short answer: Faculty can purchase priority access for 5 years if they pay the upfront cost for the nodes.

Long answer: High-priority access is available under a “condo model,” in which faculty purchase semi-dedicated nodes that are made available to all users when there are unused compute cycles. Under the condo model, faculty researchers fund the capital equipment costs of individual compute nodes, while the university funds the operating costs of running these nodes for five years. Faculty who purchase compute nodes receive access to equivalent resources at a higher priority than other researchers, and they can designate others, such as their graduate students and postdoctoral researchers, to receive access at the same priority level. With priority access, computational jobs are moved higher in the queuing system and, in most cases, begin execution within twelve hours, depending upon other FairShare factors. A priority user can utilize their resources indefinitely. All access to resources is managed through the cluster’s job scheduler; when a priority user is not using their assigned resources, the nodes are made available to all UConn researchers for general use.

...

Using the HPC

How do I log in to the Storrs HPC when I am off campus?

...


Long wait times for your jobs? Errors about unavailable resources? We’ve been there and understand how frustrating it can be when jobs take a long time to start. It’s an unfortunate consequence of having such a strong computational research community at UConn: LOTS of incredible research happens here, but it also means that LOTS of people are competing for resources. There’s no getting around that problem, but there are two steps we can take to increase the odds that our jobs start running as soon as possible.

  1. Check what resources are available before we submit a job.

  2. Target our submission to those available resources.

This FAQ will offer guidance on how to do both of those things.


Checking for Available Resources

The sinfo command below gives a high-level view of which nodes are fully available (“idle”), in partial use (“mix”), fully allocated (“alloc”), or otherwise in need of maintenance (any state other than idle, mix, or alloc). The nodes are listed in order by partition.

Code Block
sinfo -o '%14P %.5a %.10l %.6D %.6t %30N %b'

The output for that command will look like this, but it will be much longer and provide info on every partition.

Code Block
PARTITION      AVAIL  TIMELIMIT  NODES  STATE NODELIST                       ACTIVE_FEATURES
GeoSciMP          up 1-00:00:00     38    mix cn[410-447]                    location=local,epyc64,cpuonly
class             up    4:00:00      3  inval cn[407-409]                    location=local,skylake,cpuonly
class             up    4:00:00      1  maint cn348                          location=local,skylake,cpuonly
class             up    4:00:00     19  down* cn[244-246,252,254-255,268-271 location=local,haswell,cpuonly
class             up    4:00:00      7  down* cn[335-337,339-340,373,406]    location=local,skylake,cpuonly
class             up    4:00:00      1   comp cn376                          location=local,skylake,cpuonly
class             up    4:00:00     10    mix cn[333,352,362,374-375,391,402 location=local,skylake,cpuonly
class             up    4:00:00     25  alloc cn[329-332,334,338,341-342,345 location=local,skylake,cpuonly
... (etc.)

The above command gives us an overarching picture of usage on the cluster. From there, we can use a more targeted command to get more information on individual nodes within a partition, such as how many cores or GPUs are in use and how many are available. The base sinfo command below targets the priority-gpu partition, but we could amend it to target any other partition, such as hi-core.

Code Block
sinfo -p priority-gpu -t idle,mix -o%10n%20C%10G%10t%20R%b
HOSTNAMES CPUS(A/I/O/T)       GRES      STATE     PARTITION           ACTIVE_FEATURES
gpu06     13/23/0/36          gpu:1     mix       priority-gpu        location=local,gpu,v100,skylake
gpu20     36/28/0/64          gpu:3     mix       priority-gpu        location=local,epyc64,a100,gpu
gpu21     25/39/0/64          gpu:3     mix       priority-gpu        location=local,epyc64,a100,gpu
gpu22     3/61/0/64           gpu:3     mix       priority-gpu        location=local,epyc64,a100,gpu
gpu23     33/31/0/64          gpu:1     mix       priority-gpu        location=local,epyc64,a100,gpu
gtx02     2/18/0/20           gpu:3     mix       priority-gpu        location=local,gpu,gtx,skylake
gtx03     12/8/0/20           gpu:3     mix       priority-gpu        location=local,gpu,gtx,skylake
gtx08     1/19/0/20           gpu:2     mix       priority-gpu        location=local,gpu,gtx,skylake
gtx11     2/18/0/20           gpu:2     mix       priority-gpu        location=local,gpu,gtx,skylake
gtx15     28/4/0/32           gpu:8     mix       priority-gpu        location=local,gpu,rtx,skylake
gtx09     0/20/0/20           gpu:2     idle      priority-gpu        location=local,gpu,gtx,skylake

The column titled “CPUS(A/I/O/T)” tells us how many cores are in use and how many are available. “A” stands for Allocated, “I” stands for Idle, and “T” stands for Total. (“O” stands for Other, but you can ignore that column.) Since there are 39 cores in the “Idle” column for gpu21, 39 cores are available to use on that node. However, all 3 of the GPUs on gpu21 are in use, so we can’t use any GPUs there. If I only needed cores and no GPUs, I could target gpu21.
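
For example, here is a minimal sketch of #SBATCH lines for a CPU-only job aimed at gpu21 (the node name and core count are illustrative, and you still need access to a partition containing that node, along with the partition, account, and QOS flags described below):

Code Block
#SBATCH --nodelist=gpu21       # node with idle cores but no free GPUs
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20     # request a subset of the idle cores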

In summary, these two commands can give us a picture of what partitions have resources available, and then what resources are available on individual nodes within that partition.


Targeting a specific partition

The next step is submitting a job that targets a specific partition. If you’re not sure how to do that, please visit our SLURM Guide, where you will find examples of submission scripts that target different partitions and architectures. Another key part of targeting a specific partition is knowing which partitions you are allowed to use and which “account” and “QOS” you must use to access them. To check which partitions you’re allowed to use and how to access them, you can use this command.

Code Block
sacctmgr list assoc user=`whoami` -o format=user,account,partition%20,qos%20
      User    Account            Partition                  QOS 
---------- ---------- -------------------- -------------------- 
net10004    jth10001               hi-core              general 
net10004    jth10001           general-gpu              general 
net10004    jth10001                 debug              general 
net10004    jth10001               lo-core              general 
net10004    jth10001               general              general
net10004    jth10001          priority-gpu          jth10001gpu

This tells me that I have access to 6 partitions. To access the priority-gpu partition, I need to include the three flags below in the #SBATCH header of my submission script. These values differ for every individual, so you will have to substitute the partitions you have access to and the account and QOS associated with your account.

Code Block
#SBATCH -p priority-gpu     # partition I want to access
#SBATCH -A jth10001         # account I am associated with
#SBATCH -q jth10001gpu      # QOS needed to access partition
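
Putting it all together, a minimal submission script using those flags might look like the sketch below (the resource requests and the program at the end are placeholders; substitute your own partition, account, and QOS as described above):

Code Block
#!/bin/bash
#SBATCH -p priority-gpu       # partition I want to access
#SBATCH -A jth10001           # account I am associated with
#SBATCH -q jth10001gpu        # QOS needed to access partition
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1          # request one GPU on the node
#SBATCH --time=01:00:00       # one-hour time limit

./my_program                  # placeholder for your own executable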

If you have further questions about how to check what resources are available and how to target them, please feel free to contact the Storrs HPC admins by sending an email to hpc@uconn.edu.

...

Troubleshooting problems

Why did my job fail?


There are many reasons a job may fail. A good first step is to use the shist command to check the ExitCode SLURM gave it. The command follows this format: shist --starttime YYYY-MM-DD.
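
For example, to list all of your jobs since the start of November 2022 (the date is just illustrative):

Code Block
shist --starttime 2022-11-01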

Here’s an example of the output for a job that failed immediately with an ExitCode of 1.

Code Block
JobID         Partition        QOS    JobName      User      State    Elapsed   NNodes      NCPUS        NodeList ExitCode                 End 
------------ ---------- ---------- ---------- --------- ---------- ---------- -------- ---------- --------------- -------- -------------------
73088        priority-+ erm12009g+  submit_rx  jdt10005     FAILED   00:00:00        1         32           gtx21      1:0 2022-11-17T10:05:34 

The ExitCode of a job will be a number between 0 and 255. An ExitCode of 0 means that—as far as SLURM is concerned—the job ran and was completed properly. Any non-zero ExitCode will indicate that the job failed. One could then search the ExitCode on Google to investigate what SLURM thinks caused the job to fail. Sometimes this is helpful but not always. Either way, take note of what you find for future reference.
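
If you want to pull up just the exit code for a single job after the fact, the standard SLURM sacct command can report it directly (the job ID below is the one from the example output above):

Code Block
sacct -j 73088 --format=JobID,State,ExitCode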

The next clue to investigate is the NodeList column. Sometimes a job fails because there is something wrong with the compute node the job ran on. We can test this by submitting the job specifically to that same node: if the compute node is the problem (and Storrs HPC staff haven’t fixed it already), the job should fail again with the same ExitCode. Try adding this to the #SBATCH header of your script to target a specific node. Here, we target gtx21 because that was the node listed in the NodeList column above.

Code Block
#SBATCH --nodelist=gtx21

Once you see the job fail multiple times on the same node, you can be reasonably confident that a faulty node is the cause. Please submit a help request to Storrs HPC, including a screenshot of the shist output.

...


If a script cannot run via sbatch, the errors usually look like this:

Code Block
sh: -c: line 0: unexpected EOF while looking for matching `"'
sh: -c: line 1: syntax error: unexpected end of file

This is usually due to the wrong file format: the file is still in Windows format (with CRLF line terminators) rather than Linux format.

Code Block
$ file comsol.sh # with wrong format
comsol.sh: Little-endian UTF-16 Unicode English text, with CRLF line terminators
$ iconv -f utf-16 -t ascii comsol.sh -o comsol.sh # Convert to ASCII first.
$ dos2unix comsol.sh # change CRLF line terminators to Unix format
$ file comsol.sh
comsol.sh: Bourne-Again shell script text executable

...

Guidelines for Citing the HPC

How can I acknowledge the Storrs HPC in our publications?

...