...
Short answer: Faculty can purchase priority access for 5 years by paying the upfront cost of the nodes.

Long answer: High-priority access is available under a “condo model,” in which faculty purchase semi-dedicated nodes that are made available to all users whenever they have unused compute cycles. Under the condo model, faculty researchers fund the capital equipment costs of individual compute nodes, while the university funds the operating costs of running those nodes for five years. Faculty who purchase compute nodes receive access to equivalent resources at a higher priority than other researchers. Faculty can designate others, such as their graduate students and postdoctoral researchers, to receive access at the same priority level. With priority access, computational jobs are moved higher in the queuing system and, in most cases, begin execution within twelve hours, depending on other FairShare factors. A priority user can utilize their resources indefinitely. All access to resources is managed through the cluster’s job scheduler. When a priority user is not using their assigned resources, the nodes are made available to all UConn researchers for general use. Please note that priority users will not be given separate partitions. Instead, they will be given a custom QOS, because the QOS governs access to priority resources (a.k.a. Trackable RESources, or TRES). If you are interested in investing in HPC resources, please fill out the HPC Condo Request form.
...
Short answer: First, you need to connect to UConn’s VPN. Then, you should be able to access the HPC.

Long answer: The HPC cluster only accepts SSH connections from computers on the campus network.
To connect to the HPC when you are off campus, you will first need to connect to the UConn Virtual Private Network (VPN). After connecting to the VPN, you will be able to log in to the HPC as you normally do. For instructions on how to install, set up, and connect your personal device(s) to UConn’s VPN, please go to this webpage.
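Once the VPN connection is active, logging in looks the same as it does on campus. A minimal sketch, using placeholder names (substitute the actual Storrs HPC login address and your own NetID):

```
# Log in over SSH once the VPN is connected.
# "login.storrs.hpc.uconn.edu" is a placeholder hostname; use the address
# given in the Storrs HPC documentation. Replace "jth10001" with your NetID.
ssh jth10001@login.storrs.hpc.uconn.edu
```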
...
The node you are on will normally be shown next to your netID when you log in to the Storrs HPC. For instance, if Jonathan the Husky’s netID were jth10001, his terminal might look like this:
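(A sketch of a typical prompt; the exact format on the cluster may differ.)

```
[jth10001@login6 ~]$
```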
This would tell us that Jonathan is on the node called “login6.” Another way to check what node you are on is to use the hostname command.
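A minimal example (hostname is a standard Linux command, not specific to the Storrs HPC):

```
# Print the name of the node the current shell is running on.
hostname
```

On a login node this would print something like “login6,” matching the name shown in the prompt.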
...
Long wait times for your jobs? Errors about unavailable resources? We’ve been there and understand how frustrating it can be for jobs to take a long time to run. It’s an unfortunate consequence of having such a strong computational research community at UConn. LOTS of incredible research happens here, but it also means that there are LOTS of people competing for resources. There’s no getting around that problem, but there are a couple of steps we can take to increase the odds that our jobs get on ASAP: checking what resources are currently available, and targeting the partitions that have them.
This FAQ will offer guidance on how to do both of those things.

Checking for Available Resources

The first step is to see which partitions currently have free resources; the command below gives a cluster-wide overview.
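A sketch of one way to get that overview with SLURM’s standard sinfo command (the exact command and flags recommended for Storrs HPC may differ):

```
# Summarize every partition: node counts grouped by state (idle, mix, alloc, down, ...)
sinfo

# A more compact view, one line per partition, with node counts shown as
# A/I/O/T = Allocated/Idle/Other/Total
sinfo -s
```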
The output of that command will be fairly long: it provides info on every partition on the cluster (not just class and debug), along with the state of the nodes in each.
The above command gives us an overarching picture of usage on the cluster. From there, we can use a more targeted command to get more information on individual nodes within a partition, like how many cores or GPUs are in use and how many are available. The command below can be pointed at a single partition and reports usage node by node.
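One way to do that with standard sinfo options (the partition name “gpu” here is only an example; swap in the partition you care about):

```
# One line per node in the "gpu" partition, showing CPU usage as
# CPUS(A/I/O/T) = Allocated/Idle/Other/Total, plus the GPUs present and in use.
sinfo -p gpu -N --Format=NodeList,CPUsState,Gres,GresUsed
```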
The column titled “CPUS (A/I/O/T)” tells us how many cores are available: “A” stands for Allocated, “I” stands for Idle, and “T” stands for Total (“O” stands for Other, which you can generally ignore). Since there are 39 cores in the “Idle” column for GPU21, 39 cores on that node are available to use. But all 3 of the GPUs on GPU21 are in use, so we can’t use any GPUs on that node. That gives us an idea of the resources: if I only needed cores and no GPUs, I could target GPU21. In summary, these two commands give us a picture of which partitions have resources available, and then which resources are available on individual nodes within a partition.

Targeting a specific partition

The next step is submitting a job that targets a specific partition. If you’re not sure how to do that, please visit our SLURM Guide, where you will find example submission scripts that target different partitions and architectures. Another key part of targeting a specific partition is knowing which partitions you are allowed to use and which “account” and “QOS” you must use to access them. To check this, you can use the command below.
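A sketch of one way to check with the standard SLURM accounting tool (the exact command recommended on Storrs HPC may differ):

```
# List your SLURM associations: the account(s), partition(s), and QOS(es)
# your netID is allowed to submit under.
sacctmgr show associations user=$USER format=Account,Partition,QOS
```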
This tells me that I have access to 6 partitions. To access the priority-gpu partition, I need to include the three flags shown below in the #SBATCH header of my submission script. This will be different for every individual, so you will have to modify it with the partitions you have access to and the account and QOS associated with your account.
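For example, the header additions might look like this (the partition, account, and QOS names are placeholders; use the values returned for your own account):

```
#SBATCH --partition=priority-gpu   # partition to submit to
#SBATCH --account=your_account     # account associated with your access
#SBATCH --qos=your_qos             # QOS that grants access to the partition
```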
If you have further questions about how to check what resources are available and how to target them, please feel free to contact the Storrs HPC admins by sending an email to hpc@uconn.edu.
...
There are many reasons a job may fail. A good first step is to look at the job’s accounting record, which shows its state, exit code, and the node(s) it ran on; this is especially useful for a job that failed immediately with an error.
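A sketch using the standard SLURM sacct tool (the job ID is a placeholder; the command suggested by Storrs HPC may differ):

```
# Show the accounting record for job 1234567: where it ran, how it ended,
# and what exit code it returned.
sacct -j 1234567 --format=JobID,JobName,Partition,NodeList,State,ExitCode,Elapsed
```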
The State and ExitCode columns tell us whether the job completed, failed, or was killed, and what error code it returned.
The next clue to investigate is the node the job ran on (the NodeList column). If you resubmit the job and it fails again, check whether it landed on the same node.
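One way to test this is to resubmit the job while steering it away from the suspect node. SLURM’s --exclude option does that; the node and script names below are placeholders:

```
# Resubmit the same script, but refuse to run on the suspect node "gpu21".
sbatch --exclude=gpu21 my_job.sh
```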
Once you see that the job has failed multiple times on the same node but does not fail on other nodes, you can feel confident that a faulty node is likely the cause. Please submit a help request to Storrs HPC including a screenshot of the output above.
...
Short answer: If you received this error, your job most likely failed because the amount of memory (RAM) it needed was larger than the default. We can request more memory for your job using the --mem-per-cpu flag.

Long answer: There are several reasons a job may fail from insufficient memory. As of January 2023, the default amount of memory available per CPU is 2 gigabytes. The default can be raised by adding a --mem-per-cpu line to the #SBATCH header of your submission script.
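For example, to request 3 gigabytes per CPU:

```
#SBATCH --mem-per-cpu=3G   # request 3 GB of RAM for every CPU core allocated to the job
```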
Adding this line will tell SLURM to use 3 gigabytes of RAM per CPU you request. That means if we ask for 2 cores (-n 2), then we’ll be allocated 6 gigabytes of RAM total. Please note that the new limit is still enforced: if the job uses more memory than requested, it will fail again, and you will need to raise the value further.
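Putting it together, a submission-script header along these lines (the partition and program names are placeholders) would be allocated 6 gigabytes in total:

```
#!/bin/bash
#SBATCH --partition=general    # placeholder partition name
#SBATCH -n 2                   # request 2 CPU cores
#SBATCH --mem-per-cpu=3G       # 3 GB per core -> 6 GB total for this job

./my_program                   # placeholder for the actual workload
```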
We encourage users to adjust the --mem-per-cpu value to match what their jobs actually need.
My jobs are failing due to Timeout. I do not have access to priority; how can I resume a job after it times out?
Short answer: Once the job is canceled by SLURM due to timeout, it cannot be resumed from that point, because SLURM sets the exit code to “0,” which denotes job completion. As far as SLURM is concerned, the job is now complete, with no state to resume from.

Long answer: One thing you can try is to use the timeout command to stop your program just before SLURM does. You can tell from the return code whether the timeout was reached: timeout sets exit code “124” when it stops the program. If so, you can then requeue the job with scontrol. Try the following. In your submission script, add the following:
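One plausible version of that addition, assuming the standard SLURM requeue-related directives (these specific lines are an assumption, not a Storrs-HPC-specific requirement):

```
#SBATCH --requeue            # allow this job to be requeued
#SBATCH --open-mode=append   # append to the output file on requeue instead of overwriting it
```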
Then, use the timeout command to call your program:
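A sketch of what that could look like in the body of the submission script (the program name and the 11-hour margin are placeholders; pick a duration slightly shorter than your partition’s wall-time limit):

```
# Run the program, but stop it just before SLURM's time limit would.
timeout 11h ./my_program

# timeout exits with code 124 when it had to stop the program.
# In that case, put this job back in the queue so it can run again.
if [ $? -eq 124 ]; then
    scontrol requeue $SLURM_JOB_ID
fi
```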
Note: The duration passed to timeout must be shorter than the job’s wall-time limit, or SLURM will stop the job before timeout does. Disclaimer: This is untested on Storrs HPC; however, it should work as long as everything else is working correctly.
...