...
There are many reasons a job may fail. A good first step is to check the job's accounting records, which show the job's final state, its exit code, and the node(s) it ran on. A job that failed immediately will show a failed state within moments of its start time. The next clue to investigate is the node (or nodes) the job ran on.
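As a sketch, SLURM's standard sacct command can pull these fields for a given job (the job ID below is just a placeholder, and Storrs HPC may also provide its own wrapper for viewing job history):

```bash
# Show the state, exit code, and node list for job 123456 (placeholder job ID)
sacct -j 123456 --format=JobID,JobName,State,ExitCode,NodeList,Start,Elapsed
```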
Once you see that the job has failed multiple times on the same node, you can feel confident that a faulty node is the likely cause. Please submit a help request to Storrs HPC and include a screenshot of the output described above.
My jobs are failing due to insufficient memory, or with an "out of memory" (OOM) error. Why is this happening, and how do I fix this?
Short answer: If you received this error, your job most likely failed because the amount of memory (RAM) it needed was larger than the default. We can request more memory for your job using the --mem-per-cpu flag.

Long answer: There are several reasons a job may fail from insufficient memory. The most likely reason this problem is suddenly affecting you is a change the Storrs HPC admins had to make on HPC 2.0. We'll explain why in a moment, but first, here is the solution (assuming this is the problem). As of January 2023, the default amount of memory available per CPU is 2 gigabytes. You can easily override the default by adding the --mem-per-cpu flag to your #SBATCH header.
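Here is a sketch of what that header could look like (the core count and the 3 GB value are just examples; choose values that fit your job):

```bash
#!/bin/bash
#SBATCH -n 2                # request 2 CPU cores
#SBATCH --mem-per-cpu=3G    # request 3 gigabytes of RAM per core
```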
Adding this line will tell SLURM to use 3 gigabytes of RAM per CPU you request. That means if we ask for 2 cores (-n 2), then we'll be allocated 6 gigabytes of RAM total. Please note that the --mem-per-cpu flag sets memory per CPU core, not the total memory for the whole job.
We encourage users to adjust the --mem-per-cpu value to match what their jobs actually need.

Now, we'll explain why this problem is suddenly affecting our users. On HPC 1.0, the memory flags didn't work, so there wasn't a great way to prevent jobs from failing due to insufficient memory; the problem didn't happen very often, though. The memory flags do work on HPC 2.0, and the initial default settings were too strict. SLURM was only letting one job run per node because it assumed every job needed the entire node's memory. There were nodes with 128 cores and 500 GB of RAM where only 1 core and 1 gigabyte of RAM were in use. Jobs were piling up in the queue and wait times were very long. So, the Storrs HPC admins reset the default memory available per core to 2 gigabytes of RAM. We had to set it this low because some of the node architectures have much less memory than others. Resetting this default allows SLURM to run more than one job on a node, provided there is enough memory on that node, which reduced the job wait times.

This default mem-per-cpu of 2 GB is fine for many users, but not for all. Some of our users run more RAM-intensive programs, for which the default is not sufficient, so it has to be adjusted manually in the #SBATCH header. For more info on fisbatch, srun, and #SBATCH flags, see this link.
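The same flag applies to interactive jobs. As a rough sketch (the core count, memory value, and omission of a partition flag are all illustrative; your job may need additional options), an interactive shell with extra memory could be requested with srun:

```bash
# Interactive shell with 2 cores and 3 GB of RAM per core (values are illustrative)
srun -n 2 --mem-per-cpu=3G --pty bash
```

The fisbatch wrapper mentioned above generally accepts the same resource flags as sbatch, but check the linked documentation for the exact syntax.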
...