SAS is software developed for advanced analytics, multivariate analyses, business intelligence, data management, and predictive analytics. It runs on Windows, IBM mainframes, Unix/Linux, and OpenVMS Alpha. Similar software includes R, Anaconda, and Pentaho.

SAS licensing

The HPC cluster uses the university site license for SAS. More information is available here: https://software.uconn.edu/software/sas/ .

SAS was not designed with high-performance computing environments in mind.

Because of SAS's older library dependencies, it can only run on the cluster in a containerized environment, through single-compute-node job submissions.

Using SAS on the cluster

All jobs on the cluster must be submitted through the SLURM scheduler, using sbatch for batch jobs or srun/fisbatch for interactive jobs. Please read the SLURM Guide for more details. The preferred way to run SAS jobs is with a .sas file. Note that if your job uses many cores or a lot of memory, it is better to reserve a whole compute node (126 cores), as sketched below.
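If you do need a whole node, a minimal sketch of the relevant submission-script header is shown below, using SLURM's --exclusive flag (which asks for the node not to be shared with other jobs):

#!/bin/bash
#SBATCH --nodes=1        # one compute node
#SBATCH --exclusive      # reserve the whole node (all of its cores)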

Loading the Apptainer module

You can view a list of all available Apptainer versions with:

module avail apptainer

To load the Apptainer module:

module load apptainer/1.1.3
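You can confirm the module loaded by listing your currently loaded modules:

module list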

Copy SAS.sif from the main SAS installation directory to your home directory on the HPC:

[netidhere@loginXX ~]$ rsync -a --progress /gpfs/sharedfs1/admin/hpc2.0/apps/sas/buildfiles/SAS.sif /gpfs/homefs1/putnetidhere
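Optionally, once the transfer finishes, verify that the image arrived in your home directory:

ls -lh ~/SAS.sif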

Submitting .sas files with SLURM

Single-core job

To submit a job that uses a single core to run a SAS file myprog.sas, create a script called sasSP.sh:

#!/bin/bash
#SBATCH --ntasks=1               # one core
#SBATCH --output=outputfile.txt  # standard output file
#SBATCH --error=outputfile.txt   # send errors to the same file

module load apptainer
# -H mounts your home directory into the container; -B binds the shared SAS installation
apptainer exec --unsquash -H $HOME:/home -B /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4:/gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4 SAS.sif /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4/SASFoundation/9.4/sas myprog.sas

Then submit the script:

 sbatch sasSP.sh
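If you do not yet have a .sas program to test with, a minimal hypothetical myprog.sas (its contents are purely illustrative; sashelp.class is one of SAS's built-in example datasets) can be created from the shell:

cat > myprog.sas <<'EOF'
/* Print the first five rows of a built-in example dataset */
proc print data=sashelp.class (obs=5);
run;
EOF

In batch mode, SAS typically writes its log to myprog.log and its listing output to myprog.lst in the working directory, alongside the SLURM outputfile.txt.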

Multithread job

To submit a job that uses 10 computational threads on one node, create a submission script sasMP.sh:

#!/bin/bash
#SBATCH --nodes=1                # keep all tasks on a single node
#SBATCH --ntasks=10              # request 10 cores
#SBATCH --output=outputfile.txt  # standard output file
#SBATCH --error=outputfile.txt   # send errors to the same file

module load apptainer
# Same invocation as the single-core case; SLURM allocates the extra cores on the node
apptainer exec --unsquash -H $HOME:/home -B /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4:/gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4 SAS.sif /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4/SASFoundation/9.4/sas myprog.sas

Then submit the script:

 sbatch sasMP.sh
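Note that allocating 10 cores does not by itself make SAS use all of them; SAS controls its own parallelism through the THREADS and CPUCOUNT system options. A sketch of passing these at invocation (assuming the standard SAS command-line option syntax):

apptainer exec --unsquash -H $HOME:/home -B /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4:/gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4 SAS.sif /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4/SASFoundation/9.4/sas -threads -cpucount 10 myprog.sas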

GUI/Interactive SAS use with SLURM

The same SAS.sif Apptainer image is used to launch SAS with GUI functionality on an HPC compute node.

To run an interactive SAS session with GUI functionality, first "ssh -Y" to the cluster from a Linux machine, a macOS machine with X11 enabled, or a Windows machine with an X11 server installed. For more information, see our guide to using X11 on the HPC. Then run the commands below.
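For example, from your local machine (the login hostname below is a placeholder; use the cluster's actual login address):

ssh -Y netidhere@<cluster-login-address>

Once connected, a quick way to confirm X11 forwarding works is to run a small X program such as xeyes or xclock, if installed; a window should appear on your local screen.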

To open an interactive SAS window with 10 cores available, do the following:

[netidhere@login6 ~]$ srun --x11 -N 1 -n 10 --pty bash
srun: job 4261315 queued and waiting for resources
srun: job 4261315 has been allocated resources
[netidhere@cn528 ~]$ module load apptainer
[netidhere@cn528 ~]$ apptainer exec --unsquash -H $HOME:/home -B /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4:/gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4 SAS.sif /gpfs/sharedfs1/admin/hpc2.0/apps/sas/9.4/SASFoundation/9.4/sas

Please DO NOT FORGET to EXIT from the node so that other users can use it. Exit out of all SAS windows and then type exit in the terminal to end your srun session.

[netidhere@cn528 ~]$ exit

Note that any interruption to your network connection will cause the interactive job to crash irrecoverably.
