Using GUIs on the HPC

Intro to GUIs with X11

This page is designed to get you up to speed using GUIs on the HPC, and is especially important for proprietary software applications that have no command-line interface and only provide a GUI. If you’re not sure what a GUI is, no worries! You can check out our glossary here.

We have set up our HPC cluster to run applications with GUIs (e.g., MATLAB, Jupyter Notebooks, XMGrace) using the X11 windowing system. X11 is a windowing system for bitmap displays, common on Unix-like operating systems. It is essentially the middle-man between your personal computer and the HPC, letting you interact with a GUI locally while the program runs on the HPC.

As an important aside, we ask users not to run any sort of GUI on the HPC login nodes, since graphical applications consume significant resources and can negatively impact other users sharing that login node. Instead, please run programs with GUIs on compute nodes, which prevents any negative impact on fellow users.

The following sections will show the multiple options to connect to HPC with X11 functionality depending on whether you use a Windows (PC), Apple (Mac), or Linux operating system.

How to Install and Set Up X11

Windows X11 setup

  • From Windows: Install MobaXterm. This is a complete suite that includes its own X server, which allows X-Forwarding over SSH by default.

Once MobaXterm is installed, SSH sessions can be set up much like PuTTY's SSH terminal sessions.

Click the Session icon at the top left and MobaXterm will open the new session settings window.

Enter hpc2.storrs.hpc.uconn.edu or login.storrs.hpc.uconn.edu in the Remote Host field, click the checkbox next to Specify username, enter your NetID in the username field, and leave the SSH port set to 22.

If an error occurs when connecting to the login.storrs.hpc.uconn.edu host, you can target one of the newer login nodes individually instead.

In the Remote Host field, the login nodes can be specified individually with their own host names:

Login 4: login4.storrs.hpc.uconn.edu
Login 5: login5.storrs.hpc.uconn.edu
Login 6: login6.storrs.hpc.uconn.edu
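The same nodes can also be targeted directly from a terminal-based SSH client with X11 forwarding enabled (MobaXterm ships a local terminal as well); as a sketch, using Login 4:

ssh -X netidhere@login4.storrs.hpc.uconn.edu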

Once the session is set up, press the OK button and the new SSH session will be added to your session list under the Star icon on the left or the drop-down Session Star button at the top of the MobaXterm GUI.

Double-click the new SSH session and MobaXterm will connect to one of the login nodes, prompting for your NetID password when logging into the HPC.

macOS X11 setup

  • From Mac: Use a free X server such as XQuartz. With that installed, open the built-in Terminal and run one of the following commands:

    • ssh -X netidhere@login.storrs.hpc.uconn.edu

    • OR:
      ssh -Y netidhere@login.storrs.hpc.uconn.edu

Here -X enables X11 forwarding subject to the X11 SECURITY extension restrictions, while -Y enables trusted X11 forwarding; -Y can help if applications fail with X11 authorization errors under -X.

macOS local upgrade issues

Sometimes a macOS upgrade changes the default accepted SSH key algorithms, preventing some users from logging into the HPC from their local Mac.

To resolve the SSH key algorithm problem, add the following config options to your local ~/.ssh/config on the Mac:


HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa


Once the two lines above are added to the local ~/.ssh/config on the Mac, the SSH connection should succeed.
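To keep these options from applying to every SSH connection, they can instead be scoped to the cluster inside a host entry. A minimal sketch (the alias hpc and the placeholder netidhere are illustrative):

Host hpc
    HostName login.storrs.hpc.uconn.edu
    User netidhere
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
    ForwardX11 yes

With this entry in place, ssh hpc connects with X11 forwarding enabled.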

Linux X11 setup

  • From Linux: If you have a GUI login on your local Linux machine with X11 installed, then you already have all the components necessary for X-Forwarding. Just use the terminal commands from the Mac example above.

The local Linux Bash terminal should have X11 enabled by default; the following command establishes an X-forwarded SSH connection from a Linux machine:

ssh -X netidhere@login.storrs.hpc.uconn.edu
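Once connected, a quick way to confirm that X-Forwarding is active is to check that the DISPLAY environment variable is set; if the command prints nothing, forwarding was not established:

echo $DISPLAY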

Spawning an interactive Slurm job with X11 functionality

Starting a session

  1. Launch MobaXterm or one of the other options above, depending on your local operating system, and connect to the HPC.

  2. If this is your first time logging in, click "Yes" when the message "Host key verification failed: The server is unknown. Do you trust the host key?" appears.

  3. Inside the terminal connected to the HPC with X11 enabled, launch an interactive Slurm srun job to reserve your compute nodes; you can then run your interactive job.

  4. The following example shows how to spawn a 1-core interactive Slurm srun job with X11 enabled and 10 GB of memory.

    srun --x11 -n 1 --mem=10G -p general -C epyc128 --pty bash

    Once a node is assigned to the srun job, HPC modules can be loaded within the interactive session and the appropriate GUI launched, depending on the software being used.
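    For example, to launch one of the GUI applications mentioned above from within the interactive session (the exact module name is an assumption; run module avail to see what is installed):

    module load matlab
    matlab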

Fisbatch alternative to srun

Fisbatch works similarly to srun mentioned above.

Fisbatch is older, but it allows users to disconnect from and later reconnect to a running job, rather than staying attached to a single live session.

To assign an AMD EPYC node to an interactive fisbatch job:

fisbatch -N 1 -n 126 -p general --constraint='epyc128'

Wait for a node to be assigned to the fisbatch job. Once assigned and launched, fisbatch acts similarly to srun, and commands can be typed into the terminal:

module load softwarenamehere

To disconnect from the fisbatch screen session that spawns on a compute node, use the following keystrokes:

Ctrl-a then Ctrl-d

The screen session that fisbatch spawns on the compute node should detach and the fisbatch job will continue running.

To confirm that the job is still running, the following SLURM command can be entered:

shist
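If you prefer a standard Slurm command, squeue also lists your running jobs (replace netidhere with your NetID):

squeue -u netidhere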

From here, you are back on the login node, where you can submit a new fisbatch job or launch another Schrödinger calculation and detach from the new job as well.

To reattach to an existing fisbatch job, the companion fisattach command can be entered with the job ID reported by shist (jobidhere is a placeholder):

fisattach jobidhere

The fisbatch screen session should reattach, and the session for that specific job should still be running.

You can continue detaching from multiple fisbatch jobs with the keystrokes above.

However, fisbatch is limited to the time limit associated with each partition.

Terminating a session

To end your Slurm srun job session or fisbatch job session, type exit; the resources will be freed up for other users.

Once the calculations in a given fisbatch job finish, it is recommended to exit the job(s) to free up the node(s) for other users.

Fisbatch will continue running even if a calculation is finished unless the user types the word exit to end the session.
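If you have already detached and would rather cancel a fisbatch job without reattaching, the standard Slurm scancel command works as well (jobidhere is a placeholder for the job ID from shist):

scancel jobidhere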

Conclusion

With X-Forwarding established you can open GUIs for any visualization application you need. If you need help with a specific application, let us know the details and we will assist with getting it working.