...
Click the Session Icon at the top left and MobaXterm will open up the new session settings window.
...
Enter hpc2.storrs.hpc.uconn.edu in the Remote Host field, click the check box next to Specify username, enter your NetID in the username field, and leave the SSH port set to 22.
If an error occurs when trying to connect to the login.storrs.hpc.uconn.edu host, or there is a problem with one of the login nodes, you can connect to a different login node by specifying it directly.
In the Remote Host field, the new login nodes can be specified individually with their own host name.
...
From Mac: Use a free X server such as XQuartz. With XQuartz installed, open the built-in Terminal and use the command:
ssh -X netidhere@login.storrs.hpc.uconn.edu
OR:
ssh -Y netidhere@login.storrs.hpc.uconn.edu
Alternatively, run the following commands:
open -a XQuartz
export DISPLAY=:0
Then, in the same terminal, ssh into the cluster with the -Y flag:
ssh -Y YourNetID@hpc2.storrs.hpc.uconn.edu
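Once connected, a quick way to confirm that X forwarding is working is to launch a simple X client from the login node (this assumes a basic client such as xclock is installed there):
xclock
If a small clock window opens on your local desktop, X forwarding is configured correctly.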
macOS local upgrade issues
...
Launch MobaXterm, or one of the other options above depending on your local operating system, and connect to the HPC cluster.
If this is your first time logging in, click "Yes" when a message appears asking "The server is unknown. Do you trust the host key?"
Inside the terminal connected to the HPC cluster with X11 enabled, launch an interactive Slurm srun job to reserve compute resources, and then run your application within that job.
The following example shows how to spawn a 1-core interactive Slurm srun job with X11 enabled and 10 GB of memory.
srun --x11 -n 1 --mem=10G -p general -C epyc128 --pty bash
Once a node is assigned to the srun job, HPC modules can be loaded within the interactive session and the necessary GUI can be called, depending on the software being used.
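For example, the interactive session might look like the following (a minimal sketch; the module and GUI command names are placeholders, so substitute the software you actually need):
module load softwarenamehere
softwareguihere &
The trailing & keeps the terminal prompt available inside the job while the GUI window is open.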
Fisbatch alternative to srun
Fisbatch works similarly to srun mentioned above.
Fisbatch is older, but it allows users to disconnect from and reconnect to a job, rather than being limited to one interactive job running in the terminal at a time.
To assign an AMD EPYC node to an interactive fisbatch job:
fisbatch -N 1 -n 126 -p general --constraint='epyc128'
Wait for a node to be assigned to the fisbatch job; once it is assigned and launched, fisbatch behaves similarly to srun, and commands can be typed in the terminal:
module load softwarenamehere
...
used
...
“Ctrl-a then Ctrl-d”
The screen session that fisbatch spawns on the compute node should detach and the fisbatch job will continue running.
To confirm that the job is still running, the following SLURM command can be entered:
shist
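If shist is not available in your environment, the standard Slurm command squeue can also be used to confirm that your jobs are still running:
squeue -u $USER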
From here, you are back on the login node, and you can submit a new fisbatch job or launch another Schrodinger calculation and detach from that job as well.
To reattach to an existing fisbatch job, the following command can be entered:
fisattach jobidhere
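For example, with a hypothetical job ID of 1234567 reported by shist, reattaching would look like:
fisattach 1234567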
The fisbatch screen session should reattach, and anything running inside the session for that job should still be in progress.
You can continue detaching from multiple fisbatch jobs with the keystrokes above.
However, fisbatch jobs are still limited by the time limit associated with each partition.
Terminating a session
To end your Slurm srun job session or fisbatch job session, you can type exit to end the job, and the resources will be freed up for other users to use.
When the calculations in a given fisbatch job finish, it is recommended that you exit the fisbatch job(s) to free up the resources for other users to run on the node(s).
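In other words, once your work in the interactive session is done, a single command inside the srun or fisbatch session ends the job and releases the node(s):
exit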
...
Conclusion
With X forwarding established, you can open the GUI for any visualization application you need. If you need help with a specific application, let us know the details and we will assist with getting it working.