
To run CryoSPARC jobs, a user must first log in to the HPC cluster as the cryosparc_user account.
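For example, assuming the cluster's login host is hpc.example.edu (a hypothetical hostname; substitute your site's actual login address):

Info

ssh cryosparc_user@hpc.example.edu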

📘 Submitting a CryoSPARC job

Submit an interactive SLURM job:

Info

srun --partition=general-gpu -N 1 -n 62 --gres=gpu:1 --time=12:00:00 --pty bash
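The flags above request one node (-N 1), 62 tasks (-n 62), one GPU (--gres=gpu:1), and a 12-hour walltime on the general-gpu partition. These values can be adjusted to fit the workload; for example, a sketch requesting two GPUs and a 24-hour walltime (illustrative values, not site requirements):

Info

srun --partition=general-gpu -N 1 -n 62 --gres=gpu:2 --time=24:00:00 --pty bash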

Once a node is assigned to the interactive SLURM srun job, start the CryoSPARC master process with the following command:

Info

cryosparcm start

CryoSPARC will run the master process on the GPU node that was assigned to the SLURM job.
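To confirm that the master process came up cleanly, its status can be checked:

Info

cryosparcm status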

Once the master process is running, the assigned GPU node must be enabled as a CryoSPARC worker node.

The following commands register the currently assigned GPU node as a worker node and add it to the CryoSPARC default job submission lane:

Info

cd /home/cryosparc_user/cryosparc/cryosparc2_worker/bin

./cryosparcw connect --worker $(hostname) --master $(hostname) --port 39000 --nossd --lane default --newlane
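To verify that the node was registered and added to the default lane, the scheduler targets can be listed with the CryoSPARC command-line interface:

Info

cryosparcm cli "get_scheduler_targets()"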

📘 Launching the CryoSPARC login GUI from the assigned GPU node

Once the GPU node has been registered as a CryoSPARC worker node, a connection to the CryoSPARC GUI can be established.

At this time, the HTTPS domain name for the GPU node will not work; the connection must be made using the IP address of the GPU node in a web browser.

Open a local web browser and point it to the HTTP link for the assigned GPU node's IP address. The CryoSPARC instance listens on port 39000, so the link will look like the following:

Info

http://IPaddressOfGPUnode:39000/

To view the IP address of the assigned GPU node, the following command can be used:

Info

ip a

The IP address of the assigned GPU node will begin with 137.99.x.x.
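As a convenience, the address can be filtered directly out of the ip a output (a minimal sketch; the grep pattern assumes the 137.99 prefix noted above):

Info

ip -4 a | grep "inet 137.99"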

The CryoSPARC login screen should appear.


Log in using your own CryoSPARC user account details.

Launch the CryoSPARC job to the GPU node that was assigned to the interactive SLURM srun job in the previous step.

Info

If there is a network interruption, the CryoSPARC master process on the GPU node will crash. A stale CryoSPARC socket file will be left in /tmp (typically named like /tmp/cryosparc-supervisor-*.sock); it must be removed before CryoSPARC can continue running on a node that experienced the brief interruption.
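A recovery sequence might look like the following (a sketch, assuming the stale socket matches /tmp/cryosparc-supervisor-*.sock; confirm the exact filename in /tmp before removing anything):

Info

cryosparcm stop
rm /tmp/cryosparc-supervisor-*.sock
cryosparcm start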
