...

To run CryoSPARC jobs, a user must connect to HPC and log in as the cryosparc_user account.

Contact HPC to be granted access to the cryosparc_user account.
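Once access is granted, the login might look like the following (the hostname `hpc.example.edu` is a placeholder; use your site's actual HPC login hostname):

```shell
# Log in to the HPC cluster as the shared cryosparc_user account
ssh cryosparc_user@hpc.example.edu
```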

📘 Submitting a CryoSPARC job

...

Info

cd /home/cryosparc_user/cryosparc/cryosparc2_worker/bin

./cryosparcw connect --worker $(hostname) --master $(hostname) --port 39000 --nossd --lane default --newlane
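After connecting the worker, the GPUs visible to CryoSPARC on the node can be checked from the same worker bin directory (`gpulist` is a standard cryosparcw subcommand):

```shell
# List the GPUs the CryoSPARC worker has detected on this node
./cryosparcw gpulist
```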

Info

If you get an error saying the installation is incomplete, stop the master CryoSPARC instance and then run the worker update commands as follows:

cryosparcm stop

cd /home/cryosparc_user/cryosparc/cryosparc2_worker

rsync -a --progress /home/cryosparc_user/cryosparc/cryosparc2_master/cryosparc_worker.tar.gz .

bin/cryosparcw update --override

Once the worker is updated, restart the CryoSPARC master process and the job should run successfully.
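The restart can be done with the standard CryoSPARC master commands:

```shell
# Restart the CryoSPARC master process
cryosparcm start

# Verify that all master services are running
cryosparcm status
```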

📘 Launching CryoSPARC login GUI from assigned GPU node

...

The login screen for CryoSPARC should show up looking like the following:

...

Log in to the CryoSPARC web GUI using your own CryoSPARC account email address and password.

Once logged in, the CryoSPARC web GUI should load looking like the following:

...

The process of launching and submitting CryoSPARC job(s) should be the same.

Launch the CryoSPARC job on the GPU node that was assigned to the interactive SLURM srun job in the previous step.
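For reference, an interactive GPU allocation of the kind described above can be requested with srun. The partition name, GPU count, and resource limits below are placeholders; substitute the values appropriate for your cluster:

```shell
# Request an interactive shell on a GPU node (adjust partition/resources to your site)
srun --partition=gpu --gres=gpu:1 --cpus-per-task=4 --mem=32G --time=08:00:00 --pty bash
```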

Info

If there is a network interruption, the CryoSPARC master process on the GPU node will crash. A stale CryoSPARC .sock file is left under /tmp/ and must be removed before CryoSPARC can be restarted on the node that had the brief interruption.
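Assuming the stale socket follows CryoSPARC's usual supervisor socket naming (check /tmp/ on the affected node first to confirm the exact filename), it can be removed like this:

```shell
# Inspect /tmp/ for the leftover CryoSPARC socket file
ls /tmp/cryosparc*.sock

# Remove the stale socket so the master can be restarted on this node
rm /tmp/cryosparc*.sock
```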

📘 Cleaning up CryoSPARC environment after job finishes

Once the running CryoSPARC job finishes, the following steps are recommended to exit CryoSPARC cleanly and free up the assigned GPU node so other users can submit jobs to it:

Panel
bgColor#FFEBE6
  1. Log off of the CryoSPARC GUI and close the browser tab.

  2. Go back to the terminal that is logged into HPC with the running interactive srun job and stop the master CryoSPARC process using the following command: cryosparcm stop

  3. Exit out of the interactive SLURM job to free up the assigned GPU node.

  4. When you are finished with your CryoSPARC calculations, it is recommended to log off of the cryosparc_user HPC account.
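The terminal side of the cleanup steps above can be sketched as follows (run in the shell holding the interactive srun session):

```shell
# Stop the CryoSPARC master process running inside the srun session
cryosparcm stop

# Exit the interactive SLURM job, releasing the GPU node
exit

# Log out of the cryosparc_user HPC account
exit
```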