CryoSPARC

What is CryoSPARC?

CryoSPARC is a state-of-the-art scientific software platform for cryo-electron microscopy (cryo-EM) used in research and drug discovery pipelines.


Local cryosparc_user account on HPC

CryoSPARC requires a local user account named cryosparc_user, which is used to manage CryoSPARC resources and job allocations.

To run CryoSPARC jobs, a user must log in to the HPC cluster as the cryosparc_user account.

Contact HPC to be granted access to the cryosparc_user account.

Submitting a CryoSPARC job

Submit an interactive SLURM job:

srun --partition=general-gpu -N 1 -n 62 --gres=gpu:1 --time=12:00:00 --pty bash

Once a node is assigned to the interactive SLURM srun job, start the CryoSPARC master process with the following command:

cryosparcm start

CryoSPARC should run the master process on the GPU node assigned to the SLURM job.

The next step is to tell CryoSPARC to use the currently assigned GPU node as a worker node and to add it to the default CryoSPARC job submission lane.
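A minimal sketch of this worker-connect step is shown below. It assumes the cryosparcw script is on the PATH, the master is running on the same GPU node on the default port 39000, and no SSD cache is configured; the --nossd flag, lane name, and --update flag are assumptions that may need adjusting for your installation.

# Register the current GPU node as a CryoSPARC worker on the default lane
# (sketch only; adjust the SSD and lane options for your site)
cryosparcw connect --worker $(hostname) --master $(hostname) --port 39000 --nossd --lane default --update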

Once the worker has been updated, restart the main CryoSPARC master process; the job should then run successfully.
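Assuming the standard CryoSPARC command-line tools, the master process can be restarted with:

cryosparcm restart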

Launching CryoSPARC login GUI from assigned GPU node

Once a GPU node has been assigned as a CryoSPARC worker node, the connection to the CryoSPARC GUI can be established.

Open a local web browser and point it to the HTTP link for the assigned GPU node.

The CryoSPARC instance listens on port 39000.

The link will look like the following:
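http://<GPU-node-IP-address>:39000

Here <GPU-node-IP-address> is a placeholder for the IP address of the assigned GPU node, and 39000 is the default CryoSPARC port noted above.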

To view the IP address of the assigned GPU node, the following command can be used:
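hostname -I

This is the standard Linux hostname utility; the -I option prints the node's IP address(es). Your cluster may provide a site-specific alternative.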

The CryoSPARC login screen should then appear.


Log in to the CryoSPARC web GUI using your CryoSPARC account email address and password.

Once logged in, the CryoSPARC web GUI should load.


The process of launching and submitting CryoSPARC jobs within the web GUI is the same as on any other CryoSPARC instance.

Launch the CryoSPARC job on the GPU node that was assigned to the interactive SLURM srun job in the previous step.

Cleaning up CryoSPARC environment after job finishes

Once the running CryoSPARC job finishes, it is recommended to exit CryoSPARC cleanly and free up the assigned GPU node so that other users can submit jobs to it.
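A minimal cleanup sketch, assuming the standard cryosparcm command-line tool and that the interactive shell from the srun job is still open:

# Stop the CryoSPARC master (and worker) processes on the node
cryosparcm stop

# Exit the interactive shell to release the SLURM allocation and free the GPU node
exit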