COMSOL Guide

This how-to article explains how to load and run COMSOL on the HPC cluster.

 Instructions

COMSOL with MPI

To run COMSOL under SLURM with MPI, create the following comsol.sh batch script:

#!/bin/bash
# For general partition
#SBATCH -N 1
#SBATCH -p general
#SBATCH --cpus-per-task=126
#SBATCH --ntasks-per-node=1
#SBATCH --output comsol.log

# Clear the output log
echo -n > comsol.log

# Add debugging information.
scontrol show job $SLURM_JOBID

# Details of input and output files.
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph

# Load our comsol module.
module purge
module load comsol/6.2

# Run comsol.
comsol batch \
    -clustersimple \
    -np $SLURM_CPUS_PER_TASK \
    -mpifabrics shm:ofa \
    -inputfile $INPUTFILE \
    -outputfile $OUTPUTFILE \
    -tmpdir temp

 

Then submit the script by issuing:

$ sbatch comsol.sh

COMSOL with GPU

$ ssh <NetID>@login.storrs.hpc.uconn.edu
$ module load comsol/6.2

To run COMSOL on a GPU node with SLURM, create a comsol.sh script along the lines of the sketch below.
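The following is a minimal, untested sketch rather than a site-verified script: the gpu partition name, the --gres=gpu:1 request, and the core count are assumptions to adjust for your cluster, and GPU acceleration itself is typically enabled in the model's solver settings rather than by an extra comsol batch option.

#!/bin/bash
# Assumed GPU partition, GPU request, and core count; adjust for your cluster.
#SBATCH -N 1
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=16
#SBATCH --ntasks-per-node=1
#SBATCH --output comsol.log

# Clear the output log
echo -n > comsol.log

# Add debugging information.
scontrol show job $SLURM_JOBID

# Details of input and output files.
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph

# Load our comsol module.
module purge
module load comsol/6.2

# Run comsol on a single GPU node. GPU acceleration is configured
# inside the model's solver settings, so no extra batch flag is used here.
comsol batch \
    -np $SLURM_CPUS_PER_TASK \
    -inputfile $INPUTFILE \
    -outputfile $OUTPUTFILE \
    -tmpdir temp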

Then submit the script by issuing:
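$ sbatch comsol.sh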

COMSOL on a single node

Some COMSOL solvers, such as PARDISO, do not make use of MPI. To run these on a single node, omit -clustersimple and specify only -np, as shown in the sketch below.
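The following single-node script is a sketch adapted from the MPI example above; it reuses the same placeholder partition, core count, and file paths, and simply drops the -clustersimple and -mpifabrics options.

#!/bin/bash
# For general partition
#SBATCH -N 1
#SBATCH -p general
#SBATCH --cpus-per-task=126
#SBATCH --ntasks-per-node=1
#SBATCH --output comsol.log

# Clear the output log
echo -n > comsol.log

# Add debugging information.
scontrol show job $SLURM_JOBID

# Details of input and output files.
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph

# Load our comsol module.
module purge
module load comsol/6.2

# Run comsol on a single node (no -clustersimple).
comsol batch \
    -np $SLURM_CPUS_PER_TASK \
    -inputfile $INPUTFILE \
    -outputfile $OUTPUTFILE \
    -tmpdir temp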

Then submit the script by issuing:
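$ sbatch comsol.sh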

 


 Related articles

SLURM Guide
Ansys Fluent Guide
CryoSPARC
SLURM & General HPC Usage
SLURM Job Arrays
MATLAB Guide