COVID-19 Research Partition
Iowa State University has acquired equipment under a grant specifically for COVID-19 research.
The equipment consists of:
- 12 Nova nodes with 384 GB of memory each (wide nodes)
- 250 TB of usable storage
These nodes are being made available to researchers for priority access via a new SLURM partition; access requires approval by the HPC committee. To request access, please apply here.
The nodes are also available to the Nova community via the "scavenger" partition. This is a new partition with no charge or penalty for use, but jobs running in it are killed immediately whenever their resources are needed by jobs in the covid19 partition. This is intended to let the nodes see more usage while preserving their stated purpose.
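For example, a preemptible job can be sent to the scavenger partition by naming it with "-p" at submission time (the script name below is a placeholder for your own batch script):

```shell
# Submit a batch script to the scavenger partition.
# Jobs here run at no charge but may be killed at any time
# if covid19-partition jobs need the resources.
sbatch -p scavenger my_job.sh
```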
To submit jobs to the covid19 partition, specify "-p covid19" in the job script or on the salloc/sbatch/srun command line. For example, the following command requests 1 core in the covid19 partition for 1 hour:
salloc -N 1 -n 1 -p covid19 -t 1:00:00
Note: If your group has also purchased nodes on the Nova cluster, you need to specify the appropriate Slurm account with the "-A" option (detailed instructions should have been sent to you when you were added to the additional Slurm account).
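For instance, the interactive request above would gain an "-A" flag; the account name below is a placeholder for whatever name your group was actually assigned:

```shell
# "my_covid_account" is a hypothetical account name --
# substitute the Slurm account your group was given.
salloc -N 1 -n 1 -p covid19 -A my_covid_account -t 1:00:00
```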
Example of a job script that requests 4 cores on one node in covid19 partition for 1 hour:
#!/bin/bash
#SBATCH --time=1:00:00         # walltime limit (HH:MM:SS)
#SBATCH --nodes=1              # number of nodes
#SBATCH --ntasks-per-node=4    # 4 processor core(s) per node
#SBATCH --partition=covid19    # covid19 partition

# your commands here
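A script like the one above is submitted with sbatch; the filename below is arbitrary:

```shell
# Submit the batch script and print the assigned job ID.
sbatch covid19_job.sh
```

After submission, the standard Slurm commands apply, e.g. squeue to check the job's state.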