HPC Class
Table of Contents
- Introduction
- Requesting Access to Class Partitions
- Hardware Overview
- Accessing HPC Class Partitions
- Launching Jobs in Class Partitions
- Class Storage
Introduction
HPC Class partitions are Slurm partitions on ISU HPC's Nova cluster that are dedicated to education and class use.
Requesting Access to Class Partitions
HPC Class partitions are available for classes. Instructors of record can request access to Class partitions for themselves and the students on their class lists via this form.
Hardware Overview
| Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Local $TMPDIR Disk | GPU | Interconnect |
|---|---|---|---|---|---|---|
| 28 | Two 2.6 GHz 32-core Intel 8358 CPUs | 64 | 500 GB available | 1.6 TB NVMe | N/A | 100 Gb/s IB |
| 3 | Two 2.65 GHz 24-core AMD Epyc 7413 CPUs | 48 | 500 GB available | 960 GB NVMe | Eight Nvidia A100 80GB GPUs | 100 Gb/s IB |
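The local NVMe disk listed above is exposed inside jobs through $TMPDIR, as the column name suggests; treat the exact variable and the staging paths below as assumptions. A minimal sketch of staging data onto the fast local disk during a job:

```bash
# Inside a job script: stage input onto the node-local NVMe disk, run there,
# and copy results back before the job ends. Paths are hypothetical placeholders.
cp /work/classtmp/$USER/input.dat "$TMPDIR"/
cd "$TMPDIR"
./my_program input.dat > output.dat
cp output.dat /work/classtmp/$USER/
```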
Accessing HPC Class Partitions on Nova
The HPC Class partitions are hosted on ISU HPC's Nova cluster. To log in to Nova, follow the directions in the Nova Access and Login Guide.
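For reference, logging in is a single ssh command from a terminal; the hostname below is an assumption, so defer to the Nova Access and Login Guide if it differs:

```bash
# Connect to a Nova login node (replace netid with your ISU NetID).
# Hostname is an assumption; confirm it in the Nova Access and Login Guide.
ssh netid@nova.its.iastate.edu
```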
The following partitions are available for class use:

| Partition Name | Max Time (Hrs:Min:Sec) | Nodes in Partition (Variable) |
|---|---|---|
| class-short | 00:15:00 | 22 |
| class-long | 12:00:00 | 8 |
| class-gpu | 06:00:00 | 2 |
| class-gpu-short | 00:15:00 | 1 |
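The limits shown above can also be queried directly from Slurm; for example, scontrol prints the configured time limit, node count, and state of a partition:

```bash
# Show the full Slurm configuration of one class partition,
# including MaxTime, TotalNodes, and the current state.
scontrol show partition class-short
```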
Launching Jobs in Class Partitions
To launch a job in a Class partition, specify the partition with the -p option and the class account with the -A option. For example, a user in the class ABCD495 in Fall 2022, whose associated Slurm account is f2022.ABCD.495.1, could run the salloc command:
```bash
salloc -p class-short -N 1 -n 4 -t 15 -A f2022.ABCD.495.1
```
Class instructors and TAs will need to specify account class-faculty instead:
```bash
salloc -p class-short -N 1 -n 4 -t 15 -A class-faculty
```
An sbatch script to request the same allocation would be as follows:
```bash
#!/bin/bash
# Copy/paste this job script into a text file and submit with the command:
#   sbatch thefilename
# Job standard output will go to the file slurm-%j.out (where %j is the job ID).
#SBATCH --time=00:15:00              # walltime limit (HH:MM:SS)
#SBATCH --nodes=1                    # number of nodes
#SBATCH --ntasks-per-node=4          # 4 processor core(s) per node
#SBATCH --partition=class-short      # class node(s)
#SBATCH --account=f2022.ABCD.495.1   # class account to use
# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
```
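For jobs in the GPU partitions, the script additionally has to request GPUs. The sketch below is a variant of the script above for the class-gpu partition; the --gres=gpu:1 line uses the generic Slurm GPU request syntax and is an assumption here, since Nova may expect a specific GPU type string.

```bash
#!/bin/bash
# Hypothetical class-gpu variant of the script above. The --gres line uses the
# generic Slurm syntax; a site-specific GPU type string may be required on Nova.
#SBATCH --time=01:00:00              # walltime limit, within the 6-hour class-gpu cap
#SBATCH --nodes=1                    # number of nodes
#SBATCH --ntasks-per-node=4          # 4 processor core(s) per node
#SBATCH --gres=gpu:1                 # request one GPU on the node
#SBATCH --partition=class-gpu        # GPU class partition
#SBATCH --account=f2022.ABCD.495.1   # class account to use
# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
```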
To view current Class partitions and their status, a user could run the sinfo command and search for 'class':
```bash
sinfo | grep class
```
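Alternatively, sinfo can be limited to the class partitions directly with the -p option:

```bash
# List only the class partitions, their time limits, and their node states.
sinfo -p class-short,class-long,class-gpu,class-gpu-short
```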
Note: Users enrolled in more than one class that uses the cluster, and users who encounter the error "Invalid account or account/partition combination specified", must specify the relevant class account with the -A option.
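If you are not sure which class account your user is associated with, Slurm's accounting database can list your associations. A minimal sketch using sacctmgr (assuming regular users are allowed to query their own associations on Nova):

```bash
# List the Slurm accounts your user is associated with; the value to pass
# to -A should appear in the Account column.
sacctmgr show associations where user=$USER format=account%30,user%15
```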
For more information on Slurm commands, see the Managing jobs using Slurm Workload Manager guide. For sample job scripts to use with the sbatch command, see the Slurm script generator for Nova.
Class Storage
There are three class-specific storage locations on Nova:
- /work/class-faculty - Location for instructors to store course documents and files.
- /work/classtmp - Temporary storage location for job data, available to all students and instructors. Files in /work/classtmp are deleted at the end of the semester; move anything you want to keep out of /work/classtmp before then (see the example after this list).
- /work/class-old-home - Read-only export of the old home directories from the standalone HPC-Class cluster, which was replaced by the HPC Class partitions on Nova in August 2022. If you were a recent user of the old Class cluster and had files in your home directory there, you can recover them from this location.
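As a sketch of both clean-up tasks described above (the per-user subdirectory names below are hypothetical; substitute the actual paths where your files live):

```bash
# Copy job data you want to keep out of /work/classtmp before the end of the
# semester; the destination here is a directory in your home directory.
rsync -av /work/classtmp/$USER/my_results/ ~/my_results/

# Recover files from the read-only export of the old HPC-Class home directories.
# The export is read-only, so copy the files rather than trying to move them.
rsync -av /work/class-old-home/$USER/ ~/old-class-home/
```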