HPC Class

Table of Contents

  • Introduction
  • Requesting Access to Class Partitions
  • Hardware Overview
  • Accessing HPC Class Partitions on Nova
  • Launching Jobs in Class Partitions
  • Class Storage
  • Additional Instructions

Introduction

The HPC Class instruction partition is a Slurm partition on ISU HPC's Nova cluster that is dedicated to education and classroom use. 

For instructions on how to log in to Nova, please refer to https://www.hpc.iastate.edu/guides/nova/access-and-login.
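
For reference, a typical login from a terminal looks like the following (the hostname is an assumption here; confirm the current hostname and any two-factor requirements in the access guide above):

ssh <ISU NetID>@nova.its.iastate.edu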

Return to Table of Contents

Requesting Access to Class Partitions

The instruction partition is available for classes. Instructors of record can request access to the instruction partition for themselves and the students on their class lists via this form.

Return to Table of Contents

Hardware Overview

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Local $TMPDIR Disk | GPU | Interconnect
8 | Two 2.6 GHz 32-core Intel 8358 CPUs | 64 | 500 GB available | 1.6 TB NVMe | N/A | 100 Gb/s IB
3 | Two 2.65 GHz 24-core AMD Epyc 7413 CPUs | 48 | 500 GB available | 960 GB NVMe | Eight Nvidia A100 80GB GPUs | 100 Gb/s IB
8 | Two 2.3 GHz 18-core Intel 6140 CPUs | 36 | 189 GB available | 1.5 TB NVMe | N/A | 100 Gb/s IB
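
To use the A100 GPUs listed above, a GPU must also be requested as part of the job. As a sketch (the exact GPU request syntax configured on Nova is an assumption here), Slurm's generic --gpus option would look like the following, with the class account filled in as described in the Launching Jobs in Class Partitions section below:

salloc -p instruction -N 1 -n 4 -t 15 --gpus=1 -A <class account>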

Return to Table of Contents

Accessing HPC Class Partitions on Nova

The instruction partition is a Slurm partition on ISU HPC's Nova cluster. Follow the directions in the Nova Access and Login Guide to log in to the Nova cluster.

Note that class accounts will work only with the instruction partition. 

Return to Table of Contents

Launching Jobs in Class Partitions

To launch a job in the Class partitions, specify the instruction partition with the -p option and the class account with the -A option. For example, a student in the class ABCD495 in Fall 2022, with an associated Slurm account of f2022.ABCD.495.1, could run the salloc command:

salloc -p instruction -N 1 -n 4 -t 15 -A f2022.ABCD.495.1

Class instructors and TAs will need to specify account class-faculty instead:

salloc -p instruction -N 1 -n 4 -t 15 -A class-faculty
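
When salloc grants the allocation it opens a shell; srun can then be used to launch tasks on the allocated node. A minimal sketch of an interactive workflow:

srun hostname    # runs once per allocated task on the compute node
exit             # release the allocation when finished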

An sbatch script to request the same allocation would be as follows:

#!/bin/bash

# Copy/paste this job script into a text file and submit with the command:
#    sbatch thefilename
# job standard output will go to the file slurm-%j.out (where %j is the job ID)

#SBATCH --time=00:15:00   # walltime limit (HH:MM:SS)
#SBATCH --nodes=1   # number of nodes
#SBATCH --ntasks-per-node=4   # 4 processor core(s) per node
#SBATCH --partition=instruction    # instruction (class) partition
#SBATCH --account=f2022.ABCD.495.1    # class account to use

# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
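
Once the script is saved to a file (the name myjob.sbatch below is only an example), it can be submitted and monitored as follows:

sbatch myjob.sbatch
squeue -u $USER -p instruction    # list your jobs in the instruction partition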

To view the current Class partition nodes and their status, run the sinfo command for the instruction partition:

sinfo -p instruction

Note: Users enrolled in multiple classes that use the cluster, or users who see the error "Invalid account or account/partition combination specified", should make sure the -A option specifies the account for the relevant class.
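
To see which Slurm accounts are associated with your user, the standard sacctmgr command can be used (a generic Slurm example; the exact columns shown on Nova are an assumption):

sacctmgr show associations user=$USER format=Account%30,Partition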

For more information on Slurm commands, see the Managing jobs using Slurm Workload Manager guide. For sample job scripts to use with the sbatch command, see the Slurm script generator for Nova.

Return to Table of Contents

Class Storage

There are two class-specific storage locations on Nova: 

  • /work/class-faculty - Location for instructors to store course documents and files.
  • /work/classtmp - Temporary storage location for job data, available to all students and instructors. Files in /work/classtmp are deleted at the end of the semester, so move any files you want to keep elsewhere before then (see the example below).
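
For example, data can be copied out of /work/classtmp before the semester ends with rsync (the paths below are placeholders; the actual directory layout under /work/classtmp depends on your class):

rsync -av /work/classtmp/<your directory>/ /work/<destination directory>/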

Note that home directory space (/home/<username>) has a 10 GB quota and is intended for configuration and login files. It is quite a bit slower than other disk resources, so it should not be used for high-volume access.
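
Usage of the home directory quota can be checked with standard tools such as du (a generic example, not specific to Nova):

du -sh /home/<username>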

Additional Instructions

Additional instructions are provided in the other HPC guides. If you need help, please email hpc-help@iastate.edu.

Return to Table of Contents