Partition Configuration and Limits
The scheduler on Nova is the Slurm Resource Manager. To see the current partition configuration, issue:
scontrol show partitions
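To see the configuration of a single partition, its name can be appended to the command (the partition name below is just an example):
scontrol show partition debug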
Most of the time, users who contributed to the cluster do not need to specify a partition when submitting a job on Nova; a script will place the job in a partition based on the resources requested. The exceptions are listed below (an example job script follows the list):
- debug partition (specify "-p debug" to request access to nodes in this partition)
- gpu partitions (specify "-p gpu" when requesting a GPU node; by default a V100 node will be assigned to the job; specify a100 in the --gres option if an A100 node is needed)
- huge partition (specify "-p huge" when a job needs more memory than is available on regular nodes)
- amd partition (includes nodes with AMD processors)
- long_512 partition (jobs in this partition may be preempted for up to 15 minutes at a time by short class jobs, and then resumed)
- scavenger partition (jobs in this partition will be killed if nodes are needed in other partitions)
- community partition (available to a wider set of users)
- covid19 partition (available only to researchers performing COVID-19 research who requested access)
- class partitions (available only to students taking classes for which instructors requested access)
- priority-a100 partition (available only to PIs/Senior Personnel on the 2020 MRI grant and groups that purchased NVIDIA A100 nodes)
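For reference, here is a minimal job script that explicitly requests a partition. The partition, resource values, job name, and command below are placeholders for illustration; adjust them to your job, or use the Slurm script generator for Nova to create the script.
#!/bin/bash
#SBATCH -p debug            # explicitly request the debug partition
#SBATCH -N 1                # one node
#SBATCH -n 4                # four tasks
#SBATCH -t 1:00:00          # one hour of walltime (within the 2-hour debug limit)
#SBATCH -J partition-test   # job name (placeholder)
srun hostname               # replace with your actual command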
At the time of writing, the following research partitions have been configured:
Partition Name | Max Time per Job (hours) | Max Nodes per Job | Number of Nodes in Partition (may change)
---|---|---|---
debug | 2 | 1 | 1
short_1node192 | 1 | 1 | 67
long_1node192 | 504 | 1 | 67
short_medium192 | 1 | 8 | 42
long_medium192 | 336 | 8 | 33
short_large192 | 1 | 48* | 25
long_large192 | 336 | 48* | 25
short_1node384 | 1 | 1 | 31
long_1node384 | 504 | 1 | 31
short_medium384 | 1 | 8 | 35
long_medium384 | 336 | 8 | 26
short_large384 | 1 | 48* | 18
long_large384 | 336 | 48* | 18
gpu192*** | 24 | 2 |
gpu384*** | 24 | 1 |
huge | 24 | 1 |
amd | 168 | 40 |
public-a100*** | 48 | 1 | 4
priority-a100 | 24 | varies** | 10
a100-long**** | 48 | 1 | 1
long_512***** | 336 | 36 |
community | 24 | 1 | 4
* Even though the maximum node count per job in large partitions is set to 48, the number of nodes in those partitions can be a limiting factor.
** Only PIs/Senior Personnel on the MRI grant and groups that purchased NVIDIA A100 nodes have access to the priority-a100 partition; the limits vary by group and can be seen via the "sacctmgr show qos format=name,GrpTRES%40 | grep <group_name>" command.
*** Anyone on the cluster can use GPUs outside of the priority-a100 partition. To request a GPU, specify the "gpu" partition and the number of GPU cards needed. Use the Slurm script generator for Nova to create the job script. If an A100 GPU is needed, specify it in the --gres option, e.g.:
salloc -N 1 -n 8 -p gpu --gres gpu:a100:1 -t 1:00:00
To get good performance, do not request more than 8 cores per A100 GPU card; most software does not need more than 5 cores. If your job does require more than 8 cores per GPU, you can add "--gres-flags=disable-binding" to the job script or to the salloc/srun/sbatch command (see the sketch below), but be aware that performance may be degraded.
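As a sketch (the core count, walltime, and GPU count below are placeholders, not recommendations), a job that needs more than 8 cores per A100 GPU could include:
#SBATCH -p gpu                        # GPU partition
#SBATCH -N 1                          # one node
#SBATCH -n 16                         # more than 8 cores per GPU card
#SBATCH --gres=gpu:a100:1             # request one A100 card
#SBATCH --gres-flags=disable-binding  # allow cores not bound to the GPU (performance may be degraded)
#SBATCH -t 2:00:00                    # two hours of walltime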
**** Jobs in the a100-long partition are limited to 1 GPU card, 16 cores, and 128GB of memory.
***** Jobs in the long_512 partition may be suspended in memory for short classroom jobs (up to 15 minutes) and then continue running.
Besides partition limits, each group is limited to a maximum of 8 times the resources purchased by the group (but no more than half of the cluster) across all running jobs. For example, if a group purchased one node with 36 cores and 384GB of memory, at most 288 cores and 3TB of memory will be available to all of the group's jobs. To see those limits, issue:
sacctmgr show qos format=name,GrpTRES%40
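For example, to see only the limits for your group, filter the output by group name (my_group below is a placeholder; replace it with your actual group name):
sacctmgr show qos format=name,GrpTRES%40 | grep my_group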