Nova
Originally, the Nova cluster consisted of compute nodes with multi-core Intel Skylake Xeon processors, 1.5 TB or 11 TB of fast NVMe local storage, and 192 GB, 384 GB, or 3 TB of memory. Five of those nodes also have one or two NVIDIA Tesla V100-32GB GPU cards. In 2021 the cluster was expanded with AMD nodes, each with two 32-Core AMD EPYC 7502 processors, 1.5 TB of fast NVMe local storage, and 512 GB of memory. The new GPU nodes additionally have four NVIDIA A100 80GB GPU cards.
The three service nodes are a login node, a data transfer node, and a management node.
The large shared storage consists of four file servers and eight JBODs, configured to provide 338 TB of backed-up storage per server.
All nodes and storage are connected via a Mellanox EDR (100 Gbps) InfiniBand switch.
Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor |
---|---|---|---|---|---|---|---|
72 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0 |
40 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
28 | Two 24-Core Intel Skylake 8260 | 48 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 2.7 |
1 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | One NVIDIA Tesla V100-32GB | 2.7 |
2 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 3.0 |
1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2 |
2 | Four 24-Core Intel 8260 | 96 | 3 TB | 100G IB | 1.5 TB | N/A | 3.0 |
40 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | N/A | |
15 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | Four NVIDIA A100 80GB | |
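A job can target the hardware classes above through Slurm resource requests. The sketch below is a minimal batch script for one A100 GPU; the GRES name (`gpu:a100`) and the memory and time values are illustrative assumptions — check the cluster's own Sample Job Scripts guide for the exact names in use on Nova.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch for one GPU on an A100 node.
# NOTE: the GRES string "gpu:a100:1" is an assumed name; verify it
# with `sinfo -o "%G"` or the site documentation before submitting.
#SBATCH --job-name=gpu-test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8     # CPU cores for the job
#SBATCH --mem=64G               # memory per node (example value)
#SBATCH --time=01:00:00         # walltime; jobs that cannot finish
                                # before a maintenance window stay queued
#SBATCH --gres=gpu:a100:1       # request one A100 GPU (assumed GRES name)

# Show which GPU was allocated, then run the application.
nvidia-smi
srun ./my_gpu_application
```

Submit with `sbatch script.sh`; nodes without the requested GRES are excluded automatically, so the same script pattern works for the V100 nodes by changing the GRES string.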
The HPC group schedules regular maintenance every three months to update system software and to perform other tasks that require downtime.
The date of the next maintenance is listed in the message of the day displayed at login (when connecting to the cluster via SSH).
Note: Queued jobs will not start if they cannot complete before the maintenance begins. In the output of the `squeue` command, the reason shown for these jobs will be `(ReqNodeNotAvail, Reserved for maintenance)`. The jobs will start after the scheduled outage completes.
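To check whether your own queued jobs are being held for the maintenance reservation, the standard `squeue` options below list each job with its state and pending reason:

```shell
# List your jobs with job ID, name, state, and the scheduler's reason
# for pending jobs; a job held for maintenance shows
# "(ReqNodeNotAvail, Reserved for maintenance)" in the last column.
squeue -u "$USER" --format="%.10i %.20j %.8T %R"
```

Jobs in this state need no action from the user, but shortening the requested walltime (`--time`) so the job fits before the outage can let it start sooner.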