Systems & Equipment
Research Clusters
Nova
Originally, the Nova cluster consisted of compute nodes with multicore Intel Skylake Xeon processors, 1.5 TB or 11 TB of fast NVMe local storage, and 192 GB / 384 GB / 3 TB of memory. Five of those nodes also have one or two NVIDIA Tesla V100-32GB GPU cards. In 2021 the cluster was expanded with AMD nodes, each having two 32-core AMD EPYC 7502 processors, 1.5 TB of fast NVMe local storage and 512 GB of memory; the new GPU nodes in addition have four NVIDIA A100 80GB GPU cards. The 2022 expansion consists of 54 regular compute nodes (each with two 32-core Intel 8358 processors, 1.6 TB of local storage and 512 GB of memory) and 5 GPU nodes (each with two 24-core AMD EPYC 7413 processors, eight A100 GPU cards, 960 GB of local storage and 512 GB of memory).
The three service nodes include a login node, a data transfer node and a management node.
Large shared storage consists of six file servers and twelve JBODs configured to provide either 338 TB of backed-up storage or 457 TB of non-backed-up storage per server.
All nodes and storage are connected via a Mellanox EDR (100 Gbps) InfiniBand switch.
Additional nodes can be purchased by faculty using the Nova Cluster Purchase Form (you need to be logged into Okta in order to access the form).
Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor |
---|---|---|---|---|---|---|---|
72 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0 |
40 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
28 | Two 24-Core Intel Skylake 8260 | 48 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 2.7 |
1 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | 1x NVIDIA Tesla V100-32GB | 2.7 |
2 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 3.0 |
1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2 |
2 | Four 24-Core Intel 8260 | 96 | 3 TB | 100G IB | 1.5 TB | N/A | 3.0 |
40 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | N/A | |
15 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | 4x NVIDIA A100 80GB | |
54 | Two 32-Core Intel Icelake 8358 | 64 | 512 GB | 100G IB | 1.6 TB | N/A | |
5 | Two 24-Core AMD EPYC 7413 | 48 | 512 GB | 100G IB | 960 GB | 8x NVIDIA A100 80GB | |
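The CPU-Hour Cost Factor in the last column weights usage on the more capable nodes. As a rough, unofficial illustration only (it assumes the factor is applied multiplicatively to raw core-hours; the actual accounting policy is not described here), such a calculation could look like the following sketch:

```python
# Hypothetical illustration of how a CPU-Hour Cost Factor could weight usage.
# Assumption (not an official formula): weighted CPU-hours = cores * hours * factor.
# The factors used below are taken from the table above.

def weighted_cpu_hours(cores: int, hours: float, cost_factor: float) -> float:
    """Return cost-factor-weighted CPU-hours for a single job."""
    return cores * hours * cost_factor

# A 36-core, 10-hour job on a 384 GB Skylake 6140 node (factor 1.2)
print(weighted_cpu_hours(36, 10, 1.2))  # 432.0 weighted CPU-hours

# The same job on a V100 GPU node (factor 2.7)
print(weighted_cpu_hours(36, 10, 2.7))  # 972.0 weighted CPU-hours
```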
Condo Cluster
The Condo cluster consists primarily of 134 SuperMicro servers, each with two 8-core Intel Haswell processors, 128 GB of memory and 2.5 TB of available local disk. In addition to these compute nodes there are three large-memory nodes: two have four 8-core Intel Ivy Bridge processors and 1 TB of main memory, and the third has four 10-core Ivy Bridge processors and 2 TB of main memory. There is also an accelerator node containing a pair of NVIDIA Tesla K20c GPU cards. All nodes and storage are connected via an Intel/QLogic QDR InfiniBand (40 Gb/s) switch.
The three service nodes include a login node, a data transfer node and a management node.
As of July 1, 2021, access to the cluster is free for all ISU faculty and their groups. To request access, faculty fill out the form.
Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card |
---|---|---|---|---|---|---|
134 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A |
2 | Four 2.6 GHz 8-Core Intel E5-4620 v2 | 32 | 1 TB | 40G IB | 1.8 TB | N/A |
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40G IB | 1.3 TB | N/A |
1 | Two 2.3 GHz 10-Core | 20 | 768 GB | 40G IB | 5.5 TB | 2x NVIDIA K20c |
Storage on the Condo cluster is documented within the Condo guide.
Education Partitions
HPC-Class
The HPC-Class partitions support instructional computing and unsponsored thesis development.
The HPC-Class partitions currently consist of 28 regular compute nodes and 3 GPU nodes with eight NVIDIA A100 80GB GPU cards each. Each regular compute node has 64 cores, 500 GB of available memory, and GigE and EDR (100 Gbps) InfiniBand interconnects.
The HPC-Class partitions are accessible via the ISU HPC Nova cluster.
Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | GPU |
---|---|---|---|---|---|---|
28 | Two 2.6 GHz 32-core Intel 8358 CPUs | 64 | 500 GB Available | 100G IB | 1.6 TB NVMe | N/A |
3 | Two 2.65 GHz 24-core AMD EPYC 7413 CPUs | 48 | 500 GB Available | 100G IB | 960 GB NVMe | Eight NVIDIA A100 80GB GPUs |
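As a quick back-of-the-envelope tally derived only from the table above (not an official capacity figure), the two node types add up as sketched below:

```python
# Back-of-the-envelope tally of HPC-Class capacity, using only the table above.
# Each entry: (label, node count, cores per node, GPUs per node).
node_types = [
    ("regular", 28, 64, 0),  # two 32-core Intel 8358 CPUs per node
    ("gpu", 3, 48, 8),       # two 24-core AMD EPYC 7413 CPUs, eight A100 80GB per node
]

total_cores = sum(count * cores for _, count, cores, _ in node_types)
total_gpus = sum(count * gpus for _, count, _, gpus in node_types)

print(f"CPU cores: {total_cores}")  # 28*64 + 3*48 = 1936
print(f"A100 GPUs: {total_gpus}")   # 3*8 = 24
```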