Systems & Equipment

Research Clusters

Nova

Originally, the Nova cluster consisted of compute nodes with multicore Intel Skylake Xeon processors, 1.5TB or 11TB of fast NVMe local storage, and 192GB, 384GB, or 3TB of memory. Five of those nodes also have one or two NVIDIA Tesla V100-32GB GPU cards. In 2021 the cluster was expanded with AMD nodes, each having two 32-Core AMD EPYC 7502 processors, 1.5TB of fast NVMe local storage, and 512GB of memory; the GPU nodes from this expansion additionally have four NVIDIA A100 80GB GPU cards each. The 2022 expansion consists of 54 regular compute nodes (each with two 32-Core Intel 8358 processors, 1.6TB of local storage, and 512GB of memory) and 5 GPU nodes (each with two 24-Core AMD EPYC 7413 processors, eight NVIDIA A100 80GB GPU cards, 960GB of local storage, and 512GB of memory).

The three service nodes are a login node, a data transfer node, and a management node.

Large shared storage consists of six file servers and twelve JBODs, configured to provide either 338TB of backed-up storage or 457TB of non-backed-up storage per server.

All nodes and storage are connected via a Mellanox EDR (100Gbps) InfiniBand switch.

Additional nodes can be purchased by faculty using the Nova Cluster Purchase Form (you must be logged into Okta to access the form).


Detailed Hardware Specification

| Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor |
|---|---|---|---|---|---|---|---|
| 72 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0 |
| 40 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
| 28 | Two 24-Core Intel Skylake 8260 | 48 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2 |
| 2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 2.7 |
| 1 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | One NVIDIA Tesla V100-32GB | 2.7 |
| 2 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 3.0 |
| 1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2 |
| 2 | Four 24-Core Intel 8260 | 96 | 3 TB | 100G IB | 1.5 TB | N/A | 3.0 |
| 40 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | N/A | |
| 15 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | Four NVIDIA A100 80GB | |
| 54 | Two 32-Core Intel Icelake 8358 | 64 | 512 GB | 100G IB | 1.6 TB | N/A | |
| 5 | Two 24-Core AMD EPYC 7413 | 48 | 512 GB | 100G IB | 960 GB | Eight NVIDIA A100 80GB | |
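
The CPU-Hour Cost Factor weights each node type's usage relative to a baseline node. As a rough sketch of how such a factor is typically applied (the exact accounting formula used for Nova is an assumption here, not documented on this page), a job's charge scales with cores, wall-clock time, and the node's factor:

```python
# Hypothetical sketch: how a per-node CPU-Hour Cost Factor is typically
# applied in usage accounting. The exact formula used for Nova is an
# assumption, not taken from this page.

def weighted_cpu_hours(cores: int, wall_hours: float, cost_factor: float) -> float:
    """Charge for a job: cores used x wall-clock hours x node cost factor."""
    return cores * wall_hours * cost_factor

# Example: a 36-core, 10-hour job on a 384 GB Skylake 6140 node (factor 1.2)
print(weighted_cpu_hours(36, 10.0, 1.2))  # 432.0 weighted CPU-hours
# The same job on a V100 GPU node with factor 3.0 would charge 1080.0.
```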

HPC-Class

The HPC-Class partitions support instructional computing and unsponsored thesis development.

HPC-Class partitions currently consist of 28 regular compute nodes and 3 GPU nodes with eight NVIDIA A100 80GB GPU cards each. Each regular compute node has 64 cores, 500 GB of available memory, and Gigabit Ethernet and EDR (100Gbps) InfiniBand interconnects.

The HPC-Class partitions are accessible via the ISU HPC Nova cluster.

Detailed Hardware Specification
| Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | GPU |
|---|---|---|---|---|---|---|
| 28 | Two 2.6 GHz 32-Core Intel 8358 CPUs | 64 | 500 GB available | 100G IB | 1.6 TB NVMe | N/A |
| 3 | Two 2.65 GHz 24-Core AMD EPYC 7413 CPUs | 48 | 500 GB available | 100G IB | 960 GB NVMe | Eight NVIDIA A100 80GB GPUs |
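
Both hardware tables list node-local NVMe disk that jobs can use as fast scratch space through the $TMPDIR environment variable. A minimal sketch of using it from a Python job step (the staging workflow, directory names, and shared-storage path are illustrative assumptions; only the $TMPDIR variable itself comes from the tables above):

```python
import os
import tempfile

# $TMPDIR points at the node-local NVMe scratch listed in the tables.
# Falling back to the system temp directory is an assumption so this
# sketch also runs outside a scheduled job.
scratch = os.environ.get("TMPDIR", tempfile.gettempdir())

workdir = os.path.join(scratch, "myjob")  # hypothetical job directory
os.makedirs(workdir, exist_ok=True)

# Write I/O-heavy intermediate files to the fast local disk...
with open(os.path.join(workdir, "intermediate.dat"), "w") as f:
    f.write("scratch data\n")

# ...and copy final results back to shared storage before the job ends,
# since node-local scratch is typically cleared after the job, e.g.:
# shutil.copy(os.path.join(workdir, "result.dat"), "/path/to/shared/storage")
```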