Systems & Equipment

    Research Clusters

Nova

The Nova cluster originally consisted of compute nodes with multicore Intel Skylake Xeon processors, 1.5 TB or 11 TB of fast NVMe local storage, and 192 GB / 384 GB / 3 TB of memory. Five of those nodes also have one or two NVIDIA Tesla V100-32GB GPU cards. In 2021 the cluster was expanded with AMD nodes, each having two 32-core AMD EPYC 7502 processors, 1.5 TB of fast NVMe local storage, and 512 GB of memory. In addition, the new GPU nodes have four NVIDIA A100 80GB GPU cards.

The three service nodes are a login node, a data transfer node, and a management node.

Large shared storage consists of four file servers and eight JBODs configured to provide 338 TB of backed-up storage per server (about 1.35 PB in total).

All nodes and storage are connected via a Mellanox EDR (100 Gbps) InfiniBand switch.

Additional nodes can be purchased by faculty using the Nova Cluster Purchase Form.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor
72 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0
40 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2
28 | Two 24-Core Intel Skylake 8260 | 48 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2
2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 2.7
1 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | 1x NVIDIA Tesla V100-32GB | 2.7
2 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 3.0
1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2
2 | Four 24-Core Intel 8260 | 96 | 3 TB | 100G IB | 1.5 TB | N/A | 3.0
40 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | N/A |
15 | Two 32-Core AMD EPYC 7502 | 64 | 512 GB | 100G IB | 1.5 TB | 4x NVIDIA A100 80GB |
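
The CPU-Hour Cost Factor column scales the charge for time used on the more expensive node types. The exact accounting formula is not spelled out here; the sketch below is illustrative only and assumes the charge is simply cores used times wall-clock hours times the cost factor.

    # Illustrative only: the formula is an assumption, not taken from the cluster
    # documentation; the factor values come from the table above.
    def charged_cpu_hours(cores: int, hours: float, cost_factor: float) -> float:
        """Charge for a job, assuming charge = cores * wall-clock hours * factor."""
        return cores * hours * cost_factor

    # A 36-core, 10-hour job on a V100 GPU node (factor 2.7) vs. a 192 GB node (factor 1.0):
    print(round(charged_cpu_hours(36, 10, 2.7), 2))  # 972.0
    print(round(charged_cpu_hours(36, 10, 1.0), 2))  # 360.0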

Condo Cluster

The Condo cluster consists primarily of 134 SuperMicro servers, each with two 8-core Intel Haswell processors, 128 GB of memory, and 2.5 TB of available local disk. In addition to these compute nodes there are three large-memory nodes. Two of the large-memory nodes have four 8-core Intel Ivy Bridge processors and 1 TB of main memory; the third has four 10-core Ivy Bridge processors and 2 TB of main memory. There is also an accelerator node containing a pair of NVIDIA Tesla K20c GPU cards. All nodes and storage are connected via an Intel/QLogic QDR InfiniBand (40 Gb/s) switch.

The three service nodes are a login node, a data transfer node, and a management node.

As of July 1, 2021, access to the cluster is free for all ISU faculty and their groups. To request access, faculty should fill out the form.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
134 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
2 | Four 2.6 GHz 8-Core Intel E5 4620 v2 | 32 | 1 TB | 40G IB | 1.8 TB | N/A
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40G IB | 1.3 TB | N/A
1 | Two 2.3 GHz 10-Core Intel E5-2650 v3 | 20 | 768 GB | 40G IB | 5.5 TB | 2x NVIDIA K20c

Storage on the Condo cluster is documented within the Condo guide.


    Education Cluster

HPC-Class

The HPC-Class cluster supports instructional computing and unsponsored thesis development.

HPC-Class currently consists of 44 regular compute nodes and 8 GPU nodes, each GPU node having two NVIDIA Tesla K20 GPU cards. Each node has 16 cores, 128 GB of memory, and GigE and QDR (40 Gbit) InfiniBand interconnects.

The three service nodes are a login node, a data transfer node, and a management node.

Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
44 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
8 | Two 2.0 GHz 8-Core Intel E5 2650 | 16 | 128 GB | 40G IB | 2.5 TB | 2x NVIDIA K20
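
The Local $TMPDIR Disk column in each table above refers to node-local scratch space exposed through the $TMPDIR environment variable. A minimal Python sketch of placing temporary files there, assuming $TMPDIR is set to the local disk in the job environment:

    import os
    import tempfile

    # tempfile honors the TMPDIR environment variable, so temporary files are
    # created on the node-local disk listed in the tables above.
    scratch = os.environ.get("TMPDIR", tempfile.gettempdir())

    with tempfile.NamedTemporaryFile(dir=scratch, suffix=".dat") as tmp:
        tmp.write(b"intermediate data kept on fast local storage\n")
        tmp.flush()
        print("scratch file:", tmp.name)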