Systems & Equipment

    Research Clusters

Nova

The Nova cluster consists primarily of regular compute nodes, each with two 18-core Intel Skylake 6140 Xeon processors or two 24-core Intel Skylake 8260 Xeon processors, 1.5 TB of fast NVMe local storage and either 192 GB or 384 GB of memory. Besides these compute nodes there are three 3 TB memory nodes with four Intel Xeon processors each (one of these nodes has 11 TB of fast SSD local storage) and five GPU nodes, which have the same amount of memory and local storage as the regular compute nodes but also have one or two NVIDIA Tesla V100-32GB GPU cards.
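
Jobs can take advantage of this fast local storage through the $TMPDIR environment variable (see the "Local $TMPDIR Disk" column in the tables below). As a minimal sketch, assuming $TMPDIR points at the node-local scratch inside a batch job, an I/O-heavy workload can stage its data there and copy results back to shared storage afterwards; the /work/myproject paths below are placeholders.

    import os
    import shutil
    from pathlib import Path

    # Inside a batch job, $TMPDIR is assumed to point at the node-local NVMe scratch.
    scratch = Path(os.environ["TMPDIR"])

    # Hypothetical paths -- replace with your own input and result locations.
    input_file = Path("/work/myproject/input.dat")
    results_dir = Path("/work/myproject/results")

    # Stage the input onto the fast local disk and do the I/O-heavy work there.
    local_input = scratch / input_file.name
    shutil.copy2(input_file, local_input)

    local_output = scratch / "output.dat"
    local_output.write_bytes(local_input.read_bytes())  # stand-in for the real computation

    # Copy the results back to shared storage before the job ends.
    results_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_output, results_dir / local_output.name)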

The three service nodes include a login node, a data transfer node and a management node.

The large shared storage consists of four file servers and eight JBODs, configured to provide 338 TB of backed-up storage per server.

All nodes and storage are connected via a Mellanox EDR (100 Gbps) InfiniBand switch.

Additional nodes can be purchased by faculty using the Nova Cluster Purchase Form.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor
70 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0
40 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2
26 | Two 24-Core Intel Skylake 8260 | 48 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2
2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 2.7
1 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | One NVIDIA Tesla V100-32GB | 2.7
2 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | Two NVIDIA Tesla V100-32GB | 3.0
1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2
2 | Four 24-Core Intel 8260 | 96 | 3 TB | 100G IB | 1.5 TB | N/A | 3.0
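
The CPU-Hour Cost Factor weights usage by node type. As a rough sketch, assuming the charge for a job is cores used times wall-clock hours times the node's cost factor (an assumption about how the factor is applied, not an official billing formula), relative costs can be estimated as follows:

    def weighted_cpu_hours(cores: int, hours: float, cost_factor: float) -> float:
        """Estimated charge, assuming charge = cores * hours * cost factor."""
        return cores * hours * cost_factor

    # A 10-hour job on all 36 cores of a 384 GB Skylake 6140 node (factor 1.2):
    print(weighted_cpu_hours(36, 10, 1.2))  # 432.0

    # The same job on a 384 GB node with two V100 GPUs (factor 3.0):
    print(weighted_cpu_hours(36, 10, 3.0))  # 1080.0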


Condo Cluster

The Condo cluster consists primarily of 158 SuperMicro servers, each with two 8-core Intel Haswell processors, 128 GB of memory and 2.5 TB of available local disk. In addition to these compute nodes there are three large memory nodes. Two of the large memory nodes have four 8-core Intel Ivy Bridge processors and 1 TB of main memory; the third has four 10-core Ivy Bridge processors and 2 TB of main memory. All nodes and storage are connected via an Intel/QLogic QDR InfiniBand (40 Gb/s) switch. There is also an accelerator node containing a pair of NVIDIA Tesla K20c GPU cards.

The three service nodes include a login node, a data transfer node and a management node.

As of July 1, 2021, access to the cluster is free for all ISU faculty and their groups. To request access, faculty should fill out the form.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
192 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
2 | Four 2.6 GHz 8-Core Intel E5-4620 v2 | 32 | 1 TB | 40G IB | 1.8 TB | N/A
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40G IB | 1.3 TB | N/A
1 | Two 2.3 GHz 10-Core Intel E5-2650 v3 | 20 | 768 GB | 40G IB | 5.5 TB | Two NVIDIA K20c

Storage on the Condo cluster is documented in the Condo guide.

    Education Cluster

HPC-Class

The HPC-Class cluster supports instructional computing and unsponsored thesis development.

HPC-Class currently consists of 24 regular compute nodes and 8 GPU nodes, each GPU node having two NVIDIA Tesla K20 GPU cards. Each node has 16 cores, 128 GB of memory, and GigE and QDR (40 Gbit) InfiniBand interconnects.

The three service nodes include a login node, a data transfer node and a management node.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
24 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
8 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | Two NVIDIA K20
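
On the GPU nodes, a quick way to confirm which accelerator cards a job can see is to list them with nvidia-smi; a minimal sketch, assuming nvidia-smi is on the node's PATH:

    import subprocess

    # List the GPUs visible on this node, e.g. the Tesla K20 cards on HPC-Class
    # GPU nodes or the Tesla V100 cards on Nova GPU nodes.
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
    print(result.stdout)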