Systems & Equipment

    Research Clusters

Nova

The Nova cluster consists primarily of regular compute nodes, each with two 18-core Intel Skylake 6140 Xeon processors, 1.5 TB of fast NVMe local storage, and either 192 GB or 384 GB of memory. Besides these compute nodes, there is a 3 TB large-memory node with four 16-core Intel Xeon 6130 processors and 11 TB of fast SSD local storage, as well as three GPU nodes that have the same amount of memory and local storage as the regular compute nodes but also contain two NVIDIA Tesla V100-32GB GPU cards each.

The three service nodes are the login node, the data transfer node, and the management node.

The large shared storage consists of four file servers and eight JBODs, configured to provide 338 TB of backed-up storage per server.

All nodes and storage are connected via a Mellanox EDR (100 Gbps) InfiniBand switch.

Additional nodes can be purchased by faculty using the Nova Cluster Purchase Form.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor
68 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | N/A | 1.0
32 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | N/A | 1.2
2 | Two 18-Core Intel Skylake 6140 | 36 | 192 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 2.7
1 | Two 18-Core Intel Skylake 6140 | 36 | 384 GB | 100G IB | 1.5 TB | 2x NVIDIA Tesla V100-32GB | 3.0
1 | Four 16-Core Intel 6130 | 64 | 3 TB | 100G IB | 11 TB | N/A | 6.2
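
The CPU-Hour Cost Factor column weights usage by node type when it is charged against an allocation. Below is a minimal sketch of how such a factor is typically applied, assuming charged usage is simply cores used times wall-clock hours times the factor; the node-type labels and helper function are illustrative, not an official accounting tool:

```python
# Minimal sketch (not an official accounting tool): assumes charged usage =
# cores used * wall-clock hours * cost factor. The factors come from the Nova
# table above; the node-type labels are illustrative.
NOVA_COST_FACTORS = {
    "compute-192gb": 1.0,
    "compute-384gb": 1.2,
    "gpu-192gb": 2.7,
    "gpu-384gb": 3.0,
    "largemem-3tb": 6.2,
}


def charged_cpu_hours(node_type: str, cores: int, wall_hours: float) -> float:
    """Return the CPU-hours charged against an allocation (assumed formula)."""
    return cores * wall_hours * NOVA_COST_FACTORS[node_type]


# Example: a 10-hour job using all 36 cores of a 192 GB GPU node would be
# charged 36 * 10 * 2.7 = 972.0 CPU-hours under this assumption.
print(charged_cpu_hours("gpu-192gb", cores=36, wall_hours=10))
```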

Condo Cluster

The Condo cluster consists primarily of 192 SuperMicro servers (expandable to 324 servers), each with two 8-core Intel Haswell processors, 128 GB of memory, and 2.5 TB of available local disk. In addition to these compute nodes there are three large-memory nodes: two with four 8-core Intel Ivy Bridge processors and 1 TB of main memory, and a third with four 10-core Ivy Bridge processors and 2 TB of main memory. There is also an accelerator node containing a pair of NVIDIA Tesla K20c GPU cards. All nodes and storage are connected via an Intel/QLogic QDR InfiniBand (40 Gb/s) switch.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
192 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
2 | Four 2.6 GHz 8-Core Intel E5-4620 v2 | 32 | 1 TB | 40G IB | 1.8 TB | N/A
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40G IB | 1.3 TB | N/A
1 | Two 2.3 GHz 10-Core Intel E5-2650 v3 | 20 | 768 GB | 40G IB | 5.5 TB | 2x NVIDIA K20c

Detailed Hardware Specification - Free Tier

The Condo cluster also contains a collection of older compute nodes which are made available for unfunded research.

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
76 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
8 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | Two NVIDIA K20
4 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 64 GB | 40G IB | 2.5 TB | One NVIDIA K20
1 | Four 2.0 GHz 8-Core Intel E5-4620 | 32 | 1 TB | 40G IB | 2.5 TB | N/A

Storage on the Condo cluster is documented within the Condo guide.
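
The Local $TMPDIR Disk column in the hardware tables refers to scratch space on the node itself, which jobs normally reach through the $TMPDIR environment variable. Below is a minimal sketch of staging work through that space, assuming the batch system points $TMPDIR at the node-local disk (the file names are illustrative):

```python
import os
import shutil
import tempfile

# Assumes the batch system sets $TMPDIR to the node-local disk shown in the
# "Local $TMPDIR Disk" column; fall back to the system default if it is unset.
scratch_root = os.environ.get("TMPDIR", tempfile.gettempdir())

# Create a private working directory on the fast local disk for this job.
workdir = tempfile.mkdtemp(prefix="job_scratch_", dir=scratch_root)

# Write intermediate results locally (file name is illustrative) ...
result_file = os.path.join(workdir, "results.dat")
with open(result_file, "w") as fh:
    fh.write("intermediate results\n")

# ... then copy the final output back to the submission directory, which
# would normally live on the shared storage.
shutil.copy(result_file, os.path.join(os.getcwd(), "results.dat"))

# Clean up node-local scratch so the space is free for the next job.
shutil.rmtree(workdir)
```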


 

    Education Cluster

HPC-Class

The HPC-Class cluster supports instructional computing and unsponsored thesis development.

HPC-Class currently consists of 56 SuperMicro servers, each with 16 cores, 128 GB of memory, and both GigE and QDR (40 Gbit) InfiniBand interconnects. In addition, eight of these compute nodes contain two NVIDIA Tesla K20 GPU cards each.

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
68 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | N/A
8 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | Two NVIDIA K20