Systems & Equipment

    Research Clusters

Condo Cluster

Condo Cluster consists primarily of 176 SuperMicro servers (expandable to 324 servers), each with two 8-core Intel Haswell processors, 128 GB of memory, and 2.5 TB of available local disk. In addition to these compute nodes there are two large memory nodes. One large memory node has four 8-core Intel Ivy Bridge processors and 1 TB of main memory; the second has four 10-core Ivy Bridge processors and 2 TB of main memory. All nodes and storage are connected via an Intel/Qlogic QDR InfiniBand (40 Gb/s) switch.
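For a rough sense of aggregate capacity, the figures above can be combined directly. The short Python sketch below uses only the numbers quoted in this description (176 base compute nodes, two 8-core processors and 128 GB of memory per node) and is purely illustrative:

```python
# Back-of-the-envelope capacity of the Condo base compute partition,
# using only the figures quoted above (176 nodes, 2 x 8 cores, 128 GB each).
nodes = 176
cores_per_node = 2 * 8          # two 8-core Haswell processors
mem_per_node_gb = 128

total_cores = nodes * cores_per_node                  # 2816 cores
total_mem_tb = nodes * mem_per_node_gb / 1024         # ~22 TB of memory
mem_per_core_gb = mem_per_node_gb / cores_per_node    # 8 GB per core

print(f"{total_cores} cores, {total_mem_tb:.0f} TB of memory, "
      f"{mem_per_core_gb:.0f} GB per core")
```

The 8 GB-per-core figure is a useful rule of thumb when deciding how many cores per node to request for memory-bound jobs.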

Detailed Hardware Specification

Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
176 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40 Gb IB | 2.5 TB | N/A
1 | Four 2.6 GHz 8-Core Intel E5-4620 v2 | 32 | 1 TB | 40 Gb IB | 1.8 TB | N/A
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40 Gb IB | 1.3 TB | N/A


Condo Cluster has a 256 TB shared Lustre scratch disk space named /lustre, and 756 TB of RAID-6 long-term NFS disk space under quota. To ensure data integrity of the large NFS space, nightly rsyncs (backups) are performed to a read-only space within Condo Cluster, so that data from the previous day can be recovered in case of filesystem corruption.
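As an illustration of the nightly backup scheme described above, the following is a minimal sketch of an rsync pass into a read-only snapshot area, not the actual site script; the paths shown are hypothetical placeholders, and only standard rsync options are used:

```python
# Minimal sketch of a nightly rsync-style backup pass, as described above.
# SOURCE and DEST are hypothetical placeholders, not the real Condo layout.
import subprocess

SOURCE = "/home/"                    # hypothetical NFS space under quota
DEST = "/backup/home-yesterday/"     # hypothetical read-only snapshot area

# --archive preserves permissions, ownership, and timestamps; --delete keeps
# the snapshot an exact mirror of the previous day's state.
subprocess.run(
    ["rsync", "--archive", "--delete", SOURCE, DEST],
    check=True,
)
```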

Additional nodes can be purchased by faculty using the Condo Cluster Purchase Form.

CyStorm Cluster

Access to CyStorm remains available until the new Condo cluster is in place, with a two-month overlap for any code-porting issues. Each account includes 150 GB of permanent disk storage and access to 2 TB of shared scratch storage. If you would like access to the CyStorm cluster, please send a request to hpc@iastate.edu.

Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Configuration of Node
256 | Two 2.2 GHz 4-Core AMD Opteron 2354 | 8 | 8 GB | 20 Gb IB | 150 GB | normal compute with MPI
60 | Two 2.2 GHz 4-Core AMD Opteron 2354 | 8 | 8 GB | 20 Gb IB | 150 GB | Hadoop

CyEnce (NSF MRI) Cluster

The CyEnce cluster is available to principal and co-investigators on a shared basis. Access to this NSF MRI cluster is generally limited to the research groups that were part of the NSF MRI grant and to other faculty who helped purchase the machine.

CyEnce consists primarily of 240 SuperMicro servers, each with 16 cores, 128 GB of memory, and GigE and QDR (40 Gbit) InfiniBand interconnects. Another 16 nodes are similar but also contain two Nvidia K20 Kepler GPUs, and 16 more contain two 60-core Intel Xeon Phi accelerator cards. One large memory node contains 32 cores and 1 TB of main memory.

Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
240 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40 Gb IB | 2.5 TB | N/A
16 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40 Gb IB | 2.5 TB | Two Nvidia K20
16 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40 Gb IB | 2.5 TB | Two 60-core Intel Phi 5110P
1 | Two 2.0 GHz 8-Core Intel E5-2650 | 32 | 1 TB | 40 Gb IB | 2.5 TB | N/A

CyEnce nodes run RedHat Enterprise Linux 6.4 and use Torque (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow for unlimited stack usage as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
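As a rough illustration of what job submission through Torque looks like, the sketch below writes a minimal PBS batch script and hands it to qsub. The resource request, job name, and the my_program executable are assumptions for illustration; consult the site documentation for the actual queues and limits.

```python
# Sketch of submitting a batch job to a Torque/PBS-managed cluster from Python.
# The resource request below (one 16-core node for one hour) is illustrative
# only; "my_program" is a hypothetical executable.
import subprocess
import tempfile

pbs_script = """#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=16
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"
./my_program
"""

# Write the script to a temporary file so qsub can read it.
with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(pbs_script)
    script_path = f.name

# qsub is Torque's submission command; it prints the ID of the new job.
subprocess.run(["qsub", script_path], check=True)
```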

CyEnce has a 288 TB shared Lustre scratch disk space named /ptmp and 768 TB of RAID-6 long-term NFS disk space under quota. To ensure data integrity of the large NFS space, nightly rsyncs (backups) are performed to a read-only space within CyEnce, so that data from the previous day can be recovered in case of filesystem corruption.

Lightning3 Cluster

The Lightning3 cluster is available on a scheduled basis to participating campus faculty.

Lightning3 is a mixed Opteron-based cluster consisting of 18 SuperMicro servers with core counts ranging from 32 to 64, 256 to 512 GB of memory, and GigE and QDR (40 Gbit) InfiniBand interconnects.

Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
5 | Four 2.3 GHz 8-Core AMD 6134 | 32 | 256 GB | 40 Gb IB | 1.6 TB | N/A
1 | Four 2.3 GHz 8-Core AMD 6134 | 32 | 512 GB | 40 Gb IB | 1.6 TB | N/A
7 | Four 3.0 GHz 8-Core | 32 | 256 GB | 40 Gb IB | 1.6 TB | N/A
5 | Four 2.3 GHz 16-Core AMD 6376 | 64 | 256 GB | 40 Gb IB | 1.6 TB | N/A

    Education Clusters

HPC-Class

The HPC-Class cluster supports instructional computing and unsponsored thesis development. The new education cluster will be available spring semester 2014.

HPC-Class currently consists of 48 SuperMicro servers, each with 16 cores, 64 GB of memory, and GigE and QDR (40 Gbit) InfiniBand interconnects. In addition, four of these compute nodes contain an NVIDIA Tesla K20 GPU, and four other nodes contain a 60-core Intel Xeon Phi accelerator card.

Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card
40 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 64 GB | 40 Gb IB | 2.5 TB | N/A
4 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 64 GB | 40 Gb IB | 2.5 TB | NVIDIA K20
4 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 64 GB | 40 Gb IB | 2.5 TB | 60-Core Intel Phi 5110P