Classroom HPC Cluster (hpc-class.its.iastate.edu)
Overview of the HPC-Class Cluster
HPC-Class consists of 44 regular compute nodes and 8 GPU nodes, each GPU node having two NVIDIA Tesla K20 GPU cards. Each node has 16 cores, 128 GB of memory, and both GigE and QDR (40 Gbit) InfiniBand interconnects.
| Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card |
|---|---|---|---|---|---|---|
| 44 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A |
| 8 | Two 2.0 GHz 8-Core Intel E5-2650 | 16 | 128 GB | 40G IB | 2.5 TB | Two NVIDIA K20 |
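The per-node limits above translate directly into scheduler resource requests. A minimal batch-script sketch follows, assuming the Slurm scheduler (consistent with the `squeue` command mentioned in this section); the partition name and the executable are hypothetical placeholders, so check the cluster documentation for actual values.

```shell
#!/bin/bash
# Sketch of a Slurm job script sized for one HPC-Class GPU node.
# "gpu" partition name and ./my_gpu_program are assumptions.
#SBATCH --job-name=k20-test
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # each node has 16 cores
#SBATCH --mem=120G             # stay under the 128 GB per node
#SBATCH --gres=gpu:2           # both NVIDIA K20 cards
#SBATCH --time=01:00:00

cd "$TMPDIR"                   # 2.5 TB of local scratch per node
srun ./my_gpu_program
```

Requesting slightly less than the full 128 GB leaves room for the operating system; asking for the full amount can make a job unschedulable.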
The HPC group schedules regular maintenance every three months to update system software and perform other tasks that require downtime.
The date of the next maintenance is listed in the message of the day displayed at login (when ssh-ing to the cluster).
Note: Queued jobs will not start if they cannot complete before the maintenance begins. In the output of the squeue command, the reason shown for those jobs will be (ReqNodeNotAvail, Reserved for maintenance). The jobs will start after the scheduled outage completes.
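The pending reason for your own queued jobs can be checked from the command line; a minimal sketch using standard squeue format specifiers (run on the cluster's login node):

```shell
# List your jobs with ID, name, state, and the scheduler's pending reason.
# Jobs held for the outage show: (ReqNodeNotAvail, Reserved for maintenance)
squeue -u "$USER" -o "%.18i %.12j %.8T %r"
```

If a job is blocked only by the reservation, shortening its requested time limit so it fits before the maintenance window may allow it to start.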