Condo 2017
As of July 1, 2021, the entire Condo Cluster is under the Free Tier model. The cluster consists primarily of 134 SuperMicro servers, each with two 8-core Intel Haswell processors, 128 GB of memory, and 2.5 TB of available local disk. Besides these compute nodes there are three large-memory nodes. Two of the large-memory nodes have four 8-core Intel Ivy Bridge processors and 1 TB of main memory; the third has four 10-core Intel Ivy Bridge processors and 2 TB of main memory. One GPU node has two 10-core Intel Haswell processors, two NVIDIA Tesla K20c accelerator cards, 768 GB of memory, and 5.5 TB of available local disk. All nodes and storage are connected via an Intel/QLogic QDR InfiniBand (40 Gb/s) switch.
Detailed Hardware Specification
Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Local $TMPDIR Disk | Accelerator Card | CPU-Hour Cost Factor | Partition |
---|---|---|---|---|---|---|---|---|
134 | Two 2.6 GHz 8-Core Intel E5-2640 v3 | 16 | 128 GB | 40G IB | 2.5 TB | N/A | 1.00 | |
2 | Four 2.6 GHz 8-Core Intel E5-4620 v2 | 32 | 1 TB | 40G IB | 1.8 TB | N/A | 4.50 | fat |
1 | Four 2.2 GHz 10-Core Intel E7-4830 v2 | 40 | 2 TB | 40G IB | 1.3 TB | N/A | 9.45 | huge |
1 | Two 2.3 GHz 10-Core Intel Haswell | 20 | 768 GB | 40G IB | 5.5 TB | 2x NVIDIA K20c | 2.00 | gpu |
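To run on a specific node type, a job requests the corresponding partition from the table above. The sketch below is a minimal Slurm batch script for a large-memory job on the `fat` partition; the partition name and the node-local `$TMPDIR` disk come from the table, while the application name `my_solver` and its input file are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=bigmem_test      # descriptive job name
#SBATCH --partition=fat             # large-memory partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32        # all 32 cores on a fat node
#SBATCH --mem=900G                  # leave headroom below the 1 TB total
#SBATCH --time=04:00:00             # wall-clock limit

# Stage work on the node-local $TMPDIR disk (1.8 TB on the fat nodes)
cp "$SLURM_SUBMIT_DIR/input.dat" "$TMPDIR/"
cd "$TMPDIR"

# my_solver is a placeholder for the actual application
./my_solver input.dat > output.log

# Copy results back before the job ends and $TMPDIR is cleaned up
cp output.log "$SLURM_SUBMIT_DIR/"
```

Assuming the CPU-Hour Cost Factor is a per-core-hour multiplier, this 32-core, four-hour job would be charged 32 × 4 × 4.50 = 576 CPU-hours, versus a factor of 1.00 on the standard compute nodes.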
The HPC group schedules regular maintenance every three months to update system software and to perform other tasks that require downtime.
The date of the next maintenance is listed in the message of the day displayed at login (when ssh-ing into the cluster).
Note: Queued jobs will not start if they cannot complete before the maintenance begins. In the output of the `squeue` command, the reason shown for those jobs will be `(ReqNodeNotAvail, Reserved for maintenance)`. The jobs will start after the scheduled outage completes.
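You can check the reported reason for your own pending jobs with standard `squeue` options; the output format string below is just one reasonable choice:

```bash
# List your pending jobs with the reason they have not started (%R)
squeue -u "$USER" --states=PENDING -o "%.10i %.9P %.20j %.8T %.12M %R"

# Slurm's estimated start times for pending jobs (after the outage,
# for jobs blocked by the maintenance reservation)
squeue -u "$USER" --start
```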