High-performance computing (HPC) plays an important role in computational science and is used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, such systems have also been essential in cryptanalysis. Read more at https://en.wikipedia.org/wiki/Supercomputer
SABER1 is the first high-performance computing (HPC) cluster at the University of San Diego. As initially deployed in 2015, SABER1 provided 6 TFLOPS of computational power, 640 GB of cluster memory, and 25 TB of storage.
As of 2/4/2017
SABER1 (pictured below with 18 nodes)
Technical Specifications
- SABER1
- Blade server architecture (hybrid) – NeXtScale nx360 M5
- Master Node – 1
- Compute Nodes – 17
- Storage Node – 1 @ 24 TB (RAID 5, SAS)
- Cores per node = 2 sockets × 8 cores = 16
- Total cores = 16 cores/node × 18 nodes = 288
- Processor – Intel Xeon E5-2630 v3 (8 cores, 2.4 GHz, 20 MB cache, 1866 MHz, 85 W)
- Network (cluster, management) – 10 Gb, 1 Gb
- Memory – 64 GB per node (4 GB/core)
- Cluster memory = 17 compute nodes × 64 GB = 1,088 GB
- Operating System – Red Hat Enterprise Linux 6.7
- Standard Linux toolchain (gcc, Fortran, Python, etc.; see the compile example after this list)
- Cluster Manager – xCAT (see the node-management example after this list)
- Node Types
- Master Node – Stateful
- Storage Node – Stateful
- Compute Nodes – Stateless
- System Services (DNS/DHCP/NFS)
- File System – NFS
- Resource Manager / Scheduler
- TORQUE / Moab (a sample job script follows this list)
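
As a minimal sketch of the standard toolchain, the following compiles and runs a small OpenMP test program across a node's 16 cores; the file name hello_omp.c is purely illustrative.

    # Write a tiny OpenMP test program (hello_omp.c is an illustrative name).
    cat > hello_omp.c <<'EOF'
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        /* Each thread reports its ID and the total thread count. */
        #pragma omp parallel
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }
    EOF

    # Compile with the system gcc and run on one node's 16 cores.
    gcc -fopenmp -O2 -o hello_omp hello_omp.c
    OMP_NUM_THREADS=16 ./hello_omp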
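
xCAT is driven from the master node with its standard commands; the sketch below assumes a node group named "compute", which stands in for whatever group name this cluster actually defines.

    # List every node definition the cluster manager knows about.
    lsdef -t node

    # Check power state and OS-level reachability of the compute nodes
    # ("compute" is a placeholder group name).
    rpower compute stat
    nodestat compute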
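
Work is submitted to TORQUE and scheduled by Moab. Below is a minimal sample PBS job script reusing the hello_omp binary from the toolchain sketch above; the job name, walltime, and file names are arbitrary.

    #!/bin/bash
    #PBS -N omp_test               # job name (arbitrary)
    #PBS -l nodes=1:ppn=16         # one node, all 16 cores
    #PBS -l walltime=00:10:00      # 10-minute wall-clock limit
    #PBS -j oe                     # merge stdout and stderr

    # TORQUE starts jobs in $HOME; move to the directory qsub was run from.
    cd "$PBS_O_WORKDIR"

    # Run the OpenMP test program on the allocated node.
    OMP_NUM_THREADS=16 ./hello_omp

Submit the script with qsub, then watch the queue: qstat -u $USER gives TORQUE's view, while showq and checkjob <jobid> are the Moab equivalents.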