The CASIC HPC System provides state-of-the-art computational resources to the Auburn University research community. Its primary objective is to augment bioinformatic and genomic research on campus, while serving the computational needs of active researchers more broadly. Through computational research activities, the CASIC HPC System will also enhance training among students and professional staff.
The Auburn University CASIC HPC Cluster provides a high level of processing capacity by aggregating powerful hardware resources, supporting research that involves large datasets and complex computations.
For updates, visit http://www.auburn.edu/casichpc.
The head and login nodes are IBM iDataPlex dx360 M4 servers, with the following specifications:
- 2 CPUs, with a total of 16 cores
- 128GB of memory
- 300GB local disk
RHEL 6.2 has been deployed on these nodes.
The cluster has 80 compute nodes in total, divided into several classes:
- 71 Standard nodes. These nodes have 128GB of memory, and 2 CPUs with a total of 16 cores.
- 6 Large Memory nodes. These nodes have 256GB of memory.
- 3 Fast Fat nodes. These nodes have 256GB of memory and a faster processor.
In total, the compute nodes provide over 1,200 Sandy Bridge cores.
CentOS 6.2 has been deployed on these nodes.
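The node counts above are consistent with the quoted core total. A quick sanity check, assuming the 16-cores-per-node figure stated for the standard nodes also holds for the Large Memory and Fast Fat classes (the text does not give their core counts explicitly):

```python
# Node counts by class, as listed above.
standard, large_memory, fast_fat = 71, 6, 3

# Assumption: 16 cores per node (2 CPUs x 8 cores), stated for standard nodes.
cores_per_node = 16

total_nodes = standard + large_memory + fast_fat
total_cores = total_nodes * cores_per_node

print(total_nodes)  # 80
print(total_cores)  # 1280, i.e. "over 1200 Sandy Bridge cores"
```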
- The storage nodes are 2 IBM iDataPlex dx360 M4 servers, serving as NSD servers for the GPFS cluster
- IBM DS3512 storage and EXP3512 expansion units
- 80TB of disk presented to all nodes via GPFS (IBM General Parallel File System)
- All nodes are connected via FDR14 InfiniBand
- 45,159 Watts
- 154,347 BTU per hour (max)
- 5 IEC 309 connectors on 208V 60A 3 phase circuits
For more information, send your email request to email@example.com
Last Updated: November 8th, 2016