The cluster currently has 6 generally available compute nodes; a further 9 compute nodes and 7 GPU nodes are dedicated to specific departments and research groups. The cluster is accessed and managed via login/admin nodes and is backed by supporting storage and network infrastructure.
General Access Compute Nodes
- 6 Nodes (240 cores)
- Dell R640 Server, 2 physical CPUs per 1U chassis
- Dual Intel Xeon Gold 6248 CPUs @ 2.50GHz
- 2 sockets/node x 20 cores/socket = 40 cores/node (worked totals shown after this list)
- 128 GB/node of 2933 MHz RAM
- 480 GB local SSD
- Linux OS (AlmaLinux 9)
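For reference, the per-node figures above multiply out to the pool totals as follows. This is only a worked check of the numbers quoted in the list, assuming the 6 nodes are identical as described:

```python
# Worked totals for the general-access pool, from the per-node figures above.
NODES = 6
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 20     # Intel Xeon Gold 6248
RAM_GB_PER_NODE = 128

cores_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET  # 40 cores/node
total_cores = NODES * cores_per_node                  # 240 cores in the pool
total_ram_gb = NODES * RAM_GB_PER_NODE                # 768 GB aggregate RAM

print(f"{cores_per_node} cores/node, {total_cores} cores total, {total_ram_gb} GB RAM total")
```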
Admin Node
- Dell R640, 1U
- Dual Intel Xeon Silver 4210 CPUs @ 2.20GHz
- 64 GB RAM
- 2 TB PCIe SSD
- Linux OS (AlmaLinux 9)
Interconnect Fabric
- Mellanox FDR10 InfiniBand fabric – max data rate 40 Gb/s (see the note after this list)
- Dedicated 10 Gb Ethernet for internode communication
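The 40 Gb/s figure follows from the standard FDR10 link parameters (assumed here: 4 lanes at 10.3125 Gb/s signalling per lane with 64b/66b encoding); the sketch below works through the arithmetic.

```python
# Back-of-the-envelope check of the FDR10 data rate quoted above.
# Assumed link parameters (standard for a 4x FDR10 InfiniBand link):
# 4 lanes, 10.3125 Gb/s signalling per lane, 64b/66b encoding.
LANES = 4
LANE_RATE_GBPS = 10.3125       # raw signalling rate per lane
ENCODING_EFFICIENCY = 64 / 66  # 64b/66b line-coding overhead

raw_gbps = LANES * LANE_RATE_GBPS           # 41.25 Gb/s on the wire
data_gbps = raw_gbps * ENCODING_EFFICIENCY  # 40.0 Gb/s usable

print(f"raw: {raw_gbps:.2f} Gb/s, usable data rate: {data_gbps:.1f} Gb/s")
```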
Storage/Filesystem
- Dell PowerVault ME4 series SAN – SSD/HDD hybrid
- 10 Gb iSCSI fabric
- Online expandable capacity