Hardware elements of the BSBC high performance compute cluster include load-sharing head nodes, each configured with two 2.5 GHz quad-core Intel Xeon E5420 CPUs, 16 GB of RAM, and mirrored 250 GB hard drives for system redundancy. A dedicated Ethernet “heartbeat” interface between the head nodes provides high-availability failover for services, applications, and mounted NFS and Lustre filesystems.
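The high-availability software stack used on the head nodes is not named here; as a minimal sketch of the heartbeat concept, the following Python example pings the peer head node over a dedicated interface and flags a failover condition after several consecutive misses. The interface name, peer address, and thresholds are hypothetical placeholders, not the cluster's actual configuration.

```python
import subprocess
import time

PEER_HEARTBEAT_IP = "192.168.100.2"   # hypothetical address of the peer head node
HEARTBEAT_INTERFACE = "eth2"          # hypothetical dedicated heartbeat interface
CHECK_INTERVAL_S = 2                  # seconds between liveness checks
MISSES_BEFORE_FAILOVER = 5            # consecutive failures that trigger failover


def peer_alive() -> bool:
    """Ping the peer head node once over the dedicated heartbeat interface."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", "-I", HEARTBEAT_INTERFACE, PEER_HEARTBEAT_IP],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def monitor() -> None:
    """Count consecutive missed heartbeats and report when failover should begin."""
    misses = 0
    while True:
        misses = 0 if peer_alive() else misses + 1
        if misses >= MISSES_BEFORE_FAILOVER:
            # A real HA stack would now take over the peer's services,
            # virtual IPs, and mounted NFS/Lustre filesystems.
            print("Peer head node unresponsive; initiating failover")
            break
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    monitor()
```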
The 89 heterogeneous compute nodes provide 752 CPU cores and have a variety of hard drive, RAM, and processor configurations, as shown in the table below. The BSBC cluster incorporates forty Nvidia Tesla S1070 units, each containing four GPUs with a combined 960 streaming processor cores and 16 GB of RAM. Each Tesla unit is shared between two compute nodes through dual on-board PCIe Gen2 x16 interfaces, giving each attached compute node two GPUs (480 streaming processor cores). In total, the cluster holds 160 GPUs and 38,400 streaming processor cores, delivering up to 160 teraflops of performance.
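The GPU totals follow directly from the per-unit figures (960 cores across four GPUs implies 240 streaming processor cores per GPU); the short calculation below reproduces them.

```python
# Reproduce the cluster-wide GPU totals from the per-unit Tesla S1070 figures.
tesla_units = 40
gpus_per_unit = 4
cores_per_unit = 960                      # 240 streaming processor cores per GPU
ram_per_unit_gb = 16

total_gpus = tesla_units * gpus_per_unit       # 160 GPUs
total_cores = tesla_units * cores_per_unit     # 38,400 streaming processor cores
gpus_per_compute_node = gpus_per_unit // 2     # each unit is shared by two nodes
cores_per_compute_node = cores_per_unit // 2   # 480 cores per attached node

print(total_gpus, total_cores, gpus_per_compute_node, cores_per_compute_node)
```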
The latest addition to the BSBC cluster is a “super workstation” node configured for running Amber 12, which includes a GPU-accelerated version of PMEMD. This node is configured with four PCIe Gen3 slots, each populated with an Nvidia GTX Titan GPU, allowing four simultaneous, independent Amber simulations.
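As an illustration of how the four GPUs can be used independently, the sketch below launches one pmemd.cuda process per device, restricting each process to a single GPU via CUDA_VISIBLE_DEVICES. The run directories and input file names are hypothetical; the command-line options shown are the conventional Amber ones, but the exact invocation depends on the local installation and workflow.

```python
import os
import subprocess

# Launch one independent Amber (pmemd.cuda) simulation per GTX Titan GPU.
# Each run lives in its own directory (run0 ... run3, hypothetical names)
# and is pinned to a single device through CUDA_VISIBLE_DEVICES.
processes = []
for gpu_id in range(4):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    proc = subprocess.Popen(
        ["pmemd.cuda", "-O",
         "-i", "md.in", "-p", "prmtop", "-c", "inpcrd",
         "-o", "md.out", "-r", "restrt", "-x", "mdcrd"],
        cwd=f"run{gpu_id}",
        env=env,
    )
    processes.append(proc)

# Wait for all four simulations to finish.
for proc in processes:
    proc.wait()
```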
Twenty-four hot-swappable 1-TB hard drives are configured in a RAID 5 array to provide a 20-TB storage subsystem.
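For reference, RAID 5 dedicates one drive's worth of capacity to parity, so an N-drive array yields (N − 1) drives of usable space before hot spares and filesystem overhead. The sketch below applies that formula; the spare count is an assumption, since the text does not specify how the 24 drives map onto the quoted 20 TB.

```python
def raid5_usable_tb(total_drives: int, drive_tb: float, hot_spares: int = 0) -> float:
    """Usable RAID 5 capacity: one drive's worth of parity, minus any hot spares."""
    data_drives = total_drives - hot_spares - 1   # one drive equivalent lost to parity
    return data_drives * drive_tb

# 24 x 1 TB drives with no hot spares would give 23 TB of raw RAID 5 capacity;
# the quoted 20-TB figure presumably also reflects spares and/or formatting overhead.
print(raid5_usable_tb(24, 1.0))
```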
A SilverStorm 9080 96-port InfiniBand switch communicates with QLogic or Mellanox InfiniBand cards in each node. This high-speed interconnect carries computational data between compute nodes and head nodes and provides an aggregate bandwidth of 20 gigabits per second.
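The 20 Gb/s figure matches a 4X DDR InfiniBand link (four lanes at 5 Gb/s signaling each); assuming 8b/10b line encoding, the usable data rate per link is somewhat lower, as the short calculation below shows. Whether every link in this fabric runs at DDR is an assumption here.

```python
# Per-link bandwidth of a 4X DDR InfiniBand connection (assumed for this fabric).
lanes = 4
signaling_gbps_per_lane = 5.0            # DDR signaling rate per lane
encoding_efficiency = 8 / 10             # 8b/10b line encoding overhead

signaling_gbps = lanes * signaling_gbps_per_lane   # 20 Gb/s, as quoted
data_gbps = signaling_gbps * encoding_efficiency   # 16 Gb/s usable per link

print(signaling_gbps, data_gbps)
```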
Four 48-port gigabit Ethernet switches provide a communication interface among all nodes and can also carry computational data between the compute nodes and the head nodes. A fifth switch provides 10/100 Mbps ports for IPMI node management.
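As an example of how the management network can be used, the sketch below queries node power status over IPMI with the ipmitool utility. The BMC hostnames and credentials are hypothetical placeholders, and the site's actual management tooling may differ.

```python
import subprocess

# Query chassis power status of a few compute nodes over the IPMI management network.
# Hostnames and credentials below are hypothetical placeholders.
NODES = ["node01-ipmi", "node02-ipmi", "node03-ipmi"]
IPMI_USER = "admin"
IPMI_PASSWORD = "changeme"

for node in NODES:
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", node,
         "-U", IPMI_USER, "-P", IPMI_PASSWORD,
         "chassis", "power", "status"],
        capture_output=True,
        text=True,
    )
    print(node, result.stdout.strip() or result.stderr.strip())
```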