A Dell cluster called Artemis is on track to solve problems important to Australia and the world. Commissioned by the University of Sydney, the new cluster is the university’s first high-performance computer (HPC) service.
Like research institutions everywhere, the University of Sydney is seeking to accommodate increasingly data-centric workflows. This new HPC service, established via a partnership with Dell Australia, was custom-designed to facilitate access to big data for research across a range of disciplines.
“Artemis will enable researchers from diverse fields to perform state-of-the-art computational analysis and improve collaboration between research groups by providing a common set of tools and capabilities with consistent access mechanisms,” said NHMRC Australia Fellow Professor Edward Holmes of the Charles Perkins Centre.
Artemis will be available at no cost to University of Sydney researchers and will support a number of scientific domains, including molecular biology, economics, mechanical engineering and physical oceanography.
One of the primary use cases for the 1,512-core Dell cluster is unlocking the secrets of infectious diseases, like the Ebola virus. Professor Holmes is part of a team that is sampling and sequencing genomic data to track the spread of Ebola in West Africa. High-performance computers can cut processing and analysis times by an order of magnitude, which, according to Professor Holmes, enables real-time epidemic tracking where previously Ebola researchers were limited to retrospective studies.
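The speedup matters because genomic surveillance is largely embarrassingly parallel: each sample can be processed independently, so wall-clock time falls roughly in proportion to the cores applied. The Python sketch below illustrates that pattern on a single node; it is a generic illustration, not the team's actual pipeline, and analyze_sample is a hypothetical stand-in for a real step such as read alignment or phylogenetic placement.

```python
from multiprocessing import Pool

def analyze_sample(sample_id: str) -> str:
    # Hypothetical stand-in for a real per-sample analysis step,
    # e.g. aligning reads or placing a genome on a phylogenetic tree.
    return f"{sample_id}: processed"

if __name__ == "__main__":
    samples = [f"EBOV-{i:04d}" for i in range(1, 101)]
    # On a cluster node, the pool size would match the cores allocated
    # to the job (e.g. 24 on one of Artemis's two-socket, 12-core nodes).
    with Pool(processes=24) as pool:
        for result in pool.imap_unordered(analyze_sample, samples):
            print(result)
```

Scaling the same pattern across many nodes, whether through a job scheduler or MPI, is what turns a weeks-long retrospective analysis into something closer to real time.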
This real-time capability facilitates targeted emergency response strategies, which are integral to getting resources where they are most needed, helping the affected population and ultimately blocking further transmission.
Artemis is a fully managed service, housed in a Dell data center and connected to the university network over 10 Gigabit Ethernet. The system comprises 56 standard compute nodes, two high-memory compute nodes offering 10 terabytes of fast DDR4 memory, and five GPU compute nodes, each with two NVIDIA Tesla K40 graphics processors. Every node contains two 12-core Intel Xeon E5-2680 v3 processors, for a total of 1,512 Haswell cores. The standard and high-memory nodes are based on Dell PowerEdge R630 servers, while the GPU nodes are based on Dell PowerEdge R730 boxes. A 56 Gb/s FDR non-blocking InfiniBand fabric connects all nodes and the high-performance Lustre file system.
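For readers tallying the configuration, the headline core count follows directly from the node inventory above. This short Python check simply reproduces that arithmetic using the figures quoted in this article:

```python
# Node inventory as described for Artemis.
nodes = {
    "standard (PowerEdge R630)": 56,
    "high-memory (PowerEdge R630)": 2,
    "GPU (PowerEdge R730)": 5,
}
cores_per_node = 2 * 12  # two 12-core Intel Xeon E5-2680 v3 processors per node

total_nodes = sum(nodes.values())
total_cores = total_nodes * cores_per_node
print(f"{total_nodes} nodes x {cores_per_node} cores/node = {total_cores} cores")
# -> 63 nodes x 24 cores/node = 1512 cores
```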