The San Diego Supercomputer Center makes high performance computing resources available to researchers via a “condo cluster” model.
Many homebuyers have found that the most affordable path to homeownership leads to a condominium, in which the purchaser buys a piece of a much larger building. This same model is in play today in the high performance computing centers at many universities.
Under this “condo cluster” model, faculty researchers buy a piece of a much larger HPC system. In a common scenario, researchers use equipment purchase funds from grants or other funding sources to buy compute nodes that are added to the cluster. This model gives the buyers access to all the goodness of a professionally managed HPC cluster for a price that is far less than they would pay if they built their own systems in their academic departments.
A case in point: The San Diego Supercomputer Center operates a condo cluster to serve the computational science needs of faculty and students on the University of California San Diego campus. This system, known as the Triton Shared Computing Cluster, is UC San Diego’s primary HPC resource for research faculty.
The condo cluster model works well when researchers have contract, grant, or faculty startup funds that they can use to buy equipment, according to Ron Hawkins, program manager for the Triton Shared Computing Cluster. The condo approach allows researchers to contribute a relatively small fraction of the resources in the system and, in return, gain access to a much larger pool of resources.
“Under the condo model, researchers get access to the cluster proportional to the number of resources they contribute,” Hawkins says. “But the beauty is, they could buy one compute node and then be able to run jobs that take 10 compute nodes or 20 compute nodes. So they get access to a much larger resource than they could afford just for their lab or their group.”
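The proportional-access-with-burst arrangement Hawkins describes can be sketched in a few lines. This is illustrative only: production condo clusters delegate this logic to a batch scheduler (for example, Slurm's fair-share factor), and the numbers and names below are hypothetical.

```python
# Minimal sketch of condo-style fair-share accounting (illustrative only;
# real condo clusters implement this via their batch scheduler).

def fair_share_priority(contributed_nodes, total_nodes,
                        recent_usage, window_usage):
    """Return a queue-priority adjustment for one research group.

    A group's target share of the cluster is proportional to the nodes it
    contributed. The result is positive when the group has recently used
    less than its share, so an occasional large "burst" job is welcome,
    while sustained over-use pushes the group down the queue.
    """
    target_share = contributed_nodes / total_nodes   # e.g., 1 node of 400
    actual_share = recent_usage / window_usage       # recent node-hours used
    return target_share - actual_share

# A lab that bought 1 of 400 nodes and has been idle gets a small boost,
# so its 20-node burst job can still be scheduled.
idle_lab = fair_share_priority(1, 400, 0.0, 1000.0)    # 0.0025

# A lab that bought 10 nodes but consumed 100 of the last 1000 node-hours
# (10% of the cluster on a 2.5% stake) is de-prioritized.
busy_lab = fair_share_priority(10, 400, 100.0, 1000.0)  # -0.075
```

The key design point is that contribution caps long-run average usage, not peak job size, which is what lets a one-node buyer run a 10- or 20-node job.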
The Triton Shared Computing Cluster has about 400 compute nodes based on the x86 processor architecture developed by Intel and about 300 accelerators. The system, launched in 2013, is a highly heterogeneous cluster that has grown organically over time as researchers have bought additional nodes for the system.
In 2017, after evaluating proposals from multiple technology vendors, SDSC moved to standardize the system on Dell EMC PowerEdge servers with Intel® Xeon® processors. Since then, the standard server components in the system have been the Dell EMC PowerEdge C6400 four-node compute chassis and the Dell EMC PowerEdge R740 server, which is used for one- and two-node requirements.
The Triton Shared Computing Cluster serves as the go-to HPC resource for more than 35 labs or groups on the UC San Diego campus. That equates to hundreds of system users running very diverse workloads.
In the realm of the hard sciences, the system supports applications for genomics, biomedical research, engineering, computational chemistry, biology, geophysics, earthquake simulations, climate research and more.
“Our science users run the gamut, from biomedical research looking at causes and treatments for pediatric brain disease, to causes and treatments for neurological disease in aging brains, to new materials for lithium ion rechargeable batteries, to chemistry research in protein structures,” Hawkins says.
The Triton Shared Computing Cluster also runs data-intensive applications used by economists, political scientists, business faculty members and others. Many of these researchers are now using advanced data analytics tools and machine and deep learning techniques to analyze large datasets.
“The use of computational methods is broadening into virtually every scientific domain now,” Hawkins notes. “And that’s partly driven by the big data phenomenon and, in general, the eagerness to apply computational methods, including machine learning and neural networks, to all types of research.”
And this brings us to another important benefit of the condo cluster model: the democratization of HPC. By making the power of an HPC cluster available for the price of a server node or two, this model clears a path to HPC for a wide range of users, including newcomers to computational science.
Just bring your nodes and your data — and start running your workloads.
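In practice, "running your workloads" on a shared cluster like this usually means submitting a batch job to a scheduler. The fragment below is a hypothetical Slurm-style job script, not TSCC's actual configuration; the partition and account names are placeholders, and the application and input file are invented for illustration.

```shell
#!/bin/bash
#SBATCH --job-name=condo-demo      # name shown in the job queue
#SBATCH --nodes=10                 # burst well beyond a one-node contribution
#SBATCH --ntasks-per-node=2        # MPI ranks per node
#SBATCH --time=02:00:00            # wall-clock limit
#SBATCH --partition=condo          # hypothetical condo partition name
#SBATCH --account=mylab            # hypothetical lab allocation account

# Launch the application across all allocated nodes.
srun ./my_simulation input.dat
```

The researcher's contribution determines queue priority over time, but as the script shows, a single submission can still request many nodes at once.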
To learn more
For a look at the operational details for the Triton Shared Computing Cluster, visit the San Diego Supercomputer Center’s Triton program site.