November 09, 2012
An interview with Beowulf pioneer Thomas Sterling on the iconic Beowulf Bash
With each successive year, the SC conference kicks off on Monday with a bigger bang. The Technical Sessions are already in full swing as exhibitors scramble to complete their booths prior to the Gala Opening Monday evening. But the celebration surrounding the opening of a new SC doesn’t end there, as those in the know leave the convention center and head to the biggest open HPC community party of the year: the Beowulf Bash.
Before packing my bags for SC12 in Salt Lake City, I caught up with one of the iconic Beowulf pioneers, Thomas Sterling, to get his views on what makes the Beowulf Bash so special.
“I may not have gone where I intended to go, but I think I have ended up where I needed to be.”
- Douglas Adams, "The Long Dark Tea-Time of the Soul"
Addison Snell: Thomas, the SC12 Beowulf Bash is being held Monday night at the Clark Planetarium, a great location, with a theme that the HPC community will appreciate: The Hitchhiker’s Guide to the Galaxy.
Thomas Sterling: Douglas Adams was just fabulous. I think he would’ve appreciated the ironies of Beowulf clusters, how the inmates had taken over the asylum. There’s no better example of an Improbability Computer than a Beowulf cluster.
Snell: For those who don’t remember their HPC history, remind us what Beowulf was all about.
Sterling: The Beowulf Project was a NASA project started in 1993 to find an option for lower-cost computing, at a time when a gigaflop could cost close to a million dollars. It was a time when workstation farms were being employed for throughput computing with experimental software, and it occurred to us that we could use the same approach with lower-end PC processors.
These commodity clusters were a paradigm shift. UNIX workstation clusters with high-end interconnects didn’t optimize for performance per cost. We went with Linux and low-end Ethernet. Meanwhile open-source software as a paradigm for delivering functionality came along at exactly the right time.
Snell: How did the community around Beowulf start to grow?
Sterling: The concept of community happened more than it was planned. It was clear that to successfully ply this technology we would need adopters, so we did outreach to universities. Jack Dongarra organized a conference at Emory University where we presented the first of a series of tutorials, and it became the highest-subscribed tutorial at SC97. I wrote the book How to Build a Beowulf.
We didn’t anticipate the community we were creating, but people understood that they own their clusters; they are not owned by them. There was a natural attraction to students. It allowed them to try things themselves. Everyone was empowered to be part of the community.
Snell: How did the Beowulf Bash get started?
Sterling: I can take neither credit nor blame for starting the Beowulf Bash. It was first done by Don Becker, Todd Needham, Doug Eadline, and a number of other people as a small party of young geeks. But it instantaneously created its own culture, a party where you could let your hair down and be yourself and not try to impress each other, like you had to at the “good” parties, the vendor parties. This was the party for the other people, the rest of us.
Snell: With the community focus and sense of empowerment, do you think there is a commonality with what we’re seeing with GPU computing today?
Sterling: Yes, I do think so, and for the same reasons. GPUs emerged from purpose-built graphics engines and in some cases deliver a demonstrable benefit in performance per dollar. A lot of people find that exciting and want to try it out. Again it’s the concept of empowerment. I don’t anticipate that the current configurations are the final solutions, but the interest of people in using them is driving us there.
Snell: What do you think the future will bring for the Beowulf Bash?
Sterling: Everybody likes a good bottle of beer. I really hope that companies will continue to sponsor this relatively low-cost event that attracts a different element across the HPC community. This is a different personality that encourages people to be creative. That’s what Beowulf did for HPC.