November 08, 2011
WEST LAFAYETTE, IN, Nov. 8 -- For Tyler Reid, a junior in computer science from Zionsville, being a member of Purdue’s student supercomputing team means a chance to get his hands on some of the latest hardware, an opportunity too good to pass up.
The six-member team, which built its own supercomputer this semester, will be competing Nov. 14-16 in the 2011 Cluster Challenge, the student competition at SC11, the world’s largest supercomputing conference. The conference is being held in Seattle Nov. 12-18. ITaP is sponsoring the Cluster Challenge team with Intel.
“I love to solve problems on the fly,” says Reid, who’s on Purdue’s Cluster Challenge team for the first time. “I also am looking forward to competing against teams of students from all over the world.”
The Purdue team was one of eight qualifiers for the 2011 competition and one of just four from the U.S. The team will compete against teams from China, Russia and other countries. Purdue's is the lone team from the Big Ten.
In addition to Reid, the Cluster Challenge team members are Alex Bartol, a senior in computer science from Fort Wayne; John Blaas, a senior in computer and information technology from Lafayette; Joad Fattah, a junior in computer science from Carmel; Michael Heffernan, a senior in computer science from Kokomo; and Andrew Huff, a junior in computer science from Cary, N.C.
“This year's team has a nice mix of experience, ingenuity, and skills, with three of the members returning from last year's competition,” says Mike Baldwin, a Purdue atmospheric scientist serving as faculty advisor.
Intel is Purdue’s partner in the Cluster Challenge. With 160 of the company’s processors inside, roughly a hundred times more than a typical personal computer, the 2011 entry is akin to a miniature version of the Hansen cluster supercomputer Purdue installed over the summer, and of the three other clusters ITaP has built in partnership with Purdue faculty since 2008. Researchers in earth and atmospheric sciences, chemistry, physics, computer science, aeronautics and astronautics, electrical and computer engineering and materials engineering, among other fields, use Hansen.
Likewise, the Cluster Challenge team — limited to undergraduates — has to prepare its machine to run an assigned selection of real research software crunching voluminous sets of sample data as quickly and efficiently as possible. The 2011 applications are used for studying the actions of chemical molecules, the colliding and merging of galaxies, the basic workings of biological life and the motion of the oceans. Most of this year’s Purdue team helped build Hansen as student workers at the Rosen Center for Advanced Computing, ITaP’s research computing unit.
“We have fewer nodes, more processing power per node, and we have solid state drives for each node,” Fattah said of this year’s Cluster Challenge entry. “So overall, everything is more power efficient and faster, with the bottlenecks in particular made both fewer and faster.”
The students share the load of monitoring and adjusting their supercomputer around the clock as they work to process as much data as possible while staying under a 26-amp power limit, a nod to the increasingly problematic energy demands of high-performance computing systems.
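The article doesn't describe the team's actual tooling, but a check against a fixed amperage cap like this is straightforward to script. The sketch below is purely illustrative: the function name, the sample readings, and the idea of polling a metered power distribution unit are all assumptions, not details from the competition.

```python
# Hypothetical sketch of a power-cap check; names and sample data are
# illustrative, not the Purdue team's actual monitoring setup.

AMP_LIMIT = 26.0  # the competition's power cap, per the article


def over_limit(readings, limit=AMP_LIMIT):
    """Return the (timestamp, amps) samples that exceed the cap."""
    return [(t, amps) for t, amps in readings if amps > limit]


# Simulated samples a metered PDU might report: (seconds elapsed, amps drawn)
samples = [(0, 24.1), (60, 25.8), (120, 26.4), (180, 23.9)]

for t, amps in over_limit(samples):
    print(f"t={t}s: {amps} A exceeds the {AMP_LIMIT} A cap")
```

In practice a team would feed readings like these into an alerting loop so they could throttle jobs or clock speeds before a violation cost them points.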
The Purdue team began working on its machine even before the semester started and is tailoring the research software to run to best advantage on the cluster’s hardware.
“We are all much more familiar with the applications compared to last year,” Bartol says.
As part of their involvement in the Cluster Challenge, the students on the team are in a high-performance computing class taught by Baldwin, an earth and atmospheric sciences professor. But they spend hours outside class getting ready and have to juggle other classes, homework and exams to attend SC11 for the competition.
“It’s been a real eye-opening experience as far as how hard it is to get certain applications to run,” Blaas says. “But it’s also been pretty fun, actually going from the ground up.”
“I am graduating in December so being able to get this chance to delve deeper into high-performance computing has been very beneficial in developing skills,” Blaas adds. “I would like to stick to an academic computing environment, but I can see my skills being applicable to a lot of the bigger companies that are now offering cloud computing as a service.”
Like Bartol and Fattah, Heffernan is participating in his second Cluster Challenge. All three were team members in 2010.
“I decided to join again because not only did I have a great experience last year, I also learned a lot,” Heffernan says. “Participating in the Cluster Challenge has exposed me to areas of computer science that aren't taught in school. This makes me a more dynamic and well-rounded individual and potential employee.”
Huff, who hopes to pursue a doctorate in computer science, echoed those sentiments.
“Designing and building a small cluster sounded like an interesting endeavor,” Huff says. “It’s allowed me to dive into areas of high-performance computing that I normally don't get to touch so it's a great learning opportunity.”
Source: Purdue University