November 12, 2009
WEST LAFAYETTE, Ind., Nov. 11 -- Facebook for scientists -- but built to facilitate serious research rather than socializing -- and an award-winning method for putting idle computers to work on scientific breakthroughs are Purdue-developed technologies in the spotlight at SC09, the world's largest high-performance computing conference.
Purdue University is highlighting the HUBzero and DiaGrid technologies at the university's booth at SC09, which opens Monday (Nov. 16) in Portland, Ore., and ends five days later.
HUBzero is a soon-to-be open source software platform developed by Purdue for deploying and applying computational research tools, visualizing and analyzing results interactively and publishing them, all through a familiar Web browser. Built-in social networking features akin to Facebook create communities of researchers and educators in science, engineering, medicine and almost any field or subject matter.
DiaGrid works by pooling computers over the Purdue campus network and off campus via the Internet and fast research networks. Whenever machines in the pool are idle, such as at night or when their owners are at lunch, the system sends work to them. Campus Technology Magazine selected DiaGrid for a 2009 international Campus Technology Innovators Award.
Purdue has created an automated system to link the computers of SC09 participants to the pool during the conference. The Purdue booth includes a scoreboard to keep track of whose machines are running the most jobs.
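The cycle-scavenging idea behind DiaGrid -- send queued work only to machines that are currently idle, and tally how many jobs each participant's machines ran -- can be illustrated with a minimal sketch. The names here (`Machine`, `dispatch`, the scoreboard `Counter`) are hypothetical and for illustration only; DiaGrid itself runs on production grid middleware, not code like this.

```python
# Toy sketch of cycle scavenging: dispatch queued jobs to idle machines
# and keep a per-owner "scoreboard" of how many jobs each owner's
# machines ran. All names here are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Machine:
    owner: str   # e.g. the SC09 participant who registered the machine
    idle: bool   # True when the machine has spare cycles (night, lunchtime)

def dispatch(jobs, machines):
    """Assign each queued job to the next idle machine; return a
    Counter mapping owner -> number of jobs their machines ran.
    Jobs beyond the number of idle machines simply stay queued."""
    scoreboard = Counter()
    idle_pool = [m for m in machines if m.idle]
    for job, machine in zip(jobs, idle_pool):
        # A real system would ship the job's executable and inputs here.
        scoreboard[machine.owner] += 1
    return scoreboard

pool = [Machine("alice", idle=True),
        Machine("bob", idle=False),
        Machine("carol", idle=True)]
print(dispatch(["job1", "job2", "job3"], pool))
# Only the two idle machines receive work; "job3" stays queued.
```

In a real pool the idle check is continuous and jobs are preempted when an owner returns, but the scoreboard logic is essentially this tally.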
The booth is designed to promote Purdue; Information Technology at Purdue (ITaP), the university's central information technology organization; and the Rosen Center for Advanced Computing, ITaP's research and discovery arm. ITaP technologists developed HUBzero and DiaGrid.
"DiaGrid and HUBzero are model technologies for enabling research that Purdue is making available to the world," says John Campbell, associate vice president in charge of research computing for ITaP, who heads the Rosen Center. "As the premier conference for research computing, SC09 is a prime place to showcase these technologies."
Purdue's booth also will provide academic information to potential Purdue students and information to job seekers about positions with Purdue, ITaP and the Rosen Center. Nearly 10,000 people attended the conference in 2008.
Purdue has become a recognized leader in cyberinfrastructure with the development of HUBzero, which powers nanoHUB.org and many other Web-based "hubs" for research collaboration, says Michael McLennan, senior research scientist and hub technology architect at Purdue. NanoHUB is an international resource for nanotechnology theory, simulation and education with tens of thousands of users.
"Like no other platform, HUBzero can host interactive simulation tools. So, users aren't just reading about research, they can experience it," McLennan says. "HUBzero allows users to work together as they interact with content."
Other hubs link researchers transforming laboratory discoveries into new medical treatments, and Purdue is now working in a consortium with Indiana and Clemson universities and the University of Wisconsin to advance the technology even further.
A hub will be at the center of the Network for Earthquake Engineering Simulation (NEES), a $105 million National Science Foundation program announced in September, which is led by Purdue. Purdue electrical and computer engineering Professor Rudolf Eigenmann, co-principal investigator of NEES, will give a workshop titled "Cyberinfrastructure for Earthquake Engineering" at the Purdue booth.
McLennan will host two workshops on HUBzero and one about nanoHUB during the conference. Purdue scientist Mathieu Luisier will offer a workshop on using massive supercomputers to simulate nanoscale electronic devices for the next generation of electronics, a central focus of nanoHUB.
DiaGrid includes computers in student computer labs, offices, server rooms and supercomputing clusters and is the first multi-campus collaboration of its kind. Purdue's partners in DiaGrid are Indiana University, Indiana State University, the universities of Notre Dame, Louisville and Wisconsin, Purdue's Calumet and North Central campuses, and Indiana University-Purdue University Fort Wayne.
Together, they now make nearly 30,000 processors available for research jobs ranging from understanding the Solar System's formation to imaging the structure of viruses at near-atomic resolutions in an effort to develop new ways of battling viral illnesses, from swine flu and the common cold to West Nile virus and AIDS.
"The sheer size and ingenuity of the initiative, as well as the diversity of computing resources represented in the grid, really set the project apart," Geoffrey Fletcher, editorial director of Campus Technology, said in announcing the Campus Technology Innovators Award for DiaGrid.
Source: Purdue University