October 14, 2009
PITTSBURGH, Oct. 14 -- Researchers at Carnegie Mellon University and Intel Labs Pittsburgh (ILP) have combined low-power, embedded processors typically used in netbooks with flash memory to create a server architecture that is fast yet far more energy-efficient for data-intensive applications than the systems now used by major Internet services.
An experimental computing cluster based on this so-called Fast Array of Wimpy Nodes (FAWN) architecture was able to handle 10 to 100 times as many queries for the same amount of energy as a conventional, disk-based cluster. The FAWN cluster had 21 nodes, each with a low-cost, low-power off-the-shelf processor and a four-gigabyte CompactFlash card. At peak utilization, the cluster drew less power than a 100-watt light bulb.
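The comparison above boils down to a queries-per-joule metric: how many requests a node can serve per unit of energy. The following sketch illustrates how that metric is computed; all figures are hypothetical placeholders, not measurements from the FAWN paper.

```python
# Illustrative queries-per-joule comparison. All numbers below are
# hypothetical, chosen only to show how the efficiency metric works.

def queries_per_joule(queries_per_sec: float, power_watts: float) -> float:
    """Energy efficiency: sustained query rate divided by power draw.

    Since 1 watt = 1 joule/second, qps / watts = queries per joule.
    """
    return queries_per_sec / power_watts

# A conventional disk-based node: fast CPU, but power-hungry.
disk_node = queries_per_joule(queries_per_sec=1000, power_watts=250)

# A wimpy flash-backed node: slower CPU, but very low power draw.
fawn_node = queries_per_joule(queries_per_sec=800, power_watts=4)

print(f"disk node: {disk_node:.1f} queries/J")   # disk node: 4.0 queries/J
print(f"FAWN node: {fawn_node:.1f} queries/J")   # FAWN node: 200.0 queries/J
print(f"ratio: {fawn_node / disk_node:.0f}x")    # ratio: 50x
```

With these invented figures the wimpy node comes out 50 times more efficient, which falls inside the 10x to 100x range the researchers reported.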
The research team, led by David Andersen, Carnegie Mellon assistant professor of computer science, and Michael Kaminsky, senior research scientist at ILP, received a best paper award for its report on FAWN at the Association for Computing Machinery's annual Symposium on Operating Systems Principles Oct. 12 in Big Sky, Mont.
A next-generation FAWN cluster is being built with nodes that include Intel's Atom processor, which is used in netbooks and other mobile or low-power applications.
Developing energy-efficient server architectures has become a priority for datacenters, where the cost of electricity over a machine's typical service life now equals or surpasses the cost of the machine itself. Datacenters being built today require their own electrical substations, and future datacenters may require as much as 200 megawatts of power.
"FAWN systems can't replace all of the servers in a datacenter, but they work really well for key-value storage systems, which need to access relatively small bits of information quickly," Andersen said. Key-value storage systems are growing in both size and importance, he added, as ever larger social networks and shopping Web sites keep track of customers' shopping carts, thumbnail photos of friends and a slew of message postings.
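Key-value storage systems of the kind Andersen describes expose a deliberately small interface: look up a value by key, store one, delete one. The sketch below is a minimal in-memory illustration of that interface; it is hypothetical and does not reflect FAWN-KV's actual log-structured, flash-backed implementation.

```python
# Minimal in-memory key-value store sketch (hypothetical illustration;
# FAWN-KV's real store is log-structured and backed by flash).

from typing import Optional


class KVStore:
    def __init__(self) -> None:
        self._data: dict[bytes, bytes] = {}

    def put(self, key: bytes, value: bytes) -> None:
        """Store or overwrite the value for a key."""
        self._data[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        """Return the value for a key, or None if absent."""
        return self._data.get(key)

    def delete(self, key: bytes) -> None:
        """Remove a key if present; silently ignore missing keys."""
        self._data.pop(key, None)


# The workloads mentioned above map naturally onto this interface:
# shopping carts, thumbnails, and message postings are all small
# values fetched by key.
store = KVStore()
store.put(b"cart:alice", b"sku123,sku456")
print(store.get(b"cart:alice"))  # b'sku123,sku456'
```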
Flash memory is significantly faster than hard disks and far cheaper than dynamic random access memory (DRAM) chips, while consuming less power than either. Though low-power processors aren't the fastest available, the FAWN architecture can use them efficiently by balancing their performance with input/output bandwidth. In conventional systems, the gap between processor speed and bandwidth has continually grown for decades, resulting in memory bottlenecks that keep fast processors from operating at full capacity even as the processors continue to draw a disproportionate amount of power.
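The balancing argument above can be made concrete: a node's sustainable query rate is capped by whichever resource saturates first, so pairing a fast processor with slow storage wastes most of the processor's capacity. The figures in this sketch are hypothetical, used only to illustrate the reasoning.

```python
# Illustrative node-balancing check (hypothetical figures). Assume each
# query requires roughly one random read from storage, so throughput is
# bounded by the slower of the CPU and the storage device.

def node_qps(cpu_qps: float, storage_iops: float) -> float:
    """Sustainable queries/sec: the bottleneck resource sets the ceiling."""
    return min(cpu_qps, storage_iops)

# Fast CPU + spinning disk: ~150 random reads/s leaves the CPU mostly idle,
# yet the fast CPU keeps drawing power.
print(node_qps(cpu_qps=50_000, storage_iops=150))    # 150.0 at best

# Wimpy CPU + flash: the two rates are closely matched, so little
# capacity (and little power) is wasted on either side.
print(node_qps(cpu_qps=1_500, storage_iops=1_800))   # 1500
```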
"FAWN will probably never be a good option for challenging real-time applications such as high-end gaming," Kaminsky said. "But we've shown it is a cost-effective, energy efficient approach to designing key-value storage systems and we are now working to extend the approach to applications such as large-scale data analysis."
The work was supported in part by gifts from Network Appliance, Google and Intel Corp., and by a grant from the National Science Foundation. In addition to Andersen and Kaminsky, the research team included Ph.D. computer science students Jason Franklin, Amar Phanishayee and Vijay Vasudevan, and graduate student Lawrence Tan.
About Carnegie Mellon
Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the fine arts. More than 11,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. A global university, Carnegie Mellon's main campus in the United States is in Pittsburgh, Pa. It has campuses in California's Silicon Valley and Qatar, and programs in Asia, Australia and Europe. The university is in the midst of a $1 billion fundraising campaign, titled "Inspire Innovation: The Campaign for Carnegie Mellon University," which aims to build its endowment, support faculty, students and innovative research, and enhance the physical campus with equipment and facility improvements.
Source: Carnegie Mellon University