May 05, 2008
SANTA CLARA, Calif., May 5 -- Tensilica, Inc. and the U.S. Department of Energy's Lawrence Berkeley National Laboratory today announced a collaboration program to explore new design concepts for energy-efficient high-performance scientific computer systems.
The joint effort is focused on novel processor and system architectures using large numbers of small processor cores, connected together with optimized links and tuned to the requirements of highly parallel applications such as climate modeling. These demanding scientific problems require 100 to 1,000 times higher computation throughput than today's high-end computing installations, but conventional systems consume so much electricity, generate so much heat, and demand such complex physical installations that the costs would be prohibitive. This collaboration in application-directed supercomputing aims at making "exascale systems" (up to one quintillion floating point operations per second) feasible and cost-effective.
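The power argument above can be sketched with back-of-the-envelope arithmetic. The baseline figures below (roughly 1 PFLOPS at roughly 3 MW for a 2008-era high-end system) are illustrative assumptions, not numbers from this announcement; the point is only that linearly scaling a conventional design to the exascale target lands in the gigawatt range:

```python
# Back-of-the-envelope: why conventional scaling to exascale is prohibitive.
# Baseline figures are illustrative assumptions, not measured data.
EXAFLOP = 1e18          # one quintillion FLOP/s (the exascale target)
baseline_flops = 1e15   # ~1 PFLOPS: assumed 2008-era high-end system
baseline_power_w = 3e6  # ~3 MW: assumed power draw for that system

scale = EXAFLOP / baseline_flops          # the ~1000x throughput gap cited above
naive_power_w = baseline_power_w * scale  # linear scaling of a conventional design

print(f"throughput gap: {scale:.0f}x")
print(f"naive exascale power: {naive_power_w / 1e9:.0f} GW")
```

A multi-gigawatt power draw is the output of a large power plant, which is why the collaboration targets efficiency at the processor level rather than brute-force scaling.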
The two organizations are well-suited for such a collaboration. Tensilica is the recognized leader in configurable processor technology and has become a leading provider of energy efficient processors for mobile audio and video applications. The Berkeley Lab Computing Sciences organization manages one of the world's leading supercomputing centers and has extensive experience in deploying leading-edge computer architectures to accelerate scientific discovery.
"Our studies show that energy costs make current approaches for supercomputing unsustainable," stated Horst Simon, Associate Laboratory Director, Computing Sciences for Berkeley Lab. "Hardware-software co-design using tiny processor cores, such as those made by Tensilica, holds great promise for systems that reduce power costs and increase practical system scale. Such processors, by their nature, must deliver maximum performance while consuming minimal power -- exactly the challenge facing the high performance computing community. One of the most compute-intensive applications is modeling global climate change, a critical research application and the perfect pilot application for energy-efficient computing optimization."
"Berkeley Lab is a world leader in providing supercomputing resources to support research across a wide range of disciplines, but their experience in climate modeling is especially well-suited for this project," stated Chris Rowen, Tensilica's president and CEO. "If we can better understand the factors influencing climate change -- and do so in a dramatically more energy-efficient way -- then we open the door for other breakthroughs. We are delighted to be able to contribute to this effort, applying Tensilica Xtensa processors and software to help solve a problem of global significance. The same ultra-efficient processor technology that powers cellular phones can now contribute to a breakthrough in energy-efficient scientific computing."
The team will use Tensilica's Xtensa LX extensible processor cores as the basic building blocks in a massively parallel system design. Each processor will dissipate a few hundred milliwatts of power, yet deliver billions of floating point operations per second and be programmable using standard programming languages and tools. This equates to an order-of-magnitude improvement in floating point operations per watt, compared to conventional desktop and server processor chips. The small size and low power of these processors allow tight integration at the chip, board and rack level, and scaling to millions of processors within a power budget of a few megawatts.
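The per-core arithmetic above can be made concrete with a short sketch. The specific figures (250 mW and 2 GFLOPS per core, 0.5 GFLOPS/W for a conventional chip, 4 million cores) are illustrative assumptions chosen only to match the rough magnitudes stated in the text:

```python
# Illustrative energy-efficiency arithmetic for a many-core design.
# All per-core and baseline numbers are assumptions, not Tensilica specs.
core_power_w = 0.25   # "a few hundred milliwatts" per core (assumed 250 mW)
core_flops = 2e9      # "billions of FLOP/s" per core (assumed 2 GFLOPS)
conventional_flops_per_w = 0.5e9  # assumed 2008-era server-chip efficiency

efficiency = core_flops / core_power_w            # FLOP/s per watt
improvement = efficiency / conventional_flops_per_w

n_cores = 4_000_000   # "millions of processors"
total_power_mw = n_cores * core_power_w / 1e6     # aggregate power, megawatts
total_pflops = n_cores * core_flops / 1e15        # aggregate throughput, PFLOPS

print(f"{efficiency / 1e9:.0f} GFLOPS/W, about {improvement:.0f}x conventional")
print(f"{n_cores:,} cores: {total_power_mw:.0f} MW, {total_pflops:.0f} PFLOPS")
```

Under these assumptions the per-core efficiency works out to roughly an order of magnitude better than a conventional chip, and a multi-million-core array stays within the few-megawatt budget the text describes.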
The co-design effort will use automatic generation of processor designs, including simulation models, FPGA-based hardware implementation, and software tools, to enable rapid prototyping and evaluation of processor instruction sets, interfaces, multi-processor communication mechanisms, and application enhancements.
The research effort also will address the challenges of optimizing memory and communication bandwidth to the massive array of processors, distribution of application functions across the array, and development of suitable prototyping and software development methods for large-scale application-optimized systems.
About Lawrence Berkeley National Laboratory
Lawrence Berkeley National Laboratory (Berkeley Lab) has been a leader in science and engineering research for more than 70 years, and holds the distinction of being the oldest of the U.S. Department of Energy's National Laboratories. The Lab manages a number of national user facilities, including the National Energy Research Scientific Computing Center (NERSC), which provides supercomputing resources to 2,900 users at national laboratories and universities. Managed by the University of California, Berkeley Lab conducts unclassified research across a wide range of scientific disciplines with key efforts in fundamental studies of the universe; quantitative biology; nanoscience; new energy systems and environmental solutions; and the use of integrated computing as a tool for discovery. For more information, go to www.lbl.gov.
About Tensilica, Inc.
Tensilica, Inc. is the recognized leader in configurable processor technology and has leveraged that technology to become the leading supplier of licensable controllers and DSP cores for mobile audio and video applications. Tensilica offers the broadest line of controller, CPU, network, and specialty DSP processors on the market today -- including full software toolchain and modeling support -- in both an off-the-shelf format via the Diamond Standard Series cores and with full designer configurability with the Xtensa processor family. The modern design behind all of Tensilica's processor cores provides semiconductor companies and system OEMs with the lowest power, smallest area solutions for high-volume products including mobile phones and other consumer electronics, networking and telecommunications equipment, and computer peripherals. For more information on Tensilica's patented, benchmark-proven processors, visit www.tensilica.com.
Source: Tensilica, Inc.