This week HP Labs distinguished technologist Parthasarathy Ranganathan told Stacey Higginbotham that we are working our way from the information age to the insight age. For this shift to happen, however, computing architectures will need to keep pace with analytics, handling storage and massive processing far more efficiently.
Processing, storing and analyzing vast amounts of data is getting cheaper, which is certainly a good thing, since technology delivers an ever-expanding assortment of new devices and instruments. The problem is that simply throwing more processing power at big data problems isn’t sustainable: it entails a race to pack more transistors onto already energy-hungry chips.
Since the pace of data gathering is outpacing processing capabilities, some are looking to companies like Intel with its 3-D transistor advancement. As Higginbotham notes, however, while this “is cool, it only gets us so far in cramming more transistors on a chip and reducing the energy level needed. For example, a 22 nanometer chip using the 3-D transistor structure consumes about 50 percent less energy than the current generation Intel chip, but less than an Intel chip using the older architecture would at 22 nanometers…And when we’re talking about adding a billion more people to the web, or transitioning to the next generation of supercomputing, a 50 percent reduction in energy consumption on the CPU is only going to get us so far.”
With that in mind, she notes the DoD’s estimate that powering an exascale supercomputer would require not one but two complete power plants. She also slips in the aside that “this is why the folks at ARM think they have an opportunity and why the use of GPUs in high performance computing is on the rise.”