Computers are so much more than the interface most Web-connected humans interact with on a daily basis. While many of us take for granted the thousands of instructions that must be communicated across a vast array of hardware and software, this is not the case for the computer scientists and engineers working to shave nanoseconds from computing times, people like University of Wisconsin researcher Mark Hill.
As Amdahl Professor of Computer Science at the University of Wisconsin, it’s Professor Hill’s job to identify hidden efficiencies in computer architecture. He studies the way that computers take zeros and ones and transform that binary language into something with a more human bent, like social network interactions and online purchases. To do this, Hill traces the chain reaction from the computational device to the processor, to the network hub, to the cloud, and back again.
Professor Hill’s interesting and important research was the subject of a recent feature piece by prominent science writer Aaron Dubrow.
The opaqueness of computers is primarily a feature, not a bug. “Our computers are very complicated and it’s our job to hide most of this complexity most of the time because if you had to face it all of the time, then you couldn’t get done what you want to get done, whether it was solving a problem or providing entertainment,” explains Hill.
Over the last few decades, it made sense to keep this complexity hidden as pretty much the entire computing industry rode the coattails of Moore’s law. With computing power doubling approximately every 24 months, faster and cheaper systems were a matter of course. As this “law” reaches the limits of practicality from an atomic and financial perspective, computer engineers are essentially forced to start examining all the other computational elements that come into play to identify untapped efficiencies. Waiting for faster processors is no longer a viable growth strategy.
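The scale of the free ride described above is easy to make concrete with a little compound-growth arithmetic. A minimal sketch, using the roughly-24-month doubling period quoted in the article:

```python
# Compound growth under Moore's law: performance doubles every ~2 years.
def moores_law_factor(years, doubling_period_years=2.0):
    """Return the expected speedup factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# A decade of doubling every two years yields 2**5 = 32x from
# hardware alone -- with no software changes required.
print(moores_law_factor(10))  # → 32.0
```

When that exponential flattens, a 32x decade becomes a much smaller multiple, which is why the article frames software and architectural inefficiencies as the remaining source of gains.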
One area that Hill has focused on is the performance of computer tasks. He times how long it takes a typical processor to complete a common task, like serving a Facebook query or performing a web search. He’s looking at both overall speed and how long each individual step takes.
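The idea of breaking a task’s end-to-end time into per-step contributions can be sketched with a simple wall-clock timer. This is an illustrative toy, not Hill’s actual instrumentation, and the stage names are hypothetical stand-ins for the real work of a request:

```python
import time

def timed(label, fn, timings):
    """Run fn(), recording its wall-clock duration under `label`."""
    start = time.perf_counter()
    result = fn()
    timings[label] = time.perf_counter() - start
    return result

# Hypothetical stages of serving a request; a real workload would
# have network, parsing, and database steps here instead.
timings = {}
timed("parse",  lambda: sum(range(100_000)), timings)
timed("lookup", lambda: sorted(range(10_000)), timings)
total = sum(timings.values())
for label, t in timings.items():
    print(f"{label}: {t:.6f}s ({t / total:.0%} of total)")
```

The per-step percentages are what point an engineer at the stage worth optimizing first.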
One of his successes had to do with a rather inefficient process called paging that was implemented when memory was much smaller. Hill’s fix was to use paging selectively by employing a simpler address translation method for certain parts of important applications. The result was that cache misses were reduced to less than 1 percent. A solution like this would allow a user to do more with the same setup, reducing the number of servers they’d need and saving big bucks in the process.
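The intuition behind selectively bypassing paging can be illustrated with a toy contrast between a page-table lookup and a simple base-and-bounds translation. This is a deliberately simplified model of the general idea, not a description of Hill’s actual hardware design, and all names and numbers here are illustrative:

```python
# Toy contrast: paged translation (table lookup per access) versus a
# base-and-bounds "segment" translation for a reserved memory region.
PAGE_SIZE = 4096

def paged_translate(vaddr, page_table):
    """Translate via a flat page table: one lookup per access.
    (A real x86-64 walk can take up to four memory references.)"""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]  # a miss here would mean a page fault
    return frame * PAGE_SIZE + offset

def segment_translate(vaddr, base, limit, phys_base):
    """Translate via base-and-bounds: one compare and one add,
    no table walk at all."""
    if not (base <= vaddr < limit):
        raise MemoryError("segment bounds violation")
    return phys_base + (vaddr - base)

page_table = {0: 7, 1: 3}
print(paged_translate(4100, page_table))          # vpn 1 -> frame 3
print(segment_translate(4100, 0, 8192, 100_000))  # simple offset add
```

Applying the cheap translation only to the memory regions of important applications, while leaving paging in place everywhere else, is the kind of selective fix the paragraph above describes.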
“A small change to the operating system and hardware can bring big benefits,” notes Hill.
Hill espouses a more unified computational approach, and he’s confident that hidden inefficiencies exist in sufficient quantities to offset the Moore’s law slowdown.
“In the last decade, hardware improvements have slowed tremendously and it remains to be seen what’s going to happen,” Hill says. “I think we’re going to wring out a lot of inefficiencies and still get gains. They’re not going to be like the large ones that you’ve seen before, but I hope that they’re sufficient that we can still enable new creations, which is really what this is about.”
The forward-thinking researcher is a proponent of using virtual memory protocols and hardware accelerators like GPUs to boost computational performance. The “generic computer” is last century, according to Hill. “That’s not appropriate anymore,” he says. “You definitely have to consider where that computer sits. Is it in a piece of smart dust? Is it in your cellphone, or in your laptop or in the cloud? There are different constraints.”
Hill, along with dozens of top US computer scientists, has penned a community white paper outlining many of the challenges and paradigm shifts facing computing in the 21st century. These include a transition from the single computer to the network or datacenter, the importance of communication as it relates to big data, and the new energy-first reality, where power and energy are becoming dominant constraints. The paper also describes potentially disruptive technologies coming down the pike. However, with no miracle technologies in hand, computer scientists must do what they can to optimize existing hardware and software.
Read the paper here:
http://www.cra.org/ccc/files/docs/init/21stcenturyarchitecturewhitepaper.pdf