One Group’s Answer to Transistors Behaving Badly

By Michael Feldman

May 11, 2010

Over the last 50 years, the semiconductor business has enjoyed what is perhaps the most thrilling ride of any industry ever conceived. Today semiconductors are a $250 billion business, and the electronics they make possible account for a far larger share of the world’s GDP. At the foundation of its success is Moore’s Law, the chipmaker’s mantra that promises better, faster and cheaper transistors every 18 to 24 months. But the laws of physics are conspiring to bring this ride to an end.

The problems are well known. CMOS transistors are becoming increasingly difficult to manufacture at nanometer scale. And even as technologies are perfected to do so, the materials themselves are becoming unsuitable for such small geometries. At 22 nm, the Intel process node slated for 2011, the gate oxide will be only 4 to 5 atoms thick and the gates themselves just 42 atoms across. Manufacturing these devices in reasonable volumes and within reasonable power envelopes is going to be a challenge.

In fact, the analyst team at iSuppli has predicted that manufacturing sub-20 nm devices will not be economically feasible: the cost of the fabs could not be recouped by the volume of chips produced at those process nodes. Thus, they concluded, Moore’s Law would be repealed in about five years.

Most of the efforts to address the problem of shrinking transistor geometries have focused on making the devices behave more precisely, using technologies like X-ray lithography and hafnium insulators, to name just two. But what if, instead of trying to make the transistors better, we deliberately made them worse?

Although it sounds counterintuitive, developing processors that are naturally error-prone is exactly what one team of researchers from the University of Illinois and the University of California, San Diego has set out to do. Their approach, called stochastic processors, is to under-design the hardware so that it is allowed to behave non-deterministically under both stressful and nominal conditions. Error tolerance can then be provided by either the hardware or the software.

The rationale is that by relaxing the design and manufacturing constraints, it will be much simpler and much cheaper to produce such processors in volume. And because voltage scaling and clock frequency restrictions are eased, significant power savings and performance increases can be realized.

The stochastic model would represent a significant departure from the way semiconductor devices are designed today. Even though processors have evolved significantly over the decades — scalar to superscalar, single-core to multicore, etc. — the basic assumption has always been that the hardware must behave flawlessly. “It’s the contract that the hardware provides to the software today,” says Rakesh Kumar, a computer scientist at the University of Illinois, Urbana-Champaign, who is part of the Stochastic Processor Research group there. The research is being funded by Intel, DARPA, the NSF, and the GigaScale Systems Research Center (GSRC), a consortium of academic, government and industry organizations devoted to next-generation hardware and software.

The idea behind stochastic processors is relatively simple: Build a chip that computes correctly, say, 99 percent of the time. Such a device is specifically designed to let errors occur under both worst-case and nominal conditions. The advantage of this model is that, compared to a 100 percent error-free processor, a stochastic implementation requires a lot less manufacturing precision and takes a lot less power to run.
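As a rough illustration of that relaxed contract (a toy model of ours, not the group’s actual design), the sketch below treats an under-designed adder as a black box that silently corrupts its result with 1 percent probability, which is all a “99 percent correct” processor promises:

```python
import random

P_ERR = 0.01  # hypothetical error rate of an under-designed adder

def stochastic_add(a, b):
    """32-bit add that silently corrupts its result 1% of the time."""
    result = (a + b) & 0xFFFFFFFF
    if random.random() < P_ERR:
        result ^= 1 << random.randrange(32)  # flip one random result bit
    return result

# Measure how often the device breaks the traditional hardware contract.
trials = 100_000
bad = sum(stochastic_add(2, 3) != 5 for _ in range(trials))
print(f"observed error rate: {bad / trials:.4f}")  # ~0.0100
```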

Kumar’s stochastic research group has designed a version of the Niagara processor (an open source processor design developed by Sun Microsystems) that allows for a 1 to 4 percent error rate. Based on circuit-level simulation with CAD design tools, the researchers determined they could save between 25 and 40 percent on power compared to the default (deterministic) design. That might seem like a lot, but it points to how much of a traditional processor design is devoted to keeping the transistors from throwing off errors.

It also explains why multicore designs introduce another level of challenges for chipmakers. For example, if two of the cores on a quad-core processor can run (flawlessly) at 2.0 GHz, one can run at 1.5 GHz, and the last core can only run error-free at 1.0 GHz, the chip has to be binned at 1.0 GHz. That’s money down the drain as far as the chipmaker is concerned. Ideally, they would like to ship a 2.0 GHz product and use some sort of scheme to compensate for the variability in the other two cores. A stochastic design would make this possible.
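To put rough numbers on that binning example (the retry penalties below are assumptions of ours, not measured figures), a stochastic design that runs every core at the target frequency can come out well ahead of a chip binned at its slowest core:

```python
core_fmax = [2.0, 2.0, 1.5, 1.0]   # GHz at which each core runs error-free

# Traditional contract: the whole chip is binned at the slowest core.
binned = min(core_fmax)
print(f"binned throughput: {4 * binned:.1f} core-GHz")          # 4.0

# Stochastic contract: run all cores at 2.0 GHz and assume the slower
# cores lose some fraction of cycles to error detection and re-execution.
target = 2.0
retry_overhead = {2.0: 0.0, 1.5: 0.05, 1.0: 0.15}  # assumed penalties
effective = sum(target * (1 - retry_overhead[f]) for f in core_fmax)
print(f"stochastic throughput: {effective:.1f} core-GHz")       # 7.6
```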

Of course, compensating for that variability is the tricky part. Kumar says error tolerance can be accomplished in hardware or in software. Hardware correction would be the most obvious and, from the programmer’s perspective, the most palatable way to ensure correct program execution. But error tolerance in software provides more flexibility.
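One classic way software can shoulder that burden is algorithm-based fault tolerance in the style of Huang and Abraham; the sketch below (our illustration, not the group’s mechanism) guards a matrix-vector product with a checksum and re-executes the kernel only when the check fails:

```python
def checked_matvec(A, x, matvec):
    """Run the (possibly error-prone) kernel matvec(A, x) until a
    checksum test passes, then return the result."""
    # Column sums of A: by linearity, dot(colsums, x) must equal sum(A @ x).
    colsums = [sum(col) for col in zip(*A)]
    while True:
        y = matvec(A, x)
        expected = sum(c * xi for c, xi in zip(colsums, x))
        if abs(sum(y) - expected) <= 1e-9 * max(1.0, abs(expected)):
            return y  # checksum agrees; accept the result
        # Checksum mismatch: a fault slipped in somewhere, so recompute.

def plain_matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

print(checked_matvec([[1, 2], [3, 4]], [1.0, 1.0], plain_matvec))  # [3.0, 7.0]
```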

“Our vision is that all the errors that are produced get tolerated by the software,” says Kumar. Part of the group’s research involves how to write application software in a way that takes a non-deterministic processor into account. Kumar believes this shift in thinking is inevitable. Because the hardware variability problem is going to keep getting worse as process geometries shrink, it will eventually make more sense for the programmer to code for non-determinism rather than write software for the lowest-common-denominator hardware. On balance, Kumar believes the ideal would be to employ hardware correction only when it is too onerous to compensate for the errors in software.
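What “coding for non-determinism” might look like in practice (our example with an invented fault model, not the group’s software): a Jacobi solver keeps converging even when individual updates are occasionally corrupted, because each subsequent sweep pulls the iterate back toward the fixed point. A final clean or checked sweep then settles the answer:

```python
import random

def sweep(A, b, x, p_err):
    """One Jacobi sweep; each update is corrupted with probability p_err."""
    n = len(b)
    out = []
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        xi = (b[i] - s) / A[i][i]
        if random.random() < p_err:
            xi += random.uniform(-0.5, 0.5)  # model a silent arithmetic error
        out.append(xi)
    return out

A = [[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]]
b = [6.0, 6.0, 6.0]                  # exact solution: x = (1, 1, 1)
x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = sweep(A, b, x, p_err=0.02)   # noisy sweeps on stochastic hardware
x = sweep(A, b, x, p_err=0.0)        # one reliable sweep to settle the result
print(x)                             # close to [1.0, 1.0, 1.0]
```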

HPC applications might be especially at home on stochastic processors since many of these codes are fundamentally optimization problems. In other words, they are noise tolerant to a great extent, relying on probability distributions rather than a single correct computation. Monte Carlo methods are just one example of a class of algorithms used in HPC that rely on optimization techniques, but almost any simulation or matrix math-based code has some level of optimization built in — think climate modeling, data mining, and object recognition apps. In these cases, says Kumar, “you’re not going after one answer, you’re going after a good answer.”
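Monte Carlo estimation makes Kumar’s “good answer” point concrete. In the sketch below (again with an invented fault model), flipping the outcome of 1 percent of the samples shifts the estimate of pi by less than 1 percent:

```python
import random

def estimate_pi(n, p_err=0.01):
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        inside = x * x + y * y <= 1.0
        if random.random() < p_err:
            inside = not inside   # a faulty core reports the wrong outcome
        hits += inside
    return 4.0 * hits / n

print(estimate_pi(1_000_000))  # ~3.12 versus 3.14159: a good answer, not an exact one
```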
