Intel’s Fryman: “It’s not that we love CMOS; it’s the only real choice.”

By John Russell

September 1, 2016

Forget for a moment the prevailing high anxiety over Moore’s law’s fate. In the near-term – which could easily mean a decade – CMOS will remain the only viable, volume technology driving computing. Pursue alternatives? Of course, urged Josh Fryman, principal engineer and engineering manager, Intel. But more can and must be done to advance CMOS-based architecture and Intel, not surprisingly, has a few ideas.

Fryman was one of three speakers scanning the horizon at ISC2016’s Scaling Beyond the End of Moore’s Law session. It was a fascinating conversation covering quantum computing, neuromorphic computing, and today’s workhorse, CMOS.

Damian Steiger, a researcher at the Platform for Advanced Scientific Computing and the Institute for Theoretical Physics of ETH Zurich, tackled quantum computing. Much of his talk focused on how to actually implement quantum computing and on identifying killer quantum applications to attract needed funding. He had three applications in mind, although he wasn’t especially optimistic we’ll see useful quantum computers anytime soon, with the possible exception of government-funded efforts aimed at decrypting RSA.

Karlheinz Meier from the Human Brain Project tackled neuromorphic computing. Here, the near-term future seems brighter. Meier expects the recent availability of three large-scale neuromorphic computing systems for application development to push progress along more quickly. (See HPCwire article, Think Fast – Is Neuromorphic Computing Set to Leap Forward?)

It fell to Fryman, the opening speaker, to remind everyone that as promising as many new directional efforts look, it takes years to work out the bugs and turn a new technology into a large-scale manufacturing-friendly process. Interestingly, according to Fryman, advancing CMOS will mostly involve reviving old ideas that were problematic in the past but are unavoidable now. It will also require thinking far more holistically about how hardware and software play together.

Josh Fryman, Intel

“We need to find the Neo of the next generation [computational technology],” agreed Fryman, referring to the protagonist in the film The Matrix, whose abilities leapt beyond those around him, “but once you find it, once you work out the techniques, you still have a long haul to make it something we could use, something viable for mass production.

“Until then what are we going to do? The short answer is CMOS is going to continue. It’s not because it is necessarily the best technology, it’s not because we particularly like it and adore it, it’s because we have no choice to keep everything moving forward.”

In setting the context for his talk, Fryman emphasized it’s important to remember that Moore’s law is a business statement, not a technology law. That said, Moore’s law has become a surrogate for many things, including the pace of semiconductor technology advance. Its current “difficulties” (the end of Dennard scaling, et al.) have, of course, been widely discussed, with Intel holding strong against the growing opinion that Moore’s law’s days are numbered. (See HPCwire article, Moore’s Law – Not Dead – and Intel’s Use of HPC to Keep it Alive)

Fryman noted the classic recipe for engineers to achieve Moore’s law for transistors has been “to scale your dimensions, to scale your supply, and you’re done. You just keep turning the crank on this over and over. The running joke is that years ago in the fab we used just a handful of elements in the periodic table. Today we use just about all of the elements except for a handful to get the same job done. [But at the end of the day] it’s still just a recipe.”

From an engineering perspective, what happens when the recipe fails? Fryman briefly reminded the audience that change is hardly new in electronics but that a few common underlying characteristics have been necessary for progress.

“If you look at the evolution of electronics, moving from mechanical to electromechanical, to vacuum tubes, to bipolar, to NMOS, to PMOS, and ultimately CMOS, now you have this question about what is coming. If you look at the trend line historically, each of the crossings is defined by having three basic components: you have to have gain, signal-to-noise control, and scalability, although scalability is really an overused term. What does it really mean? You’re talking about three dimensions: performance, energy, and pricing. These are the three fundamentals for something to actually be a viable technology, and it needs to be ‘friendly to high volume manufacturing.’”

Intel KNL Phi die shot

As there is no obvious technology to replace CMOS now, the focus must be on how to use what we know. This is doable, maintains Fryman, but will require rethinking existing approaches and in some instances re-learning old lessons. He said a trio of strategies will drive advances in underlying CMOS and compute architectures.

  • Remove waste to reclaim efficiency. Die area, for example, has ballooned to accommodate accumulating features such as pipelines, on-chip floating point, out-of-order execution, etc. In many cases performance, and in most cases power consumption, have suffered. Reviewing accumulated features with an eye towards simplification and elimination will play a role.
  • Use known techniques. Over the years, lots of manufacturing and chip design approaches have been tried, tested, and well characterized, including their drawbacks, “but people wanted to avoid them because they were considered hard at some level, too hard to program, too hard to use, too hard to design. But when you are running out of other knobs [to adjust] these are not as hard anymore.”
  • Multidisciplinary solutions. Tackling physical manufacturing problems alone will only go so far; sustaining performance and tracking Moore’s law will require blending software, hardware, and manufacturing processes.

Far from pessimistic, Fryman believes making further progress using these techniques is doable, if challenging, and he offered a few directional examples, including one on handling resiliency at small feature sizes.

“Everybody is worried that once you get down to 7 nm you are going to have higher variability and failures, and what am I going to do about it. There are two ways to look at it. There are reactive measures: if something fails (an ECC failure, a soft upset), what am I going to do about it? I’ll have to react, I’ll have to kill, I’ll have to restart,” said Fryman.

“There’s also the proactive side, which is: I am going to plan ahead for this future, and I am going to design my system, at the software and the hardware level, to periodically check itself, to check if I am heading toward a failure situation. Should I bring down my voltage? Should I migrate work away from something?

“From a user experience [perspective], I have a classic software layer. I’ve got a runtime sitting on top of hardware; how does that interact with the entire stack? I’ve got user codes. I’ve got runtimes. I’ve got programming support tools. All these things need to be aware of the underlying assumptions in the system,” he said.
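Fryman’s “proactive side” amounts to a control loop: the runtime samples health indicators such as correctable-error counts and acts before a hard failure by dropping voltage or migrating work. Below is a minimal C sketch of such a loop; the thresholds and the platform hooks (read_ecc_count, lower_voltage, migrate_work_away) are hypothetical stand-ins, not any real Intel interface.

```c
/* Minimal sketch of a proactive resiliency loop: periodically sample
 * correctable-error counters and, past a threshold, lower voltage or
 * migrate work before a hard failure occurs.  All platform hooks are
 * hypothetical stand-ins, stubbed so the sketch compiles. */
#include <stdio.h>
#include <unistd.h>

#define ECC_WARN_THRESHOLD  8   /* correctable errors per interval (assumed) */
#define ECC_CRIT_THRESHOLD 32

static unsigned read_ecc_count(int node)    { (void)node; return 0; }
static void     lower_voltage(int node)     { printf("node %d: lowering Vdd a step\n", node); }
static void     migrate_work_away(int node) { printf("node %d: migrating work away\n", node); }

int main(void)
{
    const int nodes = 4;
    for (int tick = 0; tick < 10; ++tick) {    /* in practice: run forever */
        for (int n = 0; n < nodes; ++n) {
            unsigned errs = read_ecc_count(n);
            if (errs >= ECC_CRIT_THRESHOLD)
                migrate_work_away(n);          /* proactive: act before a hard fault */
            else if (errs >= ECC_WARN_THRESHOLD)
                lower_voltage(n);              /* trade performance for margin */
        }
        sleep(1);                              /* sampling interval */
    }
    return 0;
}
```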

Power management is another area likely to involve tighter links between software and hardware. He cited work from the Polaris test chip in the 2006-2007 timeframe. “I can look at fine-grained power management techniques. This is another known technique that’s way beyond clock handling. There are 21 dynamic sleep regions in the actual tile, a whole bunch of tiles on the die, and you let the system turn the tiles on and off in the sleep state, which gives significant energy savings.”

Fryman again emphasized this is a known technique but it’s hard to do because it extends beyond hardware and has software implications: how do you structure your code, how do you know when you can take advantage of something like this, and so on.
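As a rough illustration of what Polaris-style, tile-level power management asks of software, here is a hedged C sketch in which a runtime wakes only as many tiles as there is pending work and sleeps the rest. The tile count, the balancing policy, and set_tile_sleep() are illustrative assumptions, not the actual chip interface.

```c
/* Toy illustration of fine-grained, tile-level sleep management:
 * the runtime puts idle tiles to sleep and wakes them only when
 * work arrives, saving leakage power at low utilization. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_TILES 8

static bool tile_asleep[NUM_TILES];

/* Hypothetical hardware hook: gate or ungate one tile's sleep regions. */
static void set_tile_sleep(int tile, bool sleep)
{
    tile_asleep[tile] = sleep;
    printf("tile %d -> %s\n", tile, sleep ? "sleep" : "awake");
}

/* Wake only as many tiles as there is work for; sleep the rest. */
static void balance_tiles(int pending_tasks)
{
    for (int t = 0; t < NUM_TILES; ++t)
        set_tile_sleep(t, t >= pending_tasks);
}

int main(void)
{
    balance_tiles(3);   /* burst of 3 tasks: 3 tiles awake, 5 asleep */
    balance_tiles(8);   /* full load: everything awake */
    balance_tiles(1);   /* mostly idle: significant leakage savings */
    return 0;
}
```

The software implication Fryman flags shows up directly in balance_tiles: the code has to know how much work is pending, and structure itself so that idleness is visible to whatever controls the sleep states.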

“We are going to have to start thinking outside the box and [in many instances] go back to existing techniques and ask: do we really need cache coherency across an entire machine? Maybe not. Do we really need cache coherency across 1,000 cores on a die, or 100 cores on a die? Probably not. Are we willing to take on the complexity in software for simpler, more efficient, more scalable hardware? Really what I am saying, moving forward, is we need to take our heads out of the sand, pardon the pun, and rethink what we have been doing,” he said.
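One way to make concrete the complexity that dropping global cache coherency shifts onto software is software-managed coherence: the producer explicitly flushes its writes and the consumer explicitly invalidates its cache before reading, instead of hardware keeping every core’s cache in sync. The sketch below illustrates the idea; the cache_flush()/cache_invalidate() intrinsics are hypothetical placeholders for whatever a real non-coherent machine would expose, not anything Fryman described.

```c
/* Producer/consumer exchange with explicit cache maintenance, as on a
 * machine without hardware coherence.  The intrinsics are no-op stubs
 * standing in for real cache-maintenance instructions. */
#include <stdio.h>
#include <string.h>

static char shared_buf[64];

static void cache_flush(void *addr, size_t len)      { (void)addr; (void)len; }
static void cache_invalidate(void *addr, size_t len) { (void)addr; (void)len; }

static void producer(const char *msg)
{
    strncpy(shared_buf, msg, sizeof shared_buf - 1);
    cache_flush(shared_buf, sizeof shared_buf);      /* push writes to memory */
}

static void consumer(void)
{
    cache_invalidate(shared_buf, sizeof shared_buf); /* drop any stale cached copy */
    printf("consumer read: %s\n", shared_buf);
}

int main(void)
{
    producer("hello from a non-coherent core");
    consumer();
    return 0;
}
```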

Fryman says the industry is moving into another era that he calls “the disaggregation of the datacenter.” In a fully connected model, he said, there is “no system you can design that can get the bandwidth.” More and more compute will push out to the edges and “it will look different, and this is where machine learning and other algorithms come in, and neuromorphic might be a big deal. I see the industry not as stagnant but going through this shift to the edge, which is a very different design point than the classic PC or tablet.”

The Intel engineer was careful not to reveal too much: “Eventually turning the knob on transistors, as we have been doing, will not work. When that is, is highly debatable, which is why I chuckle. I’m not supposed to talk about post 7 nm, but I can simply say it’s actively being looked into.”
