Intel’s Fryman: “It’s not that we love CMOS; it’s the only real choice.”

By John Russell

September 1, 2016

Forget for a moment the prevailing high anxiety over Moore’s law’s fate. In the near-term – which could easily mean a decade – CMOS will remain the only viable, volume technology driving computing. Pursue alternatives? Of course, urged Josh Fryman, principal engineer and engineering manager, Intel. But more can and must be done to advance CMOS-based architecture and Intel, not surprisingly, has a few ideas.

Fryman was one of three speakers scanning the horizon at ISC 2016’s Scaling Beyond the End of Moore’s Law session. It was a fascinating conversation covering quantum computing, neuromorphic computing, and today’s workhorse, CMOS.

Damian Steiger, a researcher at the Platform for Advanced Scientific Computing and the Institute for Theoretical Physics of ETH Zurich, tackled quantum computing. Figuring out how to actually implement quantum computing and identifying killer quantum applications to attract needed funding formed much of his talk. He had three applications in mind, although he wasn’t especially optimistic we’ll see useful quantum computers anytime soon, with the possible exception of government-funded efforts aimed at decrypting RSA.

Karlheinz Meier from the Human Brain Project tackled neuromorphic computing. Here, the near-term future seems brighter. Meier expects the recent availability of three large-scale neuromorphic computing systems for application development to push progress along more quickly. (See HPCwire article, Think Fast – Is Neuromorphic Computing Set to Leap Forward?)

It fell to Fryman, the opening speaker, to remind everyone that as promising as many new directional efforts look, it takes years to work out the bugs and turn a new technology into a large-scale manufacturing-friendly process. Interestingly, according to Fryman, advancing CMOS will mostly involve reviving old ideas that were problematic in the past but are unavoidable now. It will also require thinking far more holistically about how hardware and software play together.

Josh Fryman, Intel

“We need to find the Neo of the next generation [computational technology],” agreed Fryman, referring to the protagonist of the film The Matrix, whose abilities jumped beyond those around him, “but once you find it, once you work out the techniques, you still have a long haul to make it something we could use, something viable for mass production.

“Until then what are we going to do? The short answer is CMOS is going to continue. It’s not because it is necessarily the best technology, it’s not because we particularly like it and adore it, it’s because we have no choice to keep everything moving forward.”

In setting the context for his talk, Fryman emphasized it’s important to remember that Moore’s law is a business statement, not a technology law. That said, Moore’s law has become a surrogate for many things, including the pace of semiconductor technology advancement. Its current “difficulties” (the end of Dennard scaling, et al.) have, of course, been widely discussed, with Intel holding strong against the growing opinion that Moore’s law’s days are numbered. (See HPCwire article, Moore’s Law – Not Dead – and Intel’s Use of HPC to Keep it Alive)

Fryman noted the classic recipe for engineers to achieve Moore’s law for transistors has been “to scale your dimensions, to scale your supply, and you’re done. You just keep turning the crank on this over and over. The running joke is years ago in the fab we used just a handful of elements in the periodic table. Today we use just about all of the elements except for a handful to get the same job done. [But at the end of the day] it’s still just a recipe.”

From an engineering perspective, what happens when the recipe fails? Fryman briefly reminded the audience that change is hardly new in electronics but that a few common underlying characteristics have been necessary for progress.

“If you look at the evolution of electronics, moving from mechanical to electromechanical, to vacuum tubes, to bipolar, to NMOS, to PMOS, and ultimately CMOS, and now you have this question about what is coming. If you look at the trend line historically, each of the crossings is defined by having three basic components. You have to have gain; signal-to-noise control; and scalability, although scalability is really an overused term. What does it really mean? You’re talking about three dimensions: performance, energy, and pricing. These are the three fundamentals for something to actually be a viable technology, and it needs to be ‘friendly to high volume manufacturing.’”

Intel KNL Phi die shot

As there is no obvious technology to replace CMOS now, the focus must be on how to use what we know. This is doable, maintains Fryman, but will require rethinking existing approaches and in some instances re-learning old lessons. He said a trio of strategies will drive advances in underlying CMOS and compute architectures.

  • Remove waste to reclaim efficiency. Die area, for example, has ballooned to accommodate accumulating features such as pipelines, on-chip floating point, out-of-order execution, etc. In many cases performance, and in most cases power consumption, have suffered. Reviewing accumulated features with an eye toward simplification and elimination will play a role.
  • Use known techniques. Over the years, lots of manufacturing and chip-design approaches have been tried, tested, and well characterized, including their drawbacks, “but people wanted to avoid them because they were considered hard at some level, too hard to program, too hard to use, too hard to design. But when you are running out of other knobs [to adjust] these are not as hard anymore.”
  • Multidisciplinary solutions. Tackling physical manufacturing problems will only go so far; sustaining performance and tracking Moore’s law will require blending software, hardware, and manufacturing processes.

Far from pessimistic, Fryman believes making further progress using these techniques is doable, if challenging, and he offered a few directional examples, including one on handling resiliency at small feature sizes.

“Everybody is worried that once you get down to 7 nm you are going to have higher variability and failures, and what am I going to do about it? There are two ways to look at it. There are reactive measures, so if something fails, an ECC failure, a soft upset, what am I going to do about it? I’ll have to react, I’ll have to kill, I’ll have to restart,” said Fryman.

“There’s also the proactive side, which is: I am going to plan ahead for this future and I am going to design my system, at the software and the hardware level, to periodically check itself, to check whether I am heading toward a failure situation; should I bring down my voltage, should I migrate work away from something?

“From a user experience [standpoint], I have a classic software layer. I’ve got a runtime sitting on top of hardware; how does that interact with the entire stack? I’ve got user codes. I’ve got runtimes. I’ve got programming support tools. All these things need to be aware of the underlying assumptions in the system,” he said.
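To make the proactive side concrete, here is a minimal sketch of the kind of check-yourself loop Fryman describes, assuming a hypothetical runtime that periodically polls a per-core corrected-error counter and either backs off voltage or migrates work before a hard failure occurs. The functions read_ecc_corrections(), lower_core_voltage(), and migrate_work_off(), along with the thresholds, are illustrative stand-ins, not any real Intel interface.

```c
/* Illustrative sketch only: a runtime loop that proactively monitors a
 * hypothetical per-core corrected-error counter and acts before a hard
 * failure.  None of these functions correspond to a real Intel API;
 * read_ecc_corrections(), lower_core_voltage(), and migrate_work_off()
 * stand in for whatever the hardware and runtime actually expose. */
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

#define NCORES           64
#define SOFT_THRESHOLD    8     /* corrections per interval before throttling */
#define HARD_THRESHOLD   32     /* corrections per interval before migration  */
#define CHECK_INTERVAL_S  1

/* Hypothetical platform hooks, assumed to be provided elsewhere. */
extern uint64_t read_ecc_corrections(int core);  /* corrected errors since last read   */
extern void     lower_core_voltage(int core);    /* back off V/f to regain error margin */
extern void     migrate_work_off(int core);      /* ask the runtime to vacate the core  */

void health_monitor_loop(volatile bool *running)
{
    while (*running) {
        for (int core = 0; core < NCORES; core++) {
            uint64_t errs = read_ecc_corrections(core);

            if (errs >= HARD_THRESHOLD) {
                /* Trend says this core is heading toward failure:
                 * move work away instead of reacting to a crash later. */
                migrate_work_off(core);
            } else if (errs >= SOFT_THRESHOLD) {
                /* Mild degradation: trade a little performance for margin. */
                lower_core_voltage(core);
            }
        }
        sleep(CHECK_INTERVAL_S);
    }
}
```

The point of the sketch is the one Fryman makes next: this loop only works if the runtime, the user code, and the tools above it all know that cores can slow down or have work migrated away underneath them.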

Power management is another area likely to involve tighter links between software and hardware. He cited work from the Polaris test chip in the 2006-2007 timeframe. “I can look at fine-grained power management techniques. This is another known technique that’s way beyond clock handling. There are 21 dynamic sleep regions in the actual tile, a whole bunch of tiles on the die, and you let the system turn the tiles on and off in the sleep state, which gives a significant energy savings.”

Fryman again emphasized this is a known technique, but it’s hard to do because it extends beyond hardware and has software implications: how do you structure your code, how do you know when you can take advantage of something like this, and so on.
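As a rough illustration of why the software side matters, here is a hedged sketch of what a runtime-driven, per-tile sleep policy might look like. The tile_sleep()/tile_wake() hooks and the 80-tile count (borrowed from the Polaris-era 80-tile research chip) are assumptions made for the example, not a description of how the actual test chip was controlled.

```c
/* Illustrative only: what letting software park idle tiles might look like.
 * The tile_sleep()/tile_wake() hooks and the per-tile sleep decision are
 * hypothetical.  The point is the software question Fryman raises: the
 * runtime has to know which tiles it can afford to put to sleep, and when. */
#include <stdbool.h>

#define NTILES 80

extern void tile_sleep(int tile);   /* gate clocks/power for an idle tile */
extern void tile_wake(int tile);    /* bring the tile back before use     */

/* The runtime tracks which tiles currently hold runnable work. */
static bool tile_busy[NTILES];

/* Called by the scheduler whenever work is assigned or completes, so the
 * sleep decision is driven by the software's own knowledge of its phases. */
void update_tile_power(void)
{
    for (int t = 0; t < NTILES; t++) {
        if (tile_busy[t])
            tile_wake(t);    /* must be awake before work lands on it */
        else
            tile_sleep(t);   /* idle tile: stop burning leakage power */
    }
}
```

This is exactly the kind of structure-your-code question he refers to: the energy savings only materialize if the application’s phases leave tiles idle long enough for the policy to act.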

“We are going to have to start thinking outside the box and [in many instances] go back to existing techniques and say, so do we really need cache coherency across an entire machine? Maybe not. Do we really need cache coherency across 1,000 cores on a die, or 100 cores on a die? Probably not. Are we willing to take the complexity in software for a simpler, more efficient, more scalable hardware? Really what I am saying moving forward is we need to take our heads out of the sand, pardon the pun, and rethink what we have been doing,” he said.
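To illustrate the trade Fryman is pointing at, here is a hedged sketch of what dropping machine-wide hardware coherency pushes onto software: a producer must explicitly write back its results and a consumer must explicitly discard stale copies before reading. The cache_flush_range(), cache_invalidate_range(), and notification calls are hypothetical placeholders for whatever a real platform would expose.

```c
/* Illustrative sketch: if hardware does not keep 1,000 cores coherent,
 * software has to publish and consume shared data explicitly.  The
 * cache management and notification calls below are hypothetical
 * stand-ins, not a real API. */
#include <stddef.h>

extern void cache_flush_range(const void *addr, size_t len);      /* write back to memory  */
extern void cache_invalidate_range(const void *addr, size_t len); /* drop stale copies     */
extern void notify_consumer(int core);                            /* hypothetical signal   */
extern void wait_for_producer(void);

/* Producer core: compute a block, then explicitly make it visible. */
void publish(double *block, size_t n, int consumer_core)
{
    for (size_t i = 0; i < n; i++)
        block[i] = (double)i;                     /* stand-in for real computation */

    cache_flush_range(block, n * sizeof(*block)); /* push results out of local cache */
    notify_consumer(consumer_core);               /* tell the consumer data is ready */
}

/* Consumer core: discard any stale cached copy before reading. */
double consume(const double *block, size_t n)
{
    wait_for_producer();
    cache_invalidate_range(block, n * sizeof(*block));

    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += block[i];
    return sum;
}
```

The hardware gets simpler and more scalable; in exchange, every sharing point in the software has to be made explicit and correct, which is precisely the complexity shift Fryman asks whether the industry is willing to accept.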

Fryman says the industry is moving into another era that he calls “the disaggregation of the datacenter.” In a fully connected model, he said, there is “no system you can design that can get the bandwidth.” More and more compute will push out to the edges, and “it will look different and this is where machine learning and other algorithms come in and neuromorphic might be a big deal. I see the industry not as stagnant but going through this shift to the edge, which is a very different design point than the classic PC or tablet.”

The Intel engineer was careful not to reveal too much: “Eventually turning the knob on transistors, as we have been doing, will not work. When that is, is highly debatable, which is why I chuckle. I’m not supposed to talk about post-7 nm, but I can simply say it’s actively being looked into.”
