Heterogeneous Processing In The Age Of Nanocore (Part I)

By the High-End Crusader

August 24, 2007

In a series of three articles, the High-End Crusader ponders the future impact of industry’s ever-evolving many-core technology on both parallel computing and heterogeneous processing. In the first article, he explains the meltdown of monolithic, monothreaded, out-of-order scalar processors as vehicles for delivering steadily increasing performance.

The high-end computing community needs to reconceptualize both parallel computing and heterogeneous processing in tandem with industry’s unsteady progression from multicore infancy to many-core adolescence to the full maturity of nanocore.

Nanocore is a notional scaling range for the number of cores in a chip multiprocessor. The low end of the range is the inflection point, sometime after 64 cores, when wholly innovative microarchitectural strategies are required to scale further. The high end of the range is the time, perhaps in 2014, when we can integrate 1,024 cores on a single processor die.

The conventional justification of heterogeneous processing, which has much to recommend it, is that distinct processor types, with distinct core execution models, can produce higher execution efficiencies on modern applications composed of disparate subcomputations with disparate algorithmic characteristics and thus disparate architectural needs. Standard metrics of execution efficiency include operations per second per dollar and operations per second per watt. A slogan that nicely captures this line of thinking: make the common parallelism case fast and low power; tightly integrate all parallelism cases so that there is minimal overhead in “switching” among them; and scale to the heavens. All three serve execution efficiency.

Note that “tight integration” is first and foremost tight integration among the disparate cores on a heterogeneous processor die.

On occasion, provision is made for locality. For example, there is an admirable special-purpose machine in development that optimizes short-range communication at the expense of long-range communication, targeting strongly localizable applications. In contrast, a general-purpose heterogeneous-processing system achieves high execution efficiencies on a broad range of applications, no matter what their parallelism or locality characteristics.

Another observation is that the vast cloud of systems collectively known as “high-performance computers” has bifurcated into two effectively disjoint sets, depending on whether government funding is driving performance regimes well above any level the private sector would consider a market sweet spot.

This sweet spot is constantly evolving; it might be anywhere from a few tens to a few hundreds of sustained TFs/s in 2011. Pseudo-commercial super clusters are red herrings that obscure this clear picture.

Be this as it may, only a handful of systems are scaling ambitiously. The reference machine is, of course, Japan’s Keisoku Keisanki, which is running in the Riken lab today at 2 PFs/s. This is a machine whose heroic _useful_ scalability is due to brilliantly engineered integration of heterogeneous processors. Here is the breakdown of its eventual 10 PFs/s. First, 1.5 PFs/s comes from (presumably out-of-order) scalar processors. Another 0.5 PFs/s comes from vector processors. Finally, the heavy lifting is done by “Grape-7”, an array of identical special-purpose devices (SPDs) optimized for finite-element analysis; this SPD array supplies the remaining 8 PFs/s.

Sustained performance will be remarkably high on each of 21 preselected applications.

The hardware genius of the Japanese machine is the tight integration of these widely disparate processor types. The jury is still out on the valiant Japanese effort to supply adequate system software. All told, the Japanese are investing $1 billion to stand up this machine. In high-end computing, the Japanese government is not a pussy.

In the United States, there are: 1) several ambitious efforts at the national labs, 2) the irreplaceable HPCS program — whose two components are Cray’s heterogeneous distributed-shared-memory Cascade machine and IBM’s homogeneous distributed-cluster PERCS machine, and 3) the ubiquitous IBM Blue Gene machine, about whose usefulness opinions differ, and the newer IBM Cyclops machine, which, in your correspondent’s opinion, is less hype and more machine. These machines will go toe to toe with the Keisoku Keisanki, which will prospectively deliver a usable 10 PFs/s in the same time frame.

In a word, the fat of the market for the private sector is mid-range rather than high-end HPC. The only machines that are pushing the envelope are machines “on the government curve” — in Japan, in the U.S., in the EU, and even in China.

This is the strategic backdrop for this series of articles.

Technology Happens

In 1998, say, vendors were quite happy to enforce two of Moore’s processor laws. For reference, the computing power of a monolithic, monothreaded, out-of-order scalar processor, measured as logic-transistor-Hz per square centimeter, increases by a factor of 2.4 every 3 years. And, the computing power of a monolithic, monothreaded, out-of-order scalar processor, measured as logic-transistor-Hz per dollar, increases by a factor of 2.8 every 3 years.
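These two growth laws compound quickly. A minimal sketch, using the factors just quoted (the 9-year horizon is an illustrative assumption, not from the column):

```python
# Sketch: compounding the two "Moore's processor laws" quoted above --
# a 2.4x gain per cm^2 and a 2.8x gain per dollar, each every 3 years.

def compound(factor_per_period, period_years, years):
    """Total growth after `years` at `factor_per_period` every `period_years`."""
    return factor_per_period ** (years / period_years)

# Over an illustrative 9-year run (three 3-year periods):
print(compound(2.4, 3, 9))   # ~13.8x logic-transistor-Hz per cm^2
print(compound(2.8, 3, 9))   # ~22.0x logic-transistor-Hz per dollar
```

Compounding like this is exactly what vendors were happy to promise, and exactly what the walls below took away.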

Starting, say, in 2002, vendors were increasingly stymied by a few unintended consequences of enforcing these laws, especially in their product form at full-chip scale. There were two problems. First, high clock frequency and high voltage turn chips into toasters. Second, increasing clock frequency makes things seem farther away (you can’t get very far in a single clock cycle).

There is a simple solution: Throw sequential computing off the raft!

We appear to have reached a new consensus: monolithic, monothreaded, out-of-order scalar processors are obsolete and should be replaced. The reasons are given below.

Instruction-level parallelism, i.e., all the expensive out-of-order machinery for extracting parallelism from single threads, has been exhausted (the ILP Wall). Even if transistors are free, power is expensive (the Power Wall). Salvation by cache, i.e., exploiting data reuse, only works for strongly localizable algorithms (the Memory Wall).

But transistors _are_ free (and fast), by Moore’s law!

So, mix a few larger, more complex, hotter cores and many smaller, simpler, cooler cores on a single die. This raises a few questions.

What is the correct microarchitecture, including the cache architecture, for a nanocore die? What is the correct on-chip interconnection network? What are the correct nanoarchitectures for the nanocores? How should the operating system allocate and control critical resources, e.g., cache bandwidth, that are shared among cores? How should we program these systems?

What is abundantly clear is that nanocore implies a wholesale move to parallel computing. With a thousand cores on a die and a hundred threads per in-order multithreaded core, someone or something had better master thread-level parallelism (TLP).

Yet, in spite of the divergence between the government and market curves, the fates of elite and mainstream parallel computing are tightly coupled; both communities must participate in the necessary reinvention of parallel computing. Indeed, success is a package deal. Consider that responsibility for optimizing parallel computations is shared in a subtle way among the architecture, the language, the programming environment, the compiler, the runtime, and the programmer/algorithm designer — all of whom are parallel.

Fortunately, we already know something about parallel computing.

Walls That Imprison Killer Micros

First, the ILP Wall. Scalar (monothreaded) ILP is out-of-order execution, register renaming, branch prediction, (wasted) speculation, etc. At best, this is an expensive solution to high single-thread performance — assuming that such a thing matters to you. Vector (monothreaded) ILP is still a contender whenever the application is vectorizable. Note that both forms of (monothreaded) ILP have the amount of parallelism they generate sharply curtailed by high intensity of either control-dependent computation or data-dependent memory addressing.

TLP, i.e., multithreading, is a more robust form of parallelism. Indeed, since multithreading escapes the curtailment just alluded to, this could help define an unexplored applications space, whose data structures are sometimes sparse, irregular graphs, and whose application domains include such areas as biological, financial, and national-security computing. This space might perhaps best be characterized as lying in the intersection of data-centric computing and adaptive on-line task control.

What is multithreading’s execution efficiency? Is there a sufficiently fine granularity of thread-level parallelism and synchronization at which in-order multithreading matches the control and datapath execution efficiencies of vector processors on vectorizable applications? In other words, would ultra fine-grained memory-based synchronization of threads — both blocking and nonblocking — allow in-order multithreading to _subsume_ vector pipelining? Of course, this presumes consistently low thread state and thus perhaps no persistent temporal locality.

Finally, does it make sense to mix and match these things, which is one form of heterogeneity?

Second, the Power Wall. The processor power wall is only part of the general power wall, but it merits discussion. Analysis of the static and dynamic power equations shows us that parallelism is _the_ energy-efficient way to achieve performance. We can scale down the frequency and voltage of each core and increase the number of cores. Throwing the ILP machinery off the raft is another big win.
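The arithmetic behind this claim is worth making concrete. Here is a minimal sketch, assuming the classic CMOS dynamic-power relation P ∝ CV²f and assuming (an idealization) that supply voltage can be scaled down in step with frequency:

```python
# Sketch: why parallelism is the energy-efficient route to performance.
# Assumes the classic CMOS dynamic-power relation P_dyn ~ C * V^2 * f
# and that voltage scales down roughly linearly with frequency.

def dynamic_power(cores, volts, freq, cap=1.0):
    """Total dynamic power of `cores` identical cores (arbitrary units)."""
    return cores * cap * volts**2 * freq

def throughput(cores, freq):
    """Aggregate instruction throughput, ignoring all overheads."""
    return cores * freq

# Baseline: one core at nominal voltage and frequency.
p1 = dynamic_power(cores=1, volts=1.0, freq=1.0)
t1 = throughput(cores=1, freq=1.0)

# Alternative: four cores, each at half frequency and half voltage.
p4 = dynamic_power(cores=4, volts=0.5, freq=0.5)
t4 = throughput(cores=4, freq=0.5)

print(t4 / t1)  # 2.0 -- twice the throughput ...
print(p4 / p1)  # 0.5 -- ... at half the power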
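The arithmetic behind this claim is worth making concrete. Here is a minimal sketch, assuming the classic CMOS dynamic-power relation P ∝ CV²f and assuming (an idealization) that supply voltage can be scaled down in step with frequency:

```python
# Sketch: why parallelism is the energy-efficient route to performance.
# Assumes the classic CMOS dynamic-power relation P_dyn ~ C * V^2 * f
# and that voltage scales down roughly linearly with frequency.

def dynamic_power(cores, volts, freq, cap=1.0):
    """Total dynamic power of `cores` identical cores (arbitrary units)."""
    return cores * cap * volts**2 * freq

def throughput(cores, freq):
    """Aggregate instruction throughput, ignoring all overheads."""
    return cores * freq

# Baseline: one core at nominal voltage and frequency.
p1 = dynamic_power(cores=1, volts=1.0, freq=1.0)
t1 = throughput(cores=1, freq=1.0)

# Alternative: four cores, each at half frequency and half voltage.
p4 = dynamic_power(cores=4, volts=0.5, freq=0.5)
t4 = throughput(cores=4, freq=0.5)

print(t4 / t1)  # 2.0 -- twice the throughput ...
print(p4 / p1)  # 0.5 -- ... at half the power
```

Under these idealized assumptions, quadrupling the cores at half voltage and half frequency doubles throughput while halving power.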

Smaller cores improve spatial efficiency on parallel codes. This gives us more ops per second per dollar. Smaller cores provide finer-grained ability to perform dynamic voltage scaling and power down. This gives us more ops per second per watt. Smaller cores also impact resilience favorably.

Still, in high-bandwidth systems, the global system interconnect is the principal consumer of power. Memory also uses considerable power. Even the on-chip network uses power. For power management, we must consider the relative power consumption of each component.

Third, the Memory Wall. Unrealistic expectations of achieving salvation via large caches have allowed the industry to live in total denial of the memory-latency wall. Let’s walk through this.

Suppose you want to double the performance of an application without increasing the actual aggregate DRAM bandwidth. Can you do this by increasing the size of the cache? As the problem size increases, what is the asymptotic growth of arithmetic intensity, i.e., the number of operations performed per operand received?

For dense matrix-matrix multiply or dense LU, you must make the cache 4x bigger. For sorting or FFTs, you must square the size of the cache. For sparse or dense matrix-vector multiply, it is impossible. And don’t even think about parallel graph algorithms with pointer chasing.
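These cache-scaling claims follow directly from textbook arithmetic-intensity asymptotics. A sketch, assuming the standard I/O lower bounds for each kernel (the cache sizes below are illustrative):

```python
import math

# Sketch: asymptotic arithmetic intensity (ops per word of DRAM traffic)
# as a function of cache size M, for the kernels discussed above.
# These are textbook asymptotic bounds, not measurements.

def intensity_matmul(M):
    # Blocked dense matmul moves O(n^3 / sqrt(M)) words for O(n^3) flops.
    return math.sqrt(M)

def intensity_fft(M):
    # FFTs and sorting reuse each cache-resident word O(log M) times.
    return math.log2(M)

def intensity_matvec(M):
    # Matrix-vector multiply touches each matrix element exactly once,
    # regardless of cache size.
    return 1.0

# Doubling performance at fixed DRAM bandwidth means doubling intensity:
M = 2**20  # a 1 Mi-word cache, purely illustrative
print(intensity_matmul(4 * M) / intensity_matmul(M))   # 2.0: quadruple the cache
print(intensity_fft(M**2) / intensity_fft(M))          # 2.0: square the cache
print(intensity_matvec(10**9) / intensity_matvec(M))   # 1.0: no cache size helps
```

The matvec line is the showstopper: no amount of cache converts a streaming kernel into a cache-friendly one.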

Fast clocks and deep interconnects make major latency inevitable. Algorithms differ enormously in their localizability. While latency avoidance is no substitute for latency tolerance, it can be a marvelous complement.

The memory-bandwidth wall is a potential showstopper in achieving nanocore. At first glance, it appears that off-chip bandwidth requirements grow linearly with the number of cores. If we don’t solve “chip I/O”, then nanocore simply won’t happen. We need all of the following even for _desktop_ nanocore: 1) high chip I/O, i.e., pin bandwidth, 2) high local off-chip interconnect bandwidth, and 3) high local aggregate DRAM bandwidth.

Hardware bandwidth is a sine qua non, but latency and actual bandwidth, i.e., the bandwidth generated by parallelism, are related by Little’s law, as we shall soon see.

At large scale, processor parallelism and global system bandwidth are the foundations of tolerating long-range network/memory latency. Historically, the more long range the latency, the more challenging it has been to tolerate it.

Latency Tolerance 101

Consider a processor connected to a memory via a pipe. Assume there are no hardware-bandwidth limitations. Words are requested from memory and arrive.

To sustain an actual bandwidth of b words per cycle, we must sustain concurrency c, i.e., the number of memory references outstanding in each cycle, equal to b times the total word-access latency t in cycles. This is Little’s law.
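In code, the law is a one-liner. The numbers below (a 400-cycle round trip, roughly 4 outstanding references per thread) are illustrative assumptions, but they show why a hundred threads per in-order core is a plausible figure:

```python
# Sketch: Little's law for memory pipelining, c = b * t.
# To sustain b words/cycle against a round-trip latency of t cycles,
# c = b * t memory references must be in flight every cycle.

def required_concurrency(bandwidth_words_per_cycle, latency_cycles):
    return bandwidth_words_per_cycle * latency_cycles

# e.g., one word per cycle against an assumed 400-cycle round trip:
print(required_concurrency(1, 400))      # 400 outstanding references

# If each in-order thread keeps ~4 references in flight (an assumption),
# the core needs at least 100 concurrent threads:
print(required_concurrency(1, 400) / 4)  # 100.0
```

The concurrency requirement scales with both the bandwidth you want and the latency you must hide, which is exactly why processor parallelism is the raw material of latency tolerance.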

In a multiprocessor, this picture is simply replicated in space. This form of tolerating latency is just memory pipelining. Processors supply the parallelism that is the source of this concurrency. Note that latency comes in many forms. For example, there is: 1) memory latency, 2) synchronization latency, and 3) branch latency.

Computing Past Memory Walls

Latency tolerance handles memory-latency walls, but memory-bandwidth walls are a genuine problem.

For a nanocore-die’s memory-bandwidth walls, we need engineering solutions to increase all of the following: 1) the nanocore-die pin bandwidth, 2) the local (memory) and global (network) interconnect bandwidths, and 3) the aggregate hardware DRAM bandwidth per gigabyte. For a nanocore’s memory-bandwidth walls, we need to increase the hierarchical on-chip-network bandwidths.

We know today how to deal with memory-latency walls. We simply use in-order multithreaded cores to tolerate latency at multiple space scales — there are various hierarchical on-chip and off-chip latencies. As the latency increases, we simply increase the degree of multithreading. Since threads are fully virtualized, this doesn’t affect the programming model at any scale.
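The same law sizes the degree of multithreading at each space scale. A sketch with purely illustrative latency numbers:

```python
# Sketch: sizing the degree of multithreading against hierarchical
# latencies via Little's law. Latency figures are illustrative
# assumptions, not measurements of any real machine.

# (space scale, assumed round-trip latency in cycles)
levels = [
    ("local cache bank", 10),
    ("far side of the die", 60),
    ("local DRAM", 400),
    ("remote node memory", 4000),
]

ISSUE_RATE = 1.0  # memory references issued per core per cycle

for name, latency in levels:
    # With one reference in flight per thread, concurrency = threads.
    threads = latency * ISSUE_RATE
    print(f"{name}: ~{threads:.0f} concurrent references (threads)")
```

Because threads are fully virtualized, the programmer writes the same code at every level; only the scheduler's thread count changes.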

None of this will work without sensible hierarchical caches. Such caches: 1) reduce bandwidth requirements, 2) do not themselves waste bandwidth, e.g., by generating coherence traffic, and 3) enable exploitation of on-chip “spatial” dependence locality.

Think about hierarchy. A “processor” can be: 1) a single core, 2) a physically compact group of cores, or 3) the entire processor die. Any “processor” has a boundary between it and its environment, and local store (or “cache”) within that boundary.

A sensible cache reduces bandwidth requirements, and does not create unnecessary traffic across its boundary. Temporal locality with respect to a given processor boundary may be implemented by “spatial” locality between two enclosed processor boundaries.

Conclusion Of Part I

Even though heterogeneous processing maintains execution efficiencies in the face of heterogeneity within individual applications, that doesn’t mean that the world only needs one heterogeneous-processing architecture. Still, here is one possibility.

A nanocore logic die is a reasonable implementation of a heterogeneous hierarchical-shared-cache multiprocessor. System memory, both UMA local DRAM memory per die and NUMA global DRAM memory, is separate. A multidie nanocore system is a shared-memory multiprocessor of shared-cache multiprocessors — of in-order multithreaded processors!

Sophisticated, multilevel system software provides scheduling strategies and other system functions that maximize — at each multiprocessor level — the performance extracted from scarce system resources, often the system bandwidth at various space scales.

Ultra fine-grained thread-level parallelism — billion-way parallelism — and ultra fine-grained synchronization enable efficient implementation of an expressive high-level parallel language.

Modulo memory uniformity, which changes the performance model, a single execution model bridges single-die workstations and massively multidie supercomputers.

Here is your correspondent’s personal wish-list for parallel computing. In his opinion, we need: 1) an expressive parallel language with high-level programming abstractions that blends deterministic functional programming and nondeterministic transactional state updating, 2) heterogeneous-processing systems that execute programs in something reasonably close to dependence order, 3) compilation techniques for parallel programs that use programmer assertions to aid in discovering hierarchical dependence graphs, 4) a synergy between die microarchitectures and system architectures that enables efficient execution of partially ordered programs, and finally 5) a hierarchy of alternating latency-tolerance and latency-avoidance techniques, e.g., layering global task-reference concurrency on top of global memory-reference concurrency, that scales to full-system, long-range latency management even in ambitious multi-petaflops (or multi-petaops) “government-curve” systems.

We need to reinvent parallel computing because, as many-core intensifies, it will become our common fate, and we never got it right in the first place.
We need to reinvent heterogeneous processing because, quite apart from useful-scalability imperatives, there are many distinct types of heterogeneity, even many distinct types of processor heterogeneity, and we will need to make intelligent choices about the type (or types) of heterogeneity our applications need.

Next part: open problems and types of heterogeneity.


The High-End Crusader, a noted expert in high-performance computing and communications, shall remain anonymous. He alone bears responsibility for these commentaries. Replies are welcome and may be sent to HPCwire editor Michael Feldman at [email protected].
