Core Economics

By Michael Feldman

January 11, 2008

With all major chipmakers committed to the multicore path, it seems only a matter of time before manycore (processors with more than eight cores) becomes the standard architecture across all computing sectors. NVIDIA's 128-core GPUs, Cisco's 188-core Metro network processor, and the 64-core Tilera TILE64 processor are three early examples of this trend. The 80-core prototype demonstrated by Intel is an indication that even the most mainstream segments of the computer industry are looking to enter the manycore realm.

While most discussions of manycore tend to focus on software development challenges or memory bandwidth limitations, an even more fundamental issue is the economic model that will drive these products into the marketplace. This is the topic that researchers Joseph Sloan and Rakesh Kumar at the University of Illinois at Urbana-Champaign addressed recently in a paper titled "Hardware/System Support for Four Economic Models for Many Core Computing" (http://passat.crhc.uiuc.edu/rakeshk/techrep_economic.pdf).

In the current model, customers buy systems containing processors that satisfy the average or worst-case computation needs of their applications. This means that when application requirements change, the user either has to live with the pain of a performance mismatch or go through the expense of purchasing new systems (or new chips) to realign system performance with the applications. Sloan and Kumar argue that as the number of cores increases, matching performance to application needs becomes increasingly difficult, and the associated cost of buying unused computing power becomes more prohibitive.

The chip vendors are affected as well. As the number of cores increases, chipmakers must decide how many processor configurations to offer for a given market segment. If one can fit 100 cores on a die, how many different variations can be rationalized? Certainly not 100. Intel will have to deal with a smaller version of this problem in its upcoming 45nm Nehalem microarchitecture. So far, the company has described only 2-, 4- and 8-core processor designs for Nehalem. But with the combination of different cache sizes, memory controller architectures and clock speeds, the new processor family will probably end up being the largest Intel has ever supported. When tens or hundreds of cores are the norm, practical considerations will limit the number of unique designs to a very small subset of possible core layouts.
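To see why the variant count gets out of hand, here is a back-of-the-envelope count using the announced Nehalem core counts and purely hypothetical cache, memory-controller and clock options (the option lists below are illustrative assumptions, not Intel's actual SKU plan):

```python
# Back-of-the-envelope count of possible processor variants.
# Only the core counts come from Intel's announcements; the other
# option lists are invented for illustration.
from itertools import product

core_counts  = [2, 4, 8]                 # announced Nehalem core counts
cache_sizes  = [4, 8, 12, 16]            # MB of shared cache (hypothetical)
mem_channels = [2, 3]                    # memory controller widths (hypothetical)
clock_bins   = [2.4, 2.66, 2.93, 3.2]    # GHz speed grades (hypothetical)

variants = list(product(core_counts, cache_sizes, mem_channels, clock_bins))
print(f"{len(variants)} possible configurations")   # 3 * 4 * 2 * 4 = 96

# With ~100 core counts instead of 3, enumerating one SKU per core count alone
# would already be impractical, before cache and clock options multiply it.
```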

In their paper, Sloan and Kumar propose four related economic models (five, actually) for manycore computing. The overall approach assumes that the customer will usually need fewer cores than are physically present on the chip, but at times may want to use more of them. The authors suggest that chips be designed so that users pay only for the computing power they need, rather than for the peak computing power that is physically present. This can be accomplished with small pieces of logic incorporated into the processor that enable the vendor to disable or enable individual cores. (Presumably, disabled cores would draw little, if any, power.) Enabling or disabling cores involves contacting the vendor, who authenticates the chip and sends activation codes that unlock or lock the specified cores. The user ends up paying only for the desired computing power.
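The paper leaves the exact unlock mechanism to the vendor; the following is a minimal sketch of what such a vendor-mediated core-activation exchange might look like. The HMAC-based activation codes and all function names here are assumptions for illustration, not the authors' design.

```python
# Minimal sketch of a vendor-mediated core enable/disable exchange.
# The HMAC-based activation-code scheme and all names are illustrative
# assumptions, not the mechanism described in the Sloan/Kumar paper.
import hmac, hashlib, secrets

VENDOR_KEY = secrets.token_bytes(32)   # shared secret burned in at manufacture (assumed)

def vendor_issue_code(chip_id: str, core_mask: int, nonce: bytes) -> bytes:
    """Vendor authenticates the chip and signs an activation code."""
    msg = chip_id.encode() + core_mask.to_bytes(16, "big") + nonce
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()

def chip_apply_code(chip_id: str, core_mask: int, nonce: bytes, code: bytes) -> bool:
    """On-chip logic verifies the code before gating cores on or off."""
    expected = hmac.new(VENDOR_KEY,
                        chip_id.encode() + core_mask.to_bytes(16, "big") + nonce,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(code, expected):
        return False
    enable_cores(core_mask)   # hypothetical low-level hook that power-gates cores
    return True

def enable_cores(core_mask: int) -> None:
    print(f"cores enabled: {bin(core_mask)}")

# User requests 16 of 64 cores; the vendor returns a code tied to this chip and nonce.
nonce = secrets.token_bytes(16)
mask = (1 << 16) - 1
code = vendor_issue_code("CHIP-0xA1B2", mask, nonce)
assert chip_apply_code("CHIP-0xA1B2", mask, nonce, code)
```

Binding the code to a per-chip identifier and a fresh nonce is one way to keep an activation code from being replayed on another chip or reused after a downgrade.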

Of the models proposed, the most restrictive approach, the IntelligentBaseline model, forces the user to make a one-time decision about the number of cores needed. In this model, the vendor enables the user-selected subset of cores on the chip before shipping. Each of the other four models — UpgradesOnly, Limited Up/Downgrade, CoresOnRent and PayPerUse — offers a way to change the available processing power of the chip dynamically (a rough cost comparison follows the list):

  • The UpgradesOnly model is based on the fact that computation requirements tend to increase over time. The user initially purchases enough cores to satisfy their current processing requirements. Additional cores can be enabled anytime during the processor lifetime, avoiding a system upgrade until the user needs more processing power than is physically available on the chip.
  • The Limited Up/Downgrade model recognizes that average computational needs may sometimes increase only temporarily. It allows the user to scale up and down as computational needs warrant. Downgrades involve disabling the selected number of cores and providing some sort of refund to the user.
  • The CoresOnRent model recognizes that there are environments where computational requirements change significantly even over short periods of time (months). In this case, it may be more reasonable for the user to rent cores rather than own them. In this model, the user contacts the vendor to get access to a specific number of cores for a specific lease period. When the lease expires, the user has the option to renew it — with more or fewer cores.
  • The PayPerUse model is the least restrictive. It frees the user from estimating computing requirements at all and simply bills for actual core usage over a specified lease period. As with the CoresOnRent model, the user never owns the cores.
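To make the trade-offs concrete, here is a rough cost comparison of the five models under one hypothetical workload. Every price, lease term and utilization figure below is invented for illustration; the paper does not publish pricing.

```python
# Rough cost comparison of the proposed models under one hypothetical workload.
# All prices, lease terms, and utilization figures are invented for illustration.

CORE_PRICE     = 50.0    # one-time price per owned core ($, assumed)
RENT_PER_MONTH = 2.0     # rental price per core per month ($, assumed)
USE_PER_HOUR   = 0.01    # pay-per-use price per core-hour ($, assumed)
MONTHS         = 36      # processor lifetime considered

# Hypothetical demand: 8 cores normally, 32 cores during a 3-month peak.
baseline_cores, peak_cores, peak_months = 8, 32, 3
core_months = baseline_cores * (MONTHS - peak_months) + peak_cores * peak_months

costs = {
    # One-time decision, sized for the worst case.
    "IntelligentBaseline": peak_cores * CORE_PRICE,
    # Same total spend, but the extra cores are paid for only when the peak arrives.
    "UpgradesOnly": baseline_cores * CORE_PRICE
                    + (peak_cores - baseline_cores) * CORE_PRICE,
    # As above, plus a partial refund (assumed 50%) when the peak cores are disabled.
    "Limited Up/Downgrade": peak_cores * CORE_PRICE
                            - (peak_cores - baseline_cores) * CORE_PRICE * 0.5,
    # Rent exactly the cores needed in each lease period.
    "CoresOnRent": core_months * RENT_PER_MONTH,
    # Bill only the core-hours consumed (assume cores are busy 25% of the time).
    "PayPerUse": core_months * 730 * 0.25 * USE_PER_HOUR,
}

for name, cost in costs.items():
    print(f"{name:22s} ${cost:8.2f}")
```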

The underlying assumption behind all this is that the cost of manufacturing the processor does not rise linearly with the number of cores on the die, which allows the chip vendor to sell underutilized processors at a profit. According to Kumar, this is indeed the case: the factors that determine manufacturing cost often have nothing to do with the number of cores on a die.

“Going from a one-core chip to a manycore chip may often represent increased costs — due to higher design/verification overhead,” explains Kumar. “But, multiplying the number of cores on a manycore chip will increase costs only marginally [since] the same design can be stamped multiple times to multiply the number of cores on a die. In fact, one of the main reasons for going to many cores is the high degree of IP reuse, i.e., the computational power can be multiplied without much increased cost.”

Kumar admits that chip costs do depend on die area, and if adding cores increased that area, costs would increase along with it. But his contention is that the die area is usually fixed because of yield considerations, so the cost does not change much.
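Kumar's point can be captured with textbook wafer-cost arithmetic: if the die area is held fixed, the manufacturing cost per good die is the same whether 16 or all 64 of its cores are later enabled. The wafer cost, defect density and die size below are generic assumptions, not figures from the paper.

```python
# Toy die-cost model illustrating why cost tracks die area, not enabled cores.
# Wafer cost, defect density, and die area are generic assumptions.
import math

WAFER_COST     = 5000.0                       # $ per 300mm wafer (assumed)
WAFER_AREA     = math.pi * (300 / 2) ** 2     # mm^2
DEFECT_DENSITY = 0.002                        # defects per mm^2 (assumed)

def cost_per_good_die(die_area_mm2: float) -> float:
    dies_per_wafer = WAFER_AREA / die_area_mm2                # ignoring edge loss
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)     # simple Poisson yield model
    return WAFER_COST / (dies_per_wafer * yield_rate)

fixed_area = 400.0   # mm^2, held constant regardless of how many cores are enabled
print(f"manufacturing cost per die: ${cost_per_good_die(fixed_area):.2f}")
# Whether the vendor later enables 16 or 64 of the cores on that die,
# the silicon cost above is unchanged; only the activation state differs.
```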

Another issue is the strong coupling of the memory system with the peak performance of the processor. Sloan and Kumar suggest that the memory architecture should be composable to support system balance.

“Designing a composable memory hierarchy may not be a big technical challenge,” contends Kumar. “It is just that a strong need was not there in the desktop and mobile domains. Composable memory hierarchies have often been designed in server systems. For example, Capacity on Demand for IBM System i offers clients the ability to non-disruptively activate (no IPL required) processors and memory. Same for Unisys as well as Sun systems. You can simply have middleware or microcode that allows/disallows access to certain regions of memory. Alternatively, some of the techniques that we developed for supporting and enforcing the proposed models can also be used for memory hierarchies. Composability can also be attained by physically modifying the memory controller or disk controller to decouple memory regions.”
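As a rough illustration of the middleware-style gating Kumar describes, the sketch below allows or disallows access to physical memory regions according to how much capacity the customer has paid for. The region granularity and the API are assumptions, not an actual vendor mechanism.

```python
# Minimal sketch of middleware-style memory-capacity gating, in the spirit of
# Kumar's "middleware or microcode that allows/disallows access to certain
# regions of memory." Region granularity and the API are illustrative assumptions.

REGION_SIZE = 1 << 30          # 1 GiB regions (assumed granularity)

class ComposableMemory:
    def __init__(self, total_regions: int, entitled_regions: int):
        self.total = total_regions
        self.entitled = entitled_regions       # regions the customer has paid for

    def set_entitlement(self, entitled_regions: int) -> None:
        """Applied after the vendor validates an upgrade/downgrade request."""
        self.entitled = min(entitled_regions, self.total)

    def check_access(self, phys_addr: int) -> bool:
        """Allow the access only if it falls inside an entitled region."""
        return (phys_addr // REGION_SIZE) < self.entitled

mem = ComposableMemory(total_regions=64, entitled_regions=16)  # 64 GiB installed, 16 GiB paid for
print(mem.check_access(8 << 30))    # True: 8 GiB offset is inside the entitled range
print(mem.check_access(32 << 30))   # False: beyond the paid-for capacity
mem.set_entitlement(48)             # customer upgrades without an IPL
print(mem.check_access(32 << 30))   # True after the upgrade
```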

However, the authors admit that in some cases composability may be difficult to achieve because system architectures may require memory hierarchies that are closely coupled with the core count. They also point out a number of other areas of concern, including compatibility with software licensing models (already an area of contention for multicore processors) and privacy/security issues related to vendors having access to customers’ hardware.

“I think that there is no clear answer as to what are the new economic models that we need or whether we need new economic models at all,” says Kumar. “But now may be the time when a discussion needs to start among academics, industry people, and everyone else who has a stake in it. At least an awareness of the issues is needed.”
