Chips Galore

By Michael Feldman

February 16, 2007

In the past couple of weeks we've been entertained by a plethora of announcements describing “breakthrough” semiconductor and microprocessor technologies that promise far-reaching effects on the industry. These include P.A. Semi's unveiling of its ultra-low-power dual-core Power processor, AMD's Barcelona quad-core chip re-announcement, Intel's demo of its 80-core teraflop processor, the unveiling of a new massively parallel stream processor, and the introduction of IBM's new embedded DRAM technology. And that's just a sample. February has ushered in a veritable Cambrian explosion of semiconductor gadgetry.

Most of these advancements won't make their effects felt until 2008, and well beyond that in the case of Intel's 80-core wonder. How much effect? Separating hype from substance is always a challenge when the announcements are still warm, but I'll take a shot at it.

Eighty Cores, No Waiting

Intel demonstrated its 80-core prototype this week at the International Solid-State Circuits Conference (ISSCC). The 1+ teraflop chip is implemented in 65nm technology. The version demonstrated at the conference was made up of rather simple floating-point cores, chosen to reach the headline-grabbing teraflop mark. But to claim equivalence to a rack of HPC servers or a mainframe, as some analysts have done, is just hyperventilation. Producing a commercially viable version with Intel Architecture-based cores will undoubtedly require a much smaller feature size and more ingenious engineering. Nevertheless, the 62-watt power consumption is nothing short of amazing, and the on-chip routers and 3-D memory stacking offer an innovative approach to a many-core architecture.
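
As a back-of-the-envelope check on the teraflop figure, using the numbers Intel reported at ISSCC (a 3.16 GHz clock and two floating-point multiply-accumulate units per core, each counting as two flops per cycle), the peak rate works out neatly:

```python
# Back-of-the-envelope peak performance for Intel's 80-core prototype,
# using figures reported at ISSCC 2007.
cores = 80
clock_hz = 3.16e9           # demonstrated clock speed
fmacs_per_core = 2          # two multiply-accumulate units per core
flops_per_fmac = 2          # a multiply-accumulate counts as two flops

peak_flops = cores * clock_hz * fmacs_per_core * flops_per_fmac
print(f"Peak: {peak_flops / 1e12:.2f} teraflops")  # → Peak: 1.01 teraflops
```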

More Barcelona

Also taking advantage of the ISSCC stage, AMD released additional details of its upcoming quad-core “Barcelona” Opteron processor, scheduled for release in mid-2007. In what should make high performance computing users especially happy, the new processor will double the width of Barcelona's floating-point execution pipeline to 128 bits, allowing twice as many FP instructions and data to flow through. Most of the other announced enhancements relate to energy conservation. For example, the PowerNow! technology will be enhanced to provide dynamic adjustment of core frequencies, so that individual units don't run hot when they're idle or under reduced load. The system memory interface will also include the capability to power down memory logic when not in use. Additionally, the design takes advantage of “clock gating” to enable automatic shut-down of logic areas that are not being utilized.
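
To put the pipeline widening in concrete terms (generic SSE arithmetic, not AMD's own documentation): a 128-bit register packs twice as many double-precision operands as a 64-bit datapath can process at once, which is where the doubled FP throughput comes from.

```python
# Rough illustration of what a 128-bit-wide FP datapath buys.
# A 128-bit register holds two 64-bit doubles or four 32-bit singles,
# so a full-width pipeline moves twice the data per FP instruction
# that a 64-bit-wide implementation can.
register_bits = 128
doubles_per_instruction = register_bits // 64
singles_per_instruction = register_bits // 32
print(doubles_per_instruction, singles_per_instruction)  # → 2 4
```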

Extreme Clock Gating

Clock gating, a feature that dynamically turns off circuits that are not being used, is becoming more commonplace as designers obsess over energy conservation. The technique is taken to a new level in P.A. Semi's new dual-core PA6T-1682M PWRficient processor, which sips just 5-13 watts at 2 GHz. In this implementation, clock gating is applied systematically to shut off unused circuits throughout the processor. According to Mark Hayter, Chief System Architect at P.A. Semi, clock gating requires that the processor architect integrate the methodology during the design phase; it's not something that can be retrofitted.

“I think most people have done some degree of clock gating,” observed Hayter, “but certainly not to the level of granularity that we've done.”
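
The idea behind clock gating can be sketched in software terms. This is a behavioral model only; in real silicon the gate is essentially an AND of the clock and an enable signal, designed in at the RTL stage as Hayter describes, not something done in code.

```python
# Behavioral sketch of clock gating (illustrative only).
# A gated clock only toggles a unit's registers when its enable
# signal is active, so idle logic burns no dynamic switching power.

class GatedUnit:
    def __init__(self):
        self.toggles = 0  # proxy for dynamic power consumed

    def clock_edge(self, enable: bool):
        # Clock AND enable: with enable low, the edge never reaches
        # the registers and nothing switches.
        if enable:
            self.toggles += 1

unit = GatedUnit()
activity = [True, True, False, False, False, True]  # busy 3 of 6 cycles
for busy in activity:
    unit.clock_edge(enable=busy)

print(unit.toggles)  # → 3 (switching power spent only on busy cycles)
```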

A New Stream Processor
 
Another high performance embedded architecture that garnered attention at ISSCC was SPI's new Stream Processor. Bill Dally, the computer science chair at Stanford, co-founded SPI (Stream Processor Inc.) in 2004 with the idea of developing a highly parallel stream processor for digital signal processing. The processor consists of a heterogeneous set of cores, including a data-parallel unit (DPU) and two MIPS cores: one to run the OS, and one to manage DSP threads and offload compute-intensive functions to the DPU. The current incarnation of the DPU delivers billions of operations per second (gigaops), but the young startup is already looking toward a teraops version for the next generation.

According to Matthew Papakipos, PeakStream chief technology officer, “Bill Dally at SPI has been a leader in the academic research developing stream processor hardware architectures. This research has been very influential in a wide variety of modern hardware designs including programmable graphics processors, the IBM/Sony/Toshiba Cell processor and upcoming many-core CPU designs. SPI's processor is likely to make a significant impact on digital signal processing for embedded applications.”

A standard C programming interface (along with some stream processing extensions) is included to provide developers with a familiar software environment. Access to the memory hierarchy is managed by the compiler/runtime system to take advantage of data locality. Sounds promising.
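
SPI's actual C extensions aren't detailed in the announcement, but the stream model itself can be sketched generically: a kernel function is applied across whole streams of data, and the compiler/runtime, not the programmer, stages that data through on-chip memory for locality. The names below are hypothetical, purely for illustration.

```python
# Generic sketch of the stream-programming model (hypothetical names,
# not SPI's actual API). A "kernel" is applied to every element of
# the input streams; on real hardware this loop would run across the
# DPU's parallel lanes rather than sequentially.

def stream_kernel(kernel, *streams):
    return [kernel(*args) for args in zip(*streams)]

# Example: a multiply stage of the kind found in FIR filters,
# a common DSP workload.
samples = [1.0, 2.0, 3.0, 4.0]
coeffs  = [0.5, 0.5, 0.5, 0.5]
scaled = stream_kernel(lambda s, c: s * c, samples, coeffs)
print(scaled)  # → [0.5, 1.0, 1.5, 2.0]
```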

Embedded DRAM Challenges SRAM

Also announced at ISSCC was IBM's new embedded dynamic RAM (DRAM) technology for on-chip memory. The new DRAM is designed to replace the static RAM (SRAM) currently used for on-chip cache in most processors. Until now, because it was so much slower than SRAM, DRAM was mostly relegated to off-chip memory. Although still not quite as speedy as SRAM, embedded DRAM has the advantages of smaller cell dimensions, less leakage, and overall better performance characteristics.
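
The density advantage comes down to cell structure: a conventional SRAM cell requires six transistors, while a DRAM cell needs just one transistor plus a capacitor, commonly cited as roughly a 3x density gain. A back-of-the-envelope sketch (illustrative figures, not IBM's published specs):

```python
# Rough cache-density comparison. The 3x figure is the commonly
# cited density advantage of 1T1C embedded DRAM over 6T SRAM;
# the cache size is a hypothetical on-chip budget.
sram_cache_mb = 8
edram_density_advantage = 3

edram_cache_mb = sram_cache_mb * edram_density_advantage
print(f"{edram_cache_mb} MB of eDRAM in the area of "
      f"{sram_cache_mb} MB of SRAM")
```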

Especially for multi-core, high performance chips, the imbalance between memory and processor speeds continues to be one of the fundamental problems limiting overall application performance. The result is processors whose cores float in a vast sea of SRAM cache. The Itanium processor is probably the most extreme example of this arrangement, but it is by no means the only one.

In the past, IBM used embedded DRAM in the PowerPC-based processors of its Blue Gene/L supercomputers. The new technology will enable on-chip DRAM to be used in mainstream IBM chips in 2008 as part of its 45nm offerings. IBM claims that embedded DRAM will effectively double processor performance beyond what could have been achieved with traditional scaling.

For IBM at least, this seems like a can't-miss technology. The company has been singing the praises of embedded DRAM for a while, and the technology circumvents many of the shortcomings of SRAM, especially for high-end chips. AMD is looking at Z-RAM technology from Innovative Silicon for denser caches, so in this case it may choose not to leverage the IBM partnership. Intel is researching “floating-body cell” memory to replace SRAM, but remains uncommitted as far as commercial production goes.

Super FPGAs

Researchers at Worcester Polytechnic Institute (WPI) want to develop a new kind of reconfigurable computing device that has the superior performance and energy consumption characteristics of an ASIC, but the programmability of an FPGA. According to this week's announcement: “Using a type of parallel computing called stream processing, the chip will complete hundreds of calculations simultaneously, enabling it to perform up to 300 times faster than microprocessors and about 15 times faster than FPGAs.”

DARPA is funding this with a very modest 18-month, $150,000 award, so don't expect any miracles in the next year or two.

Quantum Leaping

Perhaps the biggest processor-related news of recent weeks came from a company that has very little to do with digital computing. On Tuesday, a small Canadian startup named D-Wave demonstrated its prototype quantum computer. In front of a few hundred people in Mountain View, California, D-Wave's 16-qubit computer (the actual quantum hardware was off-site) ran three different applications. The company then proceeded to put forth its vision of how QC will change the nature of computing forever. As the first commercial QC vendor, D-Wave is looking to become the Cray of quantum computing.

Really understanding quantum computing requires a graduate course in physics, and perhaps a belief in parallel universes as well. But even mere mortals can appreciate the possibilities of this new technology. Read our feature coverage by Bob Feldman (no relation) and me in this week's issue to see what all the hubbub is about.
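
For a sense of why qubit counts matter: an n-qubit register is described by 2^n complex amplitudes, so the classical cost of simulating one doubles with every qubit added. A quick arithmetic sketch:

```python
# State-space growth of an n-qubit register: 2**n complex amplitudes.
for qubits in (16, 32, 64):
    amplitudes = 2 ** qubits
    print(f"{qubits} qubits -> {amplitudes:,} amplitudes")

# At 16 qubits (D-Wave's demo machine) that is 65,536 amplitudes,
# still small enough to simulate on an ordinary classical computer.
```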

And for an overall perspective of the week's HPC news, chips or otherwise, check out John West's excellent wrap-up: The Week in Review.

----

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
