The Week in Review

By John E. West

February 16, 2007

Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.

>>Schrödinger's Computer

Quantum computing was much in the news this week as Canadian tech startup D-Wave Systems unveiled Orion, a 16-qubit superconducting adiabatic quantum computer processor. The commercial version of this early prototype system will ultimately be targeted at solving NP-hard problems that conventional digital computers have a hard time with.

There are lots of questions about the technology, however. First are the fundamental questions raised by some experts over whether there is enough evidence to prove that the calculations taking place are actually quantum, and not just an exotic analog computation happening at 4 millikelvin. Then there are questions about whether the technology will scale by the factor of 1,000 or more needed to address problems too hard for computers to solve today.
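
To see why that scaling matters, consider the size of the configuration space a classical brute-force search would face. Here's a minimal Python sketch (my own back-of-envelope, not D-Wave's math) comparing Orion's 16 qubits with a machine 1,000 times larger:

    import math

    # The configuration space of an n-qubit register has 2**n states,
    # which is what a naive classical brute-force search must cover.
    def state_space_digits(n_qubits: int) -> int:
        # Decimal digits in 2**n_qubits, via n * log10(2).
        return math.floor(n_qubits * math.log10(2)) + 1

    print(f"16 qubits: {2**16:,} states")           # 65,536
    print(f"16,000 qubits: a number with "
          f"{state_space_digits(16000):,} digits")  # 4,817 digits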

Still, most everyone does agree that this is an important step in a very interesting direction, and to their credit D-Wave is very open about both the questions and the promise of their technology. If you'd like to do your own digging, I recommend Scientific American's online coverage as a good place to start, along with this article by Ashlee Vance at The Register.

>>The International Solid State Circuits Conference

A lot of the week's goings-on in IT came out of presentations at the International Solid State Circuits Conference (ISSCC) in San Francisco, where the major chip companies were all showcasing their technology futures.

Intel gave us more details on its 1 TFLOPS 80-core experimental chip. Yes, the chip only has a 32-bit address space, and yes, it has dramatically simplified circuitry (about one-third the number of transistors of a conventional Intel chip). But Intel's advance is important in that it's spurring a whole new conversation about what operating systems and software might look like if they didn't have to spend so many millions of lines of code managing what used to be a scarce resource: the compute core.
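
A little arithmetic with just the figures above puts the design in perspective: the per-core performance budget is modest, and a 32-bit address space caps directly addressable memory at 4 GiB. A quick Python check:

    # Per-core and addressing arithmetic, using only the numbers
    # quoted above for Intel's experimental chip.
    peak_flops = 1.0e12   # 1 TFLOPS aggregate
    cores = 80
    print(f"per core: {peak_flops / cores / 1e9:.1f} GFLOPS")  # 12.5

    addr_bits = 32
    print(f"addressable memory: {2**addr_bits // 2**30} GiB")  # 4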

AMD's discussions of its Barcelona quad-core offering focused on its own claim that the chip performs 40 percent better than Intel's quad-core line, and on its innovations in power and thermal management. Among other features, Barcelona chips power down memory logic when it's idle and employ clock gating to shut down unused areas of the chip.
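
To illustrate the idea behind clock gating, here's a toy Python model (a sketch of the concept only, not AMD's implementation): dynamic power is burned only by blocks whose clocks are running, while static leakage continues regardless.

    # Toy clock-gating model: dynamic power scales with the fraction
    # of blocks still clocked; leakage power does not go away.
    def chip_power(p_dynamic_full, p_static, fraction_gated_off):
        dynamic = p_dynamic_full * (1.0 - fraction_gated_off)
        return dynamic + p_static

    # Hypothetical numbers, for illustration only.
    print(chip_power(60.0, 20.0, 0.0))  # everything clocked: 80.0 W
    print(chip_power(60.0, 20.0, 0.5))  # half the chip gated: 50.0 W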

IBM was talking about Power6, where its approach is to improve performance by cranking the clock up to nearly 5 GHz. This is clearly a contrarian approach. I understand that the move to hafnium-juiced chips will help stave off the fundamental physics problems IBM is going to encounter on this path, but the approach appears to have a much shorter lifespan than the one IBM's chip competitors are taking, and I wonder whether it isn't simply buying time while the company adjusts its path forward.
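
The textbook physics behind my skepticism: dynamic power in CMOS scales roughly as P = aCV^2f, and higher clocks generally demand higher supply voltage, compounding the cost. A quick sketch with made-up but plausible numbers:

    # Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f.
    # The numbers below are illustrative, not Power6 specifications.
    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        return alpha * c_farads * v_volts**2 * f_hz

    base = dynamic_power(0.2, 1e-9, 1.1, 3.0e9)  # a ~3 GHz part
    fast = dynamic_power(0.2, 1e-9, 1.3, 5.0e9)  # ~5 GHz, higher voltage
    print(f"power grows {fast / base:.1f}x")     # ~2.3x for ~1.7x clock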

In a much more interesting move, IBM announced an evolution in computer memory technology that may enable it to put up to three times more memory on the same chip as the processor. IBM says it has been able to speed up DRAM to the point that it's nearly as fast as SRAM, enabling DRAM to replace SRAM as the choice for on-chip memory.
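
The density win comes from cell structure: a standard SRAM cell takes six transistors, while a DRAM cell takes one transistor plus a capacitor. A first-order sketch of why a roughly 3x density claim is plausible (real cell areas depend heavily on process and layout; the figures here are only the well-known device counts):

    # First-order density comparison: 6T SRAM cell vs. 1T1C DRAM cell.
    sram_devices = 6  # transistors per SRAM cell
    dram_devices = 1  # transistors per DRAM cell, plus one capacitor

    ratio = sram_devices / dram_devices
    print(f"device-count ratio: {ratio:.0f}x")  # 6x by transistor count
    # The capacitor and peripheral circuitry eat into that 6x, which is
    # consistent with an on-chip density gain closer to IBM's ~3x claim.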

>>Electricity use by servers in the U.S. doubles

There was a lot of coverage in the IT press this week of a new Lawrence Berkeley study, commissioned by AMD, of power consumption in servers. It's now estimated that servers account for 1.2 percent of all electricity use in the U.S. (about the same as all the color TVs in the country) at a cost of about $2.7B. More troubling for the global warming crowd, the study shows that electricity use by servers doubled from 2000 to 2005. You can find the entire study in PDF form at http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.
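
Doubling over five years implies a compound growth rate of about 15 percent per year; the quick arithmetic:

    # Compound annual growth rate implied by a doubling from 2000 to 2005.
    years = 5
    cagr = 2.0 ** (1.0 / years) - 1.0
    print(f"implied growth: {cagr:.1%} per year")  # ~14.9% per year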

>>Bucket o'news

Stream Processors, Inc. started talking this week about its new stream processor for digital signal processing. The chip contains two MIPS cores (one for Linux-level tasks and I/O, the other for real-time DSP work) in addition to a “data parallel unit” that can offload hefty tasks using VLIW and SIMD techniques (tip of the hat to Chris Aycock for that one).
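
The split the company describes, a general-purpose core orchestrating while a data-parallel unit grinds through the regular work, is the classic stream-processing pattern. A minimal Python sketch of the idea (my own illustration; SPI's actual programming model will certainly differ):

    import numpy as np

    # Stream-processing pattern in miniature: the "control core" sets
    # up a kernel, and the data-parallel step applies it uniformly to a
    # whole block of samples (standing in for the DPU's SIMD lanes).
    def fir_filter(samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
        # One multiply-accumulate kernel swept across the entire
        # stream -- exactly the regular work a DPU is built to offload.
        return np.convolve(samples, taps, mode="valid")

    signal = np.random.randn(1_000_000)  # a block of incoming samples
    taps = np.array([0.25, 0.5, 0.25])   # simple smoothing kernel
    filtered = fir_filter(signal, taps)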

Some in the IT world started wondering where all the hafnium that Intel and IBM are talking about is going to come from once it starts being used to make the world's computer chips. It seems that only 50 tons are produced worldwide each year. Not to worry, says IBM Chief Technologist Bernard Meyerson in a piece carried by Reuters. The hafnium in one cubic centimeter could be spread across 10 football fields' worth of silicon wafers. “That assumes a 50-atom-high pile of it,” said Meyerson, “which frankly would be an extraordinarily large amount for materials like this one.” Whew: dodged that one.
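
For scale, 50 tons is an enormous number of those cubic centimeters. Converting with hafnium's density of roughly 13.3 g/cm^3 (the one figure here that isn't from the article):

    # Annual hafnium supply expressed in the article's unit of choice.
    # Density of hafnium: ~13.3 g/cm^3.
    tons_per_year = 50
    grams = tons_per_year * 1_000_000          # metric tons to grams
    volume_cm3 = grams / 13.3
    print(f"{volume_cm3:,.0f} cm^3 per year")  # ~3.8 million cm^3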

Several new systems came online this week, including the largest shared-memory system in Canada. The 5 TFLOPS SGI Altix 4700 will be used by the Réseau québécois de calcul de haute performance (RQCHP) at the University of Montreal for research in physics, chemistry, engineering, medicine, computer science, biochemistry, bioinformatics, and several other fields.
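
For a sense of the machine's scale: if its Itanium 2 cores peak at roughly 6.4 GFLOPS apiece (an assumption on my part: 1.6 GHz at 4 flops per cycle; the article gives only the 5 TFLOPS total), the system works out to something on the order of 780 cores:

    # Rough core-count estimate for a 5 TFLOPS Altix 4700, assuming
    # Itanium 2 cores at 1.6 GHz x 4 flops/cycle = 6.4 GFLOPS each.
    # (The per-core figure is my assumption, not from the article.)
    system_flops = 5.0e12
    per_core_flops = 1.6e9 * 4
    print(f"~{system_flops / per_core_flops:.0f} cores")  # ~781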

-----

John West summarizes the headlines in HPC every day at insideHPC.com, and writes on leadership and career issues for technology professionals at InfoWorld and on his own blog at http://onlytraitofaleader.com/. You can contact him at [email protected].
