The Week in Review

By John E. West

April 16, 2009

Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.

10 words and a link

Top chip engineer leaves Sun for Microsoft

Visual timeline of the rise and bankruptcy of Silicon Graphics

Nehalem memory cheat sheet

Sun revamps HPC line, new Nehalem, networking, storage

Intel announces Q1, posts profit

Cisco buys scheduling software maker

NCSA mentors students in high performance systems

IBM stream computing prototype achieves 21x in finance

NCAR Cheyenne moves forward

U Mich picks SiCortex for heart research

Software enables proteomics research on Amazon EC2

Recap of the German Windows-HPC user group

Argonne raises machine room temps, explores energy-aware job scheduling

DOE calls for INCITE proposals, 1.3 billion hours at stake

The right way to exascale

Dan Reed has reposted on his blog an essay that recently appeared on the CACM blog, in which he talks about the shortcuts (my word, not his) we took to get to petascale, and his hope that we’ll take a longer view on the way to exascale.

He writes (referring to some of the original petascale activities in the early 1990s):

At the time, most of us were convinced that achieving petascale performance within a decade would require some new architectural approaches and custom designs, along with radically new system software and programming tools. We were wrong, or at least so it superficially seems. We broke the petascale barrier in 2008 using commodity x86 microprocessors and GPUs, Infiniband interconnects, minimally modified Linux and the same message-based programming model we have been using for the past twenty years.

However, as peak system performance has risen, the number of users has declined. Programming massively parallel systems is not easy, and even terascale computing is not routine. Horst Simon explained this with an interesting analogy, which I have taken the liberty of elaborating slightly. The ascent of Mt. Everest by Edmund Hillary and Tenzing Norgay in 1953 was heroic. Today, amateurs still die each year attempting to replicate the feat. We may have scaled Mt. Petascale, but we are far from making it a pleasant or even routine weekend hike.

This raises the real question: were we wrong in believing different hardware and software approaches were needed to make petascale computing a reality? I think we were absolutely right that new approaches were needed. However, our recommendations for a new research and development agenda were not realized. At least in part, I believe this is because we have been loath to mount the integrated research and development needed to change our current hardware/software ecosystem and procurement models.
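
Reed doesn’t name it, but the message-based programming model he refers to is MPI. For readers who haven’t run into it, here is a minimal sketch of that style of programming: two ranks exchanging a single integer with explicit sends and receives. It assumes an MPI implementation such as MPICH or Open MPI (compile with mpicc, run with mpirun -np 2), and the value passed around is of course arbitrary.

    /* Minimal sketch of the message-passing (MPI) programming model.
       Rank 0 sends one integer to rank 1, which prints it.
       Assumes an MPI implementation is installed; run with at least 2 ranks. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;                       /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }
        MPI_Finalize();
        return 0;
    }

Every communication event is explicit in the source, and the failure of any participating node typically takes the whole job down with it, which is exactly the property Reed argues we need to move beyond.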

Reed’s suggested solution?

I believe it is time for us to move from our deus ex machina model of explicitly managed resources to a fully distributed, asynchronous model that embraces component failure as a standard occurrence. To draw a biological analogy, we must reason about systemic, organism health and behavior rather than cellular signaling and death, and not allow cell death (component failure) to trigger organism death (system failure). Such a shift in world view has profound implications for how we structure the future of international high-performance computing research, academic-government-industrial collaborations and system procurements.

I agree with this point of view, and it echoes some of the comments Thomas Sterling made at the HPCC conference in Newport a couple of weeks ago, in the sense that both advocate a revolutionary, rather than an evolutionary, approach to exascale. My own reason for agreeing is that while, yes, we can build petascale machines, we are getting between one and five percent of peak on general applications. This is what an evolutionary model gets you. We are well past the point when a flop is worth more than an hour of an application developer’s time. We need to encourage the development of integrated hardware/software systems that help programmers write correct, large scale applications that get 15, 20, or even 30 percent of peak performance. To mangle Hamming, the purpose of supercomputing is discovery, not FLOPS.
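
To put “percent of peak” in concrete terms: peak is the theoretical maximum floating-point rate of the machine (roughly cores times clock times floating-point operations per cycle per core), and the achieved fraction is simply a measured sustained rate divided by that number. A back-of-the-envelope sketch follows; every figure in it is made up for illustration and describes no particular system.

    /* Back-of-the-envelope percent-of-peak calculation.
       All figures below are hypothetical, chosen only to illustrate the arithmetic. */
    #include <stdio.h>

    int main(void) {
        double cores = 150000.0;          /* hypothetical core count                  */
        double clock_ghz = 2.3;           /* clock rate in GHz                         */
        double flops_per_cycle = 4.0;     /* e.g., two adds + two multiplies per cycle */
        double peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0;

        double sustained_tflops = 45.0;   /* hypothetical measured application rate    */
        double percent_of_peak = 100.0 * sustained_tflops / peak_tflops;

        printf("peak %.0f TF, sustained %.0f TF, %.1f%% of peak\n",
               peak_tflops, sustained_tflops, percent_of_peak);
        return 0;
    }

With those made-up numbers the machine peaks at 1,380 TF and the application sustains about 3.3 percent of it, squarely in the one-to-five-percent range I’m complaining about.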

Not that I think it will happen. The government has been stubbornly unwilling to coordinate its high end computing activities around any of the several research agendas whose creation it has funded, but whose implementation it has not (you could pick an arbitrary starting point with the PITAC reports, or move either way in time to find sad examples of neglect). My own observation from inside part of this system is that the government has largely begun to think of HPC as “plumbing” that should “just work” in support of R&D, not as an object of R&D itself. There are a few exceptions (mostly in parts of DOE), but without leadership that starts in the President’s office (probably with the science advisor pushing an effort to get POTUS to make his department secretaries fall in line), this is not likely to change on its own.

Our curse is that we have something that kind of works. One of my grad school professors used to say that the most dangerous computational answers are those that “look about right.” If we had a model that was totally broken, we’d be forced to invest in new models of computation, and because of the scale of that investment we’d be encouraged to make a coordinated effort of it. But our model isn’t totally broken, and as long as it kind of works, I don’t see anyone willing to dump out the existing rice bowls and start over.

Leak in supercomputer building forces replacement of $4M in gear

Late last week Indystar.com reported that a steam leak in the building under construction to house Indiana University’s supercomputers forced the replacement of $4.2M in support gear.

From the report, which cites IU architect Bob Meadows:

He says repairs could set back construction of the $32.7 million project by three months, with completion now scheduled in July.

No computers have been installed in the Data Center, but generators and battery backup systems in the building must be replaced.

New VP, new business for Cray

Cray has been in the process of formalizing a new line of business for at least a couple of months. Its “custom engineering” business will give customers with specialized requirements access to Cray’s engineering bench to build custom solutions:

“Our custom engineering efforts are focused on leveraging our supercomputing technology, experience and innovation and tailoring these into computing, storage and consulting solutions designed to meet very specific customer needs,” said Peter Ungaro, president and CEO of Cray. “We are very excited to have Skip help us continue to grow and expand in this important aspect of our business and add to the overall strength of our leadership team.”

From what I understand, the team will focus not just on tweaking Cray gear for special installations, but on addressing customer needs and building solutions out of whatever technologies make sense. I would expect, however, that the Cray stockroom would be the first stop.

This week the company announced the leadership for the new business:

Cray Inc. today announced the appointment of John “Skip” Richardson to the position of vice president of business development for the company’s custom engineering team. With more than 20 years of business development experience in the technology and aerospace industries, Richardson will be responsible for promoting Cray’s custom engineering solutions to government agencies, commercial customers and systems integrators.


Prior to joining Cray, Richardson served as vice president of corporate business development at Sarnoff, Inc., a subsidiary of SRI International, where he was responsible for managing and growing business-to-business and government contracts in research and development for U.S. Department of Defense and commercial clients. Prior to Sarnoff, Richardson held various business development roles at Digimarc, IBM, Halliburton and Honeywell.

-----

John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].
