My Day at SC07 … and a Whole Lot of News

By Derrick Harris

November 19, 2007

For anyone (such as myself) tasked with covering cutting-edge enterprise IT news and announcements, it is unlikely there has been a busier time than last week, which included Supercomputing 07 in Reno, Nev.; Oracle OpenWorld in San Francisco; and the Microsoft TechEd IT Forum in Barcelona, Spain. And although I usually like to comment on the big news of the week, there is no way I can even begin to touch upon everything that was announced within the last seven days, so I’m not even going to try. However, we will do our best to bring you more in-depth coverage of many of these announcements in the coming weeks and into the new year.

Nevertheless, I was able to make it to Reno for a day of SC07, and I sat in on a couple of very interesting sessions. The first was a Birds-of-a-Feather session called “Supercomputers or Grids: That is the Question!” which was chaired by Wolfgang Gentzsch (D-Grid) and Dieter Kranzlmueller (Johannes Kepler University Linz), and featured panelists Francine Berman (San Diego Supercomputer Center), Erwin Laure (CERN and EGEE), Satoshi Matsuoka (Tokyo Institute of Technology and NAREGI) and Michael Resch (High Performance Computing Center Stuttgart and PRACE). Despite its either/or title, though, the session focused on the combined use of supercomputers and grids, although the panelists all had slightly different takes on how the two architectures work together.

Berman, for example, said the focus should be on finding “the right tool for the right job,” and she presented types of applications that are better suited to one architecture than the other. In her mind, organizations looking to get important work done need not make a binding decision to use one platform over another when the reality is that both can — and should — have a place in an organization’s HPC plans. EGEE’s Laure, on the other hand, believes supercomputers and grids are “two fundamentally different things living in the same ecosystem.” To back up this statement, he argued that while supercomputers exist to solve the most demanding computing problems, the purpose of grids is the federation of computation and data, which makes them an effective tool for collaborative research and allows for dynamic reconfiguration. The next step, he added, is to federate supercomputers and grids so that researchers have seamless access to the features of both. Resch echoed this sentiment — to a degree — in his intentionally provocative presentation, concluding that the actual model for the co-existence of the two platforms is “supercomputers on grids.” “Grid is the ecosystem,” said Resch, likening supercomputers to power plants and the grid to the power grid.

In my opinion, though, the star of the show was Matsuoka, who presented his vision of grids shedding their early goal of making PCs pretend to be supercomputers and focusing instead on making supercomputers act like Internet datacenters (IDCs). According to Matsuoka, the ultimate business model for large-scale grids might well be in aggregating HPC resources and granting end-users virtual access to them, much the same way Web standards and protocols make access to IDC resources transparent. In such a model, he said, highly managed supercomputers would offer better service quality than, say, a grid of PCs, and offering access to backend resources that exceed what you can do on your laptop is added value that will keep people coming back. It sounds to me like a beefed-up Network.com, and not entirely unlike what TeraGrid is doing with its Scientific Gateways, but it is nonetheless a grand idea that shouldn’t be too difficult to make happen should the right people wish it so.

I also got a chance to attend a “CTO Roundtable” featuring Nancy Stewart, senior vice president and chief technology officer in the information systems division for Wal-Mart Stores Inc.; Kevin Humphries, senior vice president of technology systems for FedEx Corporate Services; Reza Sadeghi, CTO of MSC Software; and Anna Ewing, executive vice president of operations and technology and chief information officer of The Nasdaq Stock Market Inc. As you might imagine, there is no shortage of valuable insights when the IT masterminds of some of the world’s largest corporations share the stage, but I want to share just a few key, if somewhat obvious, observations.

First, and this is the obvious one, Wal-Mart is huge, gigantic, ginormous, and any other adjective indicating sheer size. Stewart made this crystal clear when discussing the company’s most pressing data problem — its 400-billion-row table, which ultimately will top a trillion rows. Managing this data and the HPC environment necessary to process it is no small undertaking, nor is it a job for anyone but Wal-Mart. According to Stewart, the retail giant doesn’t have SLAs with any of the ISVs with whom it does business because no ISV could afford to pay for an outage of even an hour (on the day after Thanksgiving, for example, Wal-Mart expects to be doing business in the neighborhood of $2 billion per hour). For this reason, as well as to ensure reliability, serviceability and the ability to make dynamic changes, Wal-Mart builds about 80 percent of its software in-house.

Stewart also gave the audience a look into Wal-Mart’s overall environmental policies and efforts, which range from IT concerns like using virtualization to reduce power usage, to mandating smaller packages from product manufacturers. The latter, for what it’s worth, leads to less resource consumption across the board, from the actual materials used in production to the amount of gas used by delivery vehicles in transporting the same number of units.

Finally, and speaking of delivery, FedEx’s Humphries used a good portion of his energy bemoaning the lack of talent available to deal with his company’s increasingly fabric-like IT infrastructure. More and more, he said, and thanks to grid technologies, HPC is becoming embedded in the general IT environment of large enterprises, and the islands of skills that once sufficed are no longer cutting it. Of course, anyone in the grid world has heard this all before, as the elimination of application silos inherently presents its own problems in terms of realigning and retraining IT staff to handle a new platform. The question this raises for me is why FedEx — and any other company experiencing the same issue — doesn’t invest in educating university students in the technologies that make its business run. Given the proprietary nature of the corporate world, I don’t expect them to offer up parts of their software like Google and, most recently, Yahoo have done, but companies like FedEx could throw a little money at the problem and make sure universities have the resources to teach students how to build, maintain and manage large-scale, distributed corporate infrastructures.

As for the rest of this week’s issue, make sure to check out the features that originally ran in HPCwire’s live coverage of SC07, and please note that “cloud computing” is officially the new buzzword and buzz technology, with Yahoo following Google in taking it to universities (more on this next week), and IBM now offering its “Blue Cloud” solutions. Other items that definitely are worth checking out include: “OGF Spec Makes Grids Interoperable”; “Azul, GemStone Ally on Extreme Transaction Processing”; “Microsoft Announces New System Center Offerings”; “Majitek Licensing GridSystem for Free to Technical Community”; “Microsoft Supports SOA with Windows HPC Server 2008”; and “HP Advances Flexibility of Blades Across the Datacenter.” Oh, and did I mention that Oracle, Microsoft and Sun all announced new virtualization platforms? Something tells me we’ll be hearing more about this …

-----

Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at [email protected].

 
