The Weekly Top Five – 03/03/2011

By Tiffany Trader

March 3, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the new “Trestles” system at SDSC; Canada’s big supercomputing allocation; NVIDIA’s CUDA Toolkit enhancements; a joint public-private manufacturing initiative; and Supermicro’s latest compact offerings.

SDSC’s “Trestles” Comes Online

“Trestles,” the newest supercomputer at the San Diego Supercomputer Center (SDSC), debuted this week. This 100-teraflop system is available to users of the TeraGrid, the country’s largest open science infrastructure.

Trestles features Appro’s latest quad-socket servers, each outfitted with four 8-core AMD Magny-Cours Opteron processors and connected via a QDR InfiniBand fabric. With 324 nodes, the cluster has a total of 10,368 cores. Each server node sports 64 gigabytes (GB) of DDR3 memory and 120 GB of flash memory. On the whole, the system has 20 terabytes of memory and 38 terabytes of flash memory, and runs at a peak speed of 100 teraflop/s. Based on the latest TOP500 list, Trestles would come in at #111.
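
Those totals follow directly from the per-node figures quoted above. As a quick, purely illustrative sanity check (only numbers from the article are used; nothing else is assumed), the arithmetic works out as in the short C++ sketch below.

```cpp
// Back-of-the-envelope check of the Trestles totals quoted above,
// using only the per-node figures given in the article.
#include <cstdio>

int main() {
    const int nodes            = 324;   // Appro quad-socket server nodes
    const int sockets_per_node = 4;
    const int cores_per_socket = 8;     // 8-core Magny-Cours Opterons
    const double dram_gb_per_node  = 64.0;
    const double flash_gb_per_node = 120.0;

    std::printf("cores: %d\n", nodes * sockets_per_node * cores_per_socket);  // 10,368
    std::printf("DRAM:  %.1f TB\n", nodes * dram_gb_per_node / 1024.0);       // ~20 TB
    std::printf("flash: %.1f TB\n", nodes * flash_gb_per_node / 1024.0);      // ~38 TB
    return 0;
}
```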

UCSD and SDSC are pioneering the use of flash technology in HPC systems, a move away from slower spinning-disk hard drives. Flash got its start in small devices, like mobile phones and laptop computers, but its use in high-end systems is gaining momentum. Benefits of flash include faster read/write speeds, higher reliability, and better energy efficiency.

Trestles will serve as a bridge until the more powerful 245-teraflop “Gordon” supercomputer is installed later this year. Gordon is part of a five-year, $20 million NSF-funded project and will employ a large amount of flash memory. “Dash,” deployed last April, is SDSC’s other flash-based system. All three clusters employ a similar architecture, one that leverages commodity parts in novel ways to maximize performance.

Trestles has been in development since last August, when SDSC announced the $2.8 million NSF award. The supercomputer will be available to TeraGrid users through 2013.

Compute Canada Announces Largest Allocation

Compute Canada, a national platform of advanced computing resources, in partnership with SciNet, Canada’s largest supercomputer center, has announced the largest allocations ever made on Canadian supercomputers, intended to enable major scientific advances. The high-end resources will be used to boost research in a diverse array of scientific disciplines. Applications include aerospace design, climate modeling, medical imaging, galaxy formation, proton collisions and more.

The grants were awarded on a competitive basis, taking into account both the projects’ scientific merit and computational need. It is the role of SciNet and Compute Canada to help Canadian researchers create tools and products that improve lives.

Dr. Seth Dworkin, a researcher at the University of Toronto’s Mechanical and Industrial Engineering department, is using the computing resources to study the combustion of biofuels, aiming to develop cleaner burning substitutes for aviation use. Dr. Dworkin states, “The expertise and computational resources at SciNet are helping us tackle problems of combustion-generated emissions using simulations of unprecedented size and accuracy. We’re learning more and more about the formation and nanostructure of atmospheric pollutants and are now able to apply that knowledge to the design of engines and alternative fuels.”

New Manufacturing Effort Relies on HPC

A major manufacturing effort was launched by the Obama administration on Wednesday. The National Digital Engineering and Manufacturing Consortium (NDEMC) was formed to bolster the nation’s small and medium-sized manufacturing enterprises (SMEs) by increasing their access to high-end computing resources. The collection of public-private interests was organized by the Council on Competitiveness in light of poor HPC adoption among small-to-medium-sized manufacturing outfits. Matching underserved manufacturing companies with cutting-edge modeling and simulation tools has been shown to improve product quality, reduce implementation times, and cut costs.

In addition to the Council on Competitiveness, project partners include the Ohio Supercomputer Center, the National Center for Manufacturing Sciences, and Purdue University. Joining them are private partners Deere & Co., General Electric, Procter & Gamble, and Lockheed Martin.

Also announced was the Midwest Project for SME-OEM Use of Modeling and Simulation. Part of the greater NDEMC effort, the Midwest Project will likewise apply modeling and simulation to boost manufacturing output.

The US Department of Commerce contributed $2 million in funding, with an additional $2.5 million coming from industrial partners. As reported in our feature story, the project is expected to start within the next four to six weeks and last for 18 months. Supporters hope that a successful outcome will lead to renewed support and additional funding.

In related news, the Ohio Supercomputer Center (OSC) has signed a collaboration agreement with the Procter & Gamble Company. The parties will work together on innovative modeling and simulation projects.

NVIDIA Updates CUDA Toolkit

This week NVIDIA announced its latest CUDA Toolkit, version 4.0. The release was designed to enable more developers to take advantage of the parallel programming capability of GPU computing. New features include unified virtual addressing, GPU-to-GPU communication and expanded C++ template libraries.

The company describes the updates as follows:

  • NVIDIA GPUDirect 2.0 Technology – Offers support for peer-to-peer communication among GPUs within a single server or workstation. This enables easier and faster multi-GPU programming and application performance.
  • Unified Virtual Addressing (UVA) – Provides a single merged-memory address space for the main system memory and the GPU memories, enabling quicker and easier parallel programming.
  • Thrust C++ Template Performance Primitives Libraries – Provides a collection of powerful open source C++ parallel algorithms and data structures that ease programming for C++ developers. With Thrust, routines such as parallel sorting are 5X to 100X faster than with Standard Template Library (STL) and Threading Building Blocks (TBB).
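
To give a flavor of the Thrust library mentioned in the last item above, here is a minimal, hypothetical usage sketch: sorting a vector of random integers on the GPU. It assumes a CUDA-capable device and the Thrust headers bundled with the toolkit (compiled with nvcc); it is an illustration, not code taken from NVIDIA’s release materials.

```cpp
// Minimal Thrust sketch: generate random integers on the host,
// copy them to the GPU, sort in parallel, and copy the result back.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main() {
    thrust::host_vector<int> h_vec(1 << 24);                  // 16M integers on the host
    for (size_t i = 0; i < h_vec.size(); ++i)
        h_vec[i] = std::rand();

    thrust::device_vector<int> d_vec = h_vec;                  // transfer to the GPU
    thrust::sort(d_vec.begin(), d_vec.end());                  // parallel sort on the device

    thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());   // results back to the host
    return 0;
}
```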

A release candidate of CUDA Toolkit 4.0 will be available starting this Friday.

For more in-depth analysis, check out our feature coverage.

Supermicro Debuts 8-Way Server

Supermicro introduced its 8-Way Enterprise Server at the CeBIT trade show in Hannover, Germany, on Monday. The solution packs up to sixty-four Xeon processor cores (eight 8-core processors) into a 5U enclosure that also includes 64 DIMMs, up to 10 PCI-E 2.0 expansion slots, and 24 2.5″ hard drives. An 80-core option, based on ten-core Xeon MP parts, is coming soon, according to the company. Supermicro also launched its GPU SuperBlade system, which supports 20 GPUs in a single 7U blade enclosure.

Charles Liang, CEO and president of Supermicro, commented on the new offerings:

Compared to other GPU-enabled blade solutions Supermicro’s GPU SuperBlade provides more than double the number of GPUs per 1U of rack space, and our 8-Way SuperServer is unique in the industry with support for eight next-generation ten-core Intel Xeon MP processors in a 5U form factor, up to 2TB of memory and 10 PCI-E 2.0 slots for high availability and virtualization support. Taken together with Supermicro’s broad line of 1U, 2U and 4U GPU supercomputing servers, we have established undisputed leadership in the new and rapidly evolving GPU computing business space. These new products play well into our strategy to deliver the end-to-end IT hardware needs of datacenter, HPC and server farm customers.
