The Weekly Top Five – 04/07/2011

By Tiffany Trader

April 7, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Intel’s “Westmere EX” launch party; the Albert Einstein Institute’s new cluster; TACC’s Lonestar 4 inauguration; Penguin Computing’s financial markets server; and NextIO’s partnership with Bright Computing.

Intel Launches New Westmere EX Processor Family

This week Intel Corp. announced a new family of server processors designed to accelerate mission-critical computing. The new Xeon E7 processor family (codenamed “Westmere EX”) is targeted at the kind of data-intensive applications used in business intelligence, real-time data analytics and virtualization.

Based on a 32-nanometer (nm) process technology, the new Intel Xeon CPUs support up to 10 cores with Intel Hyper-Threading Technology, and, according to the company, deliver up to 40 percent greater performance than the previous-generation Xeon 7500 (“Nehalem EX”) processors. Datacenter managers will welcome new security features, such as Intel Advanced Encryption Standard New Instructions (AES-NI) and Intel Trusted Execution Technology (Intel TXT).
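The announcement only names the instructions, but a small illustration may help. The sketch below runs a single AES encryption round through the AESENC instruction; it is an assumption-laden example (a hypothetical all-zero block and round key, built with gcc and the -maes flag), not Intel's reference usage:

    #include <stdio.h>
    #include <stdint.h>
    #include <wmmintrin.h> /* AES-NI intrinsics; build with: gcc -maes aesni_demo.c */

    int main(void)
    {
        /* Hypothetical all-zero block and round key, for illustration only. */
        uint8_t block[16] = {0}, round_key[16] = {0};

        __m128i state = _mm_loadu_si128((const __m128i *)block);
        __m128i rk    = _mm_loadu_si128((const __m128i *)round_key);

        /* AESENC: one full AES round (ShiftRows, SubBytes, MixColumns,
           AddRoundKey) executed as a single hardware instruction. */
        state = _mm_aesenc_si128(state, rk);

        _mm_storeu_si128((__m128i *)block, state);
        for (int i = 0; i < 16; i++)
            printf("%02x", block[i]);
        printf("\n");
        return 0;
    }

A real AES implementation would run ten or more such rounds with a proper key schedule; the point is that each round collapses into one instruction rather than a series of software table lookups.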

The Xeon E7 chips are garnering a lot of support from server makers with more than 35 E7-based platforms already shipping. The list of OEM partners includes AMAX, Bull, Cisco, Cray, Dawning, Dell, Fujitsu, Hitachi, HP, Huawei, IBM, Inspur, Lenovo, NEC, Oracle, PowerLeader, Quanta, SGI, Supermicro and Unisys.

To get a better sense of how this news affects the HPC space, check out our feature coverage. Editor Michael Feldman explains that while “the principal destination for these chips will be ‘mission-critical’ enterprise servers…, a number of vendors — SGI, Cray, Supermicro, and AMAX, thus far — are also using the E7s to build scaled-up HPC machinery.”

Albert Einstein Institute Sees Stars with New Cluster

The Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Potsdam, Germany, has inaugurated a new high performance computer, named “Datura.” The ceremony took place during a symposium about “German High Performance Computing in the new Decade,” where leaders from different institutions met to exchange ideas.

The 25.5 teraflop machine contains 2,400 processor cores in 200 servers and comes equipped with 4.8 terabytes of memory. The system is built on NEC’s LX series, which relies on standard components and open-source software.
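The figures are internally consistent. Assuming six-core Westmere parts clocked at 2.66 GHz and retiring 4 double-precision flops per core per cycle (our assumption; the announcement does not give the clock speed), the peak works out to:

    2,400 cores × 2.66 GHz × 4 flops/cycle ≈ 25.5 teraflops

The memory figure likewise implies 4.8 TB / 200 servers = 24 GB per server.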

Datura will be used to simulate collisions of black holes and neutron stars. Prof. Luciano Rezzolla, head of the Numerical Relativity Group, expounds on the significance of the new system:

“By studying the behaviour of neutron stars and black holes for a longer period of time in our ‘virtual laboratory’ we expect to find new phenomena. Moreover we will be able to produce even more precise predictions for the characteristic forms of gravitational wave signals, because we can model the motion of these in-spiralling neutron stars and black holes for a longer period of time.”

TACC Welcomes ‘Lonestar 4’ Supercomputer

The Texas Advanced Computing Center (TACC) deployed its newest Lonestar supercomputer this week. Lonestar 4 replaces the previous Lonestar system, which was a productive part of the NSF TeraGrid network for almost four years. The new supercomputer is the result of a $12 million project that involved multiple partners, including the National Science Foundation (NSF), The University of Texas at Austin, The University of Texas System, the UT Institute for Computational Engineering and Sciences, Texas A&M University, and Texas Tech University.

Vendor partners Dell, Intel, Mellanox Technologies and DataDirect Networks all contributed to creating one of the most powerful academic supercomputers in the world. Lonestar 4 comprises 1,888 Dell PowerEdge M610 blade servers, each with two six-core Intel Xeon 5600 “Westmere” processors. Additional specs include 44.3 terabytes of total memory and 1.2 petabytes of raw disk. With 302 teraflops of processing power, Lonestar 4 is the third-largest system on the NSF TeraGrid. It will provide almost 200 million processor core hours per year to the national scientific community.
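Those numbers hang together. Two six-core sockets per blade give 1,888 × 12 = 22,656 cores; assuming the 3.33 GHz Xeon 5600 variant at 4 double-precision flops per core per cycle (our inference; the article does not specify the clock), the arithmetic checks out:

    22,656 cores × 3.33 GHz × 4 flops/cycle ≈ 302 teraflops
    22,656 cores × 8,760 hours/year ≈ 198 million core hours/year

which matches both the quoted peak and the “almost 200 million” allocation figure.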

While Lonestar 4 can and will be used to support a multitude of scientific disciplines, it will be particularly adept at modeling solid earth geophysics, where specific tasks involve seismic wave propagation, mantle convection and the dynamics of polar ice sheets.

Omar Ghattas, the Jackson Chair in Computational Geosciences in the departments of Geological Sciences and Mechanical Engineering and in the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin, cited further evidence of Lonestar’s value to the geoscience community:

“Geophysical simulations are characterized by a number of computational challenges, including a wide range of length and time scales, highly heterogeneous media, a need for dynamically adaptive resolution and assimilating sparse observational data into the simulations. All of these significantly stress the hardware system. Lonestar 4’s much greater memory bandwidth, faster CPU clock speed, and faster interconnect relative to other TeraGrid systems combine to promise substantially faster turn-around time for our simulations.”

Penguin Computing Introduces New Server to Wall Street

At the 8th Annual HPC Linux Financial Markets Conference (aka HPC on Wall Street) in New York City this week, Penguin Computing debuted its Altus 1750 server, a dual-socket 1U system purpose-built around the fastest clock speeds available for AMD Opteron x86 chips. In addition to its high-frequency CPUs, the server’s dense design and low power draw make it a good fit for high-frequency trading and other low-latency applications.

Penguin positions the system as a competitive price/performance option among comparable high-clock-speed (including over-clocked) systems, and, according to the press release, the Altus 1750 is the only platform of its kind built on AMD Opteron CPUs.

Penguin Computing CEO Charles Wuischpard comments on the server:

“Altus 1750 combines AMD’s industry leading multicore processors with raw GHz performance that’s uniquely ours. As an AMD Platinum Elite partner we are fully committed to providing best-in-class AMD solutions for the scientific and financial communities.”

NextIO and Bright Computing Combine Talents

Another solution aimed at the financial community was unveiled at the HPC Linux Financial Markets Conference this past week, this one from NextIO and Bright Computing. The duo announced a joint GPGPU cluster computing and cluster management solution that will leverage Bright Computing’s software to monitor metrics from NextIO’s GPU-based processing products. Two new NextIO appliances, vCORE Express and vCORE Extreme, will ship with Bright Cluster Manager, Bright Computing’s cluster management software.

NextIO solutions employ GPU technology to accelerate computationally intensive applications, like those found in business, oil and gas, high performance computing, digital media and financial services. Andy Walsh, director of Tesla Marketing at NVIDIA, conveys the relevance of GPU computing to the financial industry:

“The banking and financial services sector relies on computation to stay competitive and many firms are looking to GPU computing to accelerate their applications. The NextIO system combined with the Bright Cluster Manager responds to this need, giving firms powerful cluster performance and ease of manageability.”

Dr. Matthijs van Leeuwen, CEO of Bright Computing, explains that Bright Cluster Manager gives NextIO customers “full visibility of all metrics down to the individual GPU, as part of an intuitive, GUI-driven provisioning, monitoring, and management capability,” adding that “NextIO customers benefit from incredible compute power, without nasty surprises or system management headaches.”
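Bright’s GPU metric collection is proprietary, but the kinds of per-GPU readings van Leeuwen describes (temperature, utilization and so on) are exposed by NVIDIA’s NVML library, which management tools of this sort typically build on. A minimal sketch, not Bright’s actual code:

    #include <stdio.h>
    #include <nvml.h> /* NVIDIA Management Library; link with -lnvidia-ml */

    int main(void)
    {
        unsigned int i, count, temp;
        nvmlUtilization_t util;
        nvmlDevice_t dev;

        if (nvmlInit() != NVML_SUCCESS) {
            fprintf(stderr, "NVML initialization failed\n");
            return 1;
        }
        nvmlDeviceGetCount(&count);
        for (i = 0; i < count; i++) {
            /* Query each GPU individually: temperature and utilization. */
            nvmlDeviceGetHandleByIndex(i, &dev);
            nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
            nvmlDeviceGetUtilizationRates(dev, &util);
            printf("GPU %u: %u C, %u%% busy\n", i, temp, util.gpu);
        }
        nvmlShutdown();
        return 0;
    }

A cluster manager would sample such readings on every node on a schedule and aggregate them centrally; the per-device handle is what makes “down to the individual GPU” visibility possible.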
