The Weekly Top Five – 01/20/2011

By Tiffany Trader

January 20, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Durham University’s newest “Cosmology Machine”; NetApp’s Engenio acquisition; the 2010 ACM Turing Award winner; SGI’s ArcFiniti storage archive; and an MRAM data storage advance worthy of patenting.

‘Cosmology Machine’ Helps Solve Riddles of Universe

Durham University’s renowned Institute for Computational Cosmology (ICC) is now home to a powerful new server and storage cluster. The fourth-generation “Cosmology Machine,” aka COSMA4, will enable researchers to perform fine-grained simulations of the Universe, leading to greater knowledge of galaxies, stars and planets. A team of 20 cosmology researchers makes up the cluster’s main user base, although there are 100 registered users in total.

Professor Carlos Frenk, director of the ICC, commented on the significance of the new system:

“Unlike other sciences it is very difficult to ‘test’ theories on the Universe. Brain power alone is not enough to calculate the complex algorithms. However, our new server and storage cluster does enable us to experiment with the Universe and answer fundamental questions that we all have about our cosmic environment, how does gravity operate and how does the Universe expand, for example.”

The server and storage cluster features 220 IBM iDataPlex servers and eight IBM DS3500 storage devices. With 25 teraflops of computing power, the COSMA4 is seven times faster than its predecessor, COSMA3, and 50 times faster than COSMA2, now decommissioned.

The cluster was designed to be powerful but eco-friendly. It is currently running at 91 percent efficiency, as measured by the Linpack benchmark, and has an energy-efficiency rating of 400 megaflops per watt of power consumed, equivalent to a 19th-place ranking on the most recent Green500 list. To illustrate the machine’s green credentials, it uses about the same power as COSMA2, despite being 50 times faster.
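
For readers who like to check the numbers, the short Python sketch below works backwards from the quoted figures to the power draw they imply. It assumes the 25-teraflop figure is peak performance and that the 400 megaflops-per-watt rating applies to the sustained Linpack number; neither assumption comes from the announcement itself.

```python
# Back-of-the-envelope check of COSMA4's quoted figures (illustrative only).
# Assumptions (not stated in the announcement): 25 teraflops is peak
# performance, and the 400 MFLOPS/W rating applies to the Linpack number.

PEAK_TFLOPS = 25.0          # quoted computing power
LINPACK_EFFICIENCY = 0.91   # quoted Linpack efficiency
MFLOPS_PER_WATT = 400.0     # quoted energy-efficiency rating

linpack_mflops = PEAK_TFLOPS * 1e6 * LINPACK_EFFICIENCY    # teraflops -> megaflops
implied_power_kw = linpack_mflops / MFLOPS_PER_WATT / 1e3  # watts -> kilowatts

print(f"Linpack performance: {linpack_mflops / 1e6:.2f} teraflops")
print(f"Implied power draw:  {implied_power_kw:.1f} kW")
# Roughly 22.75 teraflops sustained and about 57 kW under these assumptions.
```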

NetApp Takes Over Engenio Business from LSI

NetApp has announced an agreement to purchase the Engenio external storage systems business of LSI Corporation for $480 million in cash. NetApp officials believe they can extract profit from the Engenio line in emerging markets such as video and high performance computing applications. If all goes as planned, the deal will be closed in 60 days. According to the official announcement, the transaction will be reflected on NetApp’s earnings statement by the end of the second quarter of its 2012 fiscal year.

President and CEO of NetApp, Tom Georgens, highlighted some possible benefits to the company:

“We’re excited about the acquisition of the Engenio business and the opportunity to significantly expand our addressable market and generate greater revenue growth. Our customers and partners have helped us emerge as an innovation leader and one of the fastest growing storage vendors in shared, virtualized IT infrastructures. With Engenio we will have a strategic storage platform to capitalize on new, high-growth opportunities that we don’t currently reach with our FAS platform. NetApp also gains a proven OEM-based revenue stream that is run by a talented Engenio team. We believe that the synergies between NetApp and Engenio will create a compelling combination that will help us continue to scale our business and fuel our continued growth.”

As part of the deal, NetApp will keep the Engenio engineering team. The Engenio business unit will be folded into NetApp’s business functions under the direction of Manish Goel, executive vice president of NetApp Product Operations. The NetApp and Engenio sales teams will be merged.

For additional coverage of this story, check out HPCwire Editor Michael Feldman’s in-depth analysis.

Harvard Professor Receives ACM Turing Award

This week, Harvard professor Leslie G. Valiant was named the winner of the 2010 ACM A.M. Turing Award. The Association for Computing Machinery, or ACM, selected Valiant “based on his fundamental contributions to the development of computational learning theory and to the broader theory of computer science.”

An innovator in machine learning, Valiant has made numerous important contributions to the field of artificial intelligence, including natural language processing, handwriting recognition, and computer vision. He has also developed models for parallel and distributed computing and is currently working on the forefront of computational neuroscience.

Valiant even helped create the technology behind IBM’s Watson computer, which achieved a win on the popular Jeopardy quiz show last month, when it took part in an exhibition match against two champion human players.

“Leslie Valiant’s accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence and led to extraordinary achievements in machine learning,” remarked ACM President Alain Chesnais.

“His profound vision in computer science, mathematics, and cognitive theory have been combined with other techniques to build modern forms of machine learning and communication, like IBM’s ‘Watson’ computing system, that have enabled computing systems to rival a human’s ability to answer questions,” Chesnais added.

The Turing Award, named after British mathematician Alan M. Turing, carries a $250,000 honorarium, sponsored by Intel Corporation and Google Inc.

SGI Announces ArcFiniti Storage Archive

SGI debuted its ArcFiniti archive storage solution this week, which the company is marketing as “a fully integrated disk-based solution that targets the exploding problem of unstructured, file-based data sprawl.” The release comes one year after SGI acquired the COPAN MAID technology and marks the company’s second set of enhancements to that technology.

According to SGI officials, ArcFiniti is available in five configurations, ranging from 156 TB to 1.4 PB of usable archive capacity. At the high end, a single 1.4 PB rack represents as much data as 20 years of HD-TV video or 10 billion photos on Facebook. All files are always network-accessible.
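
The capacity comparisons are easy to sanity-check. The sketch below (our own rough arithmetic, not SGI’s) assumes decimal petabytes and continuous video playback, and backs out the per-photo size and HD bitrate those comparisons imply.

```python
# Rough check of SGI's 1.4 PB comparisons (illustrative assumptions only).
# Assumes decimal units (1 PB = 1e15 bytes) and continuous video playback.

CAPACITY_BYTES = 1.4e15     # 1.4 PB of usable archive capacity
PHOTOS = 10e9               # "10 billion photos on Facebook"
YEARS_OF_VIDEO = 20         # "20 years of HD-TV video"

bytes_per_photo_kb = CAPACITY_BYTES / PHOTOS / 1e3
seconds_of_video = YEARS_OF_VIDEO * 365.25 * 24 * 3600
implied_bitrate_mbps = CAPACITY_BYTES * 8 / seconds_of_video / 1e6

print(f"Implied size per photo:   {bytes_per_photo_kb:.0f} KB")
print(f"Implied HD video bitrate: {implied_bitrate_mbps:.1f} Mbit/s")
# About 140 KB per photo and ~18 Mbit/s video, both plausible figures.
```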

Brent Van Scyoc, vice president of the Federal Solutions Group at Alliance Technology Group, commented on the release:

“SGI is delivering a very elegant and simple solution to what is a common but complex challenge for many of our customers. Customers need a reliable, scalable archive solution that can be dropped into an existing work environment and grow with the business. SGI ArcFiniti delivers such a solution, and we are excited to work with SGI to bring this to our customers.”

German Research Center PTB Patents Faster MRAM Design

Researchers at Physikalisch-Technische Bundesanstalt (PTB), Germany’s national metrology institute, have invented a super-fast MRAM data storage mechanism.

According to the release:

Magnetic random access memories (MRAM) are among the most important new entrants in the computer storage market. Like the well-known USB stick, they store information persistently, but MRAMs offer short access times and unlimited write cycles. Commercial MRAMs have been on the market since 2005. They are, however, still slower than their competitors among the volatile storage media.

PTB researchers have created a speedier MRAM, employing a special chip connection to reduce response times from 2 ns to below 500 ps. In terms of real-world data rates, that is roughly equivalent to going from 400 Mbit/s up to 2 Gbit/s.
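
As a rough illustration (our assumption, not PTB’s), if each access cycle transfers one bit, the achievable data rate scales with the reciprocal of the access time, which lines up reasonably well with the quoted figures:

```python
# Naive estimate of per-cell data rate from MRAM access time, assuming one
# bit is transferred per access cycle (an illustrative assumption only).

def data_rate_gbps(access_time_s: float) -> float:
    """Data rate in Gbit/s for a given access time in seconds."""
    return 1.0 / access_time_s / 1e9

print(f"2 ns access:   ~{data_rate_gbps(2e-9):.2f} Gbit/s")
print(f"500 ps access: ~{data_rate_gbps(500e-12):.2f} Gbit/s")
# Yields ~0.5 Gbit/s and 2 Gbit/s, close to the quoted 400 Mbit/s and 2 Gbit/s.
```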

The announcement explains that current DRAM and SRAM solutions lose their memory if there is an interruption in the power supply. The MRAM, however, could be immune to such disruptions because information is not stored in the form of an electric charge, but is retained using magnetic spins. That is why MRAMs are considered universal storage chips. In addition to non-volatile storage, the technology offers such benefits as “faster access, a high integration density and an unlimited number of writing and reading cycles.”

The European patent is being granted this spring, while the US patent was granted in 2010. The researchers are currently looking for an industrial partner to take on development and manufacturing responsibilities.
