The Week in Review

By Tiffany Trader

October 7, 2010

Here is a collection of highlights from this week’s news stream as reported by HPCwire.

Unclassified Computing Scales to New Heights at Livermore Lab

Netlist Accelerates MSC.Software Simulation Performance with HyperCloud Memory

Supercomputers Assist Cleanup of Decades-Old Nuclear Waste

VELOX Project Launches First Fully Integrated Transactional Memory Stack

LONI Installing High-Speed Network Resources for SCinet

HP Expands Converged Infrastructure Portfolio

ScaleMP Extends SMP Capabilities to IBM's X3850 X5 Servers

Léo Apotheker Named CEO and President of HP

BOXX Mobile Workstation Sets New Record in Cadalyst Labs

NVIDIA Announces New Quadro Graphics Solutions

Rogue Wave Acquires Performance Optimization Vendor Acumem

University of São Paulo Accelerates Drug Research with SGI Altix XE

Powerful Supercomputer Peers into the Origin of Life

Major Russian State Bank to Invest in T-Platforms Group

Green HPC Center Breaks Ground in Holyoke, Mass.

This week, the Massachusetts high-performance computing collaborative took a big step forward. On Tuesday, participants gathered to celebrate the groundbreaking of the new Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Mass. The project brings together partners from government, academia, and industry, including the State of Massachusetts, the University of Massachusetts, Massachusetts Institute of Technology, Boston University, Harvard University, and Northeastern University in Boston, as well as vendors Cisco Systems Inc. and EMC Corp. Local high school students were in attendance as well, commemorating the event with a time capsule.

The project has an estimated price tag of $168 million, with about $80 million of that going to the datacenter itself. Corporate investors EMC and Cisco are each contributing $2.5 million, the state has pledged $25 million, and the universities will contribute a total of $40 million. The center aims to serve up compute-intensive applications in an environmentally friendly manner. Areas of research include life sciences, clean energy, climate change, the arts, and more.

Governor Deval Patrick unveiled the site’s location in August, placing it on Bigelow Street in Holyoke’s downtown canal district. The project development team was attracted to Holyoke due to the availability of low-cost hydroelectric power from the Connecticut River.

While the site itself may only create a couple dozen jobs, the real potential is its ability to draw industry into the area, pumping up the overall economy. Governor Patrick explained that the center will serve as a magnet for growth with a potential for creating breakthrough technologies.

In an interview on a public radio station, Patrick was asked by a listener how the HPC center would help stimulate the economy. Here is part of his reply:

“If you are in biotech, if you are in clean tech, if you are in pharmaceuticals, you need high performance computing in order to do the modeling for your projects; that’s how it’s done these days.

“It’s the biggest, fastest computing center in the eastern part of the country… So when folks say, we got a major project we gotta get done, they’re going to say, we gotta go to Holyoke. That’s a pretty big statement about what it is we’re trying to do here in Western Massachusetts and in the Commonwealth.”

The 75-minute event concluded with the unveiling of a sign affixed to the former Mastex Industries building, which reads: “Future Home of the Massachusetts Green High Performance Computing Center.” The center was initially expected to be completed by late 2011, but John T. Goodhue, the interim executive director of the center, said that he expects construction to last until 2012.

The project maintains an official website, Innovate Holyoke, which carries the latest information related to the center.

GENCI Orders World-Class Supercomputer

The Partnership for Advanced Computing in Europe, PRACE, and GENCI — the French national High-Performance Computing organization — have ordered a new high-performance computing system from supercomputer-maker Bull. The new system will be named Curie, as a tribute to Pierre and Marie Curie, physicists who contributed significantly to our modern scientific understanding.

CEO of GENCI, Catherine Rivière, commented on the announcement:

“Nowadays, intensive computing is a key element in national competitiveness, both in scientific and industrial domains. With technical support from CEA (Commissariat à l’Energie Atomique et aux Energies Alternatives), through a competitive tendering process, we were able to assess the excellence of Bull’s offering. This means we will soon have at our disposal a machine that will offer French and European scientists the resources they need to carry out their research work at the highest possible level in a highly competitive global environment.”

The bullx supercomputer employs a modular, general-purpose architecture capable of 1.6 petaflops, supporting a variety of applications in the fields of high-energy physics, chemistry, biology, climate research and medicine. Capable of over one million billion operations per second, the machine is the most powerful European supercomputer ever ordered and would place among the top three systems on the current TOP500 list.

Curie will have 5,040 blades equipped with the latest Intel Xeon processors, for a total of 90,000 processors. The computer's I/O system will enable it to store over 10 petabytes of data at speeds of up to 250 GB/s.
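The figures quoted above can be sanity-checked with some back-of-envelope arithmetic; the per-blade and streaming-time numbers below are derived here for illustration, not taken from the announcement:

```python
# Illustrative arithmetic on the published Curie figures.
peak_flops = 1.6e15     # 1.6 petaflops peak
blades = 5040           # blade count from the announcement
storage_bytes = 10e15   # 10 petabytes of storage
io_rate = 250e9         # 250 GB/s aggregate I/O bandwidth

# Derived values (assumptions, not article figures):
per_blade_gflops = peak_flops / blades / 1e9       # ~317 gigaflops per blade
full_read_hours = storage_bytes / io_rate / 3600   # ~11 hours to stream 10 PB

print(f"~{per_blade_gflops:.0f} GF per blade; "
      f"~{full_read_hours:.1f} h to stream the full 10 PB at peak I/O")
```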

Philippe Vannier, chairman and CEO of Bull, weighs in:

“The fact that GENCI has ordered a very large-scale bullx supercomputer to support its involvement in the PRACE program is very satisfying for Bull on two counts. Firstly it demonstrates the excellence that our engineers have achieved in technologies that go into the most powerful supercomputers on the planet. But over and above this, it carries within it the seeds of our own aim: to build a large-scale European ecosystem to support innovation, because we are convinced that technological supremacy is our best asset when it comes to facing up to global competition and ensuring the creation of high-level employment here in Europe.”

Curie is the second petascale supercomputer financed by GENCI (Grand Equipement National de Calcul Intensif), one of the founding members of PRACE. It will be located near Paris and housed in a new computing center, the Très Grand Centre de Calcul (TGCC), operated by CEA (Commissariat à l’énergie atomique et aux énergies alternatives). The new supercomputer will extend the PRACE research infrastructure that started with Jugene in Germany, fulfilling PRACE’s goal of providing world-class resources for the European scientific and industrial communities.

The installation of Curie will be completed in two phases: the first before the end of the year and the second in October 2011. The system will be available for European users through the next PRACE call for proposals starting in November 2010.
