European HPC Industry in Need of Revitalization

By Christopher Lazou

September 29, 2006

The UK's Atomic Weapons Establishment (AWE) hosted an excellent HPC Europe Workshop at St Catherine's College, Oxford. This interactive workshop, which took place on September 25-26, discussed issues of concern to Europeans working in the HPC area. It brought together top experts in HPC — restricted to fewer than 10 from each European country — to share intelligence and develop common strategies, in the hope of collectively influencing the future direction of European HPC. Invitations to speak or attend were made by a country representative, who also advised on the agenda. This year, about 80 delegates attended from both the customer and supplier segments. Non-European vendors were present only for the last sessions.

The previous workshop, held at Maffliers, Paris in 2004, focused on the strengths and weaknesses of high performance technical computing in Europe. This year the focus was on strengthening HPC in Europe.

The first day consisted of 17 European presentations to set the scene on the state of HPC in Europe. The second day consisted of 12 vendor presentations and a vendor panel discussion. To give the reader a flavour of the proceedings, below are a few examples of how the European user community is satisfying its computing needs.

Professor Richard Kenway of the University of Edinburgh said: “Some areas of science are limited by computer performance. They demand sustained speeds of petaflops or more, now. This presents a greater challenge than the recent step to teraflops, because massive parallelism must be delivered at an affordable price, and many codes running today on the UK HPCx teraflops system will not scale up to run efficiently on ten to hundred times more processors. Hardware that is tailored to the application, and/or serendipitous use of cheap commodity parts developed for some other purpose, will be needed to keep machine costs down, and software will need to be re-engineered to use it efficiently. These factors are driving us towards a diversity of architectures, international facilities and community code bases. There is scope for innovative solutions both from small players and from traditional vendors, as we have seen in the QCDOC, FPGA and Blue Gene/L projects at Edinburgh's Advanced Computing Facility. This growing diversity gives Europe the opportunity to re-enter the high-performance computing arena”.

Thomas Lippert, from the John von Neumann-Institute for Computing (NIC), Germany, said: “Presently, we witness a rapid transition of cutting edge supercomputing towards highest scalability, utilizing up to hundreds of thousands of processors for cost-effective capability computing…. There is hot debate within the community as to whether the advent of highly scalable systems like Blue Gene/L and the emergence of a more heterogeneous hardware landscape, signal the onset of a paradigm shift in HPC. Still, there are HPC problems that are less scalable by nature and might require intricate communication capabilities or a large non-distributed memory space with extremely fast memory access. NIC has recently complemented its 9 teraflops general purpose SMP-cluster, with a 46 teraflops Blue Gene/L supercomputer, which is currently one of the fastest computers in Europe. Both systems share a huge global parallel file system, which is part of the file system of the European DEISA alliance. With this configuration, the NIC is able both to meet the requirements of a very broad range of projects and to support a selected number of high-end simulation projects.” Lippert then presented simulation examples from materials science demonstrating the added value through this heterogeneous hardware approach and NIC's plans for joining the European e-science ecosystem.

Several specific applications were also presented by other speakers. Artem Oganov of ETH Zurich presented: “USPEX — an evolutionary algorithm for crystal structure prediction”. He described how their simulations found new lowest-energy phases of planetary materials at extreme pressures, identifying structures where experimental data are insufficient. This algorithm has the potential for designing new materials entirely on the computer. He discussed some of the applications of this method for a number of substances (C, N, O, S, H2O, MgSiO3, CaCO3, MgCO3) and possible industrial uses.
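To illustrate the general idea only (the details of USPEX were not covered in the talk summary above, so the following is a hedged sketch with hypothetical names and a toy energy function rather than the actual method), an evolutionary structure search keeps a population of candidate structures, ranks them by computed energy, and breeds new candidates from the fittest:

```python
# A minimal sketch of a generic evolutionary structure search of the kind
# USPEX applies to crystals. The "structure" here is just a vector of
# parameters and the energy function is a toy placeholder; a real run would
# relax each candidate and compute its enthalpy with an ab initio code at the
# target pressure. All names and sizes are illustrative, not USPEX's own.
import random

N_PARAMS = 6          # stand-in for lattice/atomic degrees of freedom
POP_SIZE = 20
GENERATIONS = 50

def random_structure():
    return [random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]

def energy(structure):
    # Placeholder fitness; lower is better.
    return sum(x * x for x in structure)

def heredity(parent_a, parent_b):
    # Crossover: combine complementary "slabs" of two parents.
    cut = random.randint(1, N_PARAMS - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(structure, strength=0.1):
    return [x + random.gauss(0.0, strength) for x in structure]

population = [random_structure() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=energy)     # rank candidates by energy
    survivors = ranked[: POP_SIZE // 2]         # keep the fittest half
    children = [mutate(heredity(random.choice(survivors),
                                random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = min(population, key=energy)
print("lowest-energy candidate:", [round(x, 3) for x in best])
```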

Mark Savill of Cranfield University, UK, focused on the recent usage of the National Supercomputer by the UK Applied Aerodynamics HPC Consortium. Flagship computations of aircraft engine components and whole aircraft configurations were discussed — especially a project to simulate vertical descent of a Harrier model for hot-gas ingestion studies.

Reinhard Budich from the Max Planck Institute, Germany, talked about the European Network for Earth System Modelling (ENES) and the current situation at the German climate computing centre. After describing their current activities, he went on to say that hardware is the cheapest component; software and data management are the real barriers to achieving their goals. Climate is very high on the European political agenda, especially understanding the dynamics of the human impact of climate change. ENES is involved in discussions with 30 institutes worldwide to define the work, expected to start in 2009, for the IPCC AR5 planned for the 2012/13 timeframe.

In the hardware systems field, Piero Vicini of INFN, Rome described “The APE project”. Over the last 20 years, the INFN APE group (APE is an acronym for “Array Processor Experiment”) has been involved in the development of massively parallel supercomputers dedicated to LQCD (Lattice Quantum Chromodynamics), a typical “killer” application for a general-purpose supercomputer. ApeNEXT is a fourth-generation system, capable of a peak performance of 5 teraflops with a sustained efficiency of more than 50 percent for key applications. It shows impressive flops/watt and flops/volume ratios at a cost of half a Euro per megaflops. Vicini claimed APE is a highly capable European HPC system in the same application space as Blue Gene/L, used in collaborative LQCD work by teams in the USA, UK and Japan. The next APE system aims at petaflops performance for a larger class of scientific and engineering applications.
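Taking the quoted figures at face value, a quick back-of-the-envelope calculation gives a feel for what they imply; this is a sketch only, since the talk did not say whether the half-Euro-per-megaflops ratio is quoted against peak or sustained performance, so both readings are shown:

```python
# Reading the quoted apeNEXT figures at face value (illustrative only; the
# talk did not state whether the half-Euro-per-megaflops ratio refers to peak
# or sustained megaflops, so both interpretations are computed).
peak_tflops = 5.0
efficiency = 0.50                     # ">50 percent" sustained on key LQCD codes
cost_per_mflops_eur = 0.5

sustained_tflops = peak_tflops * efficiency
peak_mflops = peak_tflops * 1e6
sustained_mflops = sustained_tflops * 1e6

print(f"sustained performance: {sustained_tflops:.1f} teraflops")
print(f"cost if per peak megaflops:      {cost_per_mflops_eur * peak_mflops / 1e6:.2f} M EUR")
print(f"cost if per sustained megaflops: {cost_per_mflops_eur * sustained_mflops / 1e6:.2f} M EUR")
```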

Claude Camozzi of Bull, France talked about: "FAME2: a Pole de Compétitivité project towards petaflops computing". This collaborative project from the French pôle de compétitivité System@tic aims to provide an emulation infrastructure allowing industrial and academic research laboratories to anticipate the availability of COTS-based nodes for petaflops systems. It should enable software developers to create and adapt tools and innovative applications for this new scale of computing power. The efficiency of some hardware accelerators (from European vendors, e.g., ClearSpeed and APE) will also be evaluated in order to provide guidelines on how to provide "capability system features" on "capacity systems". They are also looking at new database concepts using native XML and new multilingual research tools. Their goal is to be able to retrieve any reference from a 50 terabyte database within a couple of seconds. Camozzi also described the federative aspect of this project at the French level and made suggestions on how to open up and leverage this collaborative effort across Europe.
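As a rough illustration of what that retrieval target implies (only the 50 terabyte size and the couple-of-seconds goal come from the talk; everything else below is assumed for the sake of the arithmetic), brute-force scanning is effectively ruled out and some form of indexing is required:

```python
# Back-of-the-envelope check on the stated retrieval goal. Only the 50 TB and
# ~2 s figures come from the talk; the index parameters below are hypothetical.
import math

database_tb = 50.0
target_seconds = 2.0

# A brute-force scan would need roughly this much sustained I/O bandwidth,
# which is why the goal implies index-based retrieval rather than scanning.
scan_bandwidth = database_tb / target_seconds
print(f"full scan would need ~{scan_bandwidth:.0f} TB/s of I/O bandwidth")

# With a tree index, lookup cost grows with index depth, not data volume:
entries, fanout = 1e10, 1000          # hypothetical: 10 billion entries, fan-out 1000
print(f"indexed lookup: ~{math.ceil(math.log(entries, fanout))} node reads per query")
```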

As stated above, the workshop's main focus was to exchange experiences and views on how to strengthen HPC in Europe. The issues included: What do Europeans do best in HPC, whether software development for applications specific to Europe, hardware components and integration, or total solution integration? What issues arise from using non-European HPC; for example, what would happen if there were trade restrictions on high-technology exports from the USA to certain European countries? How do Europeans optimise their relationship with non-European vendors? And lastly, what HPC projects can best be done at a European level but cannot be done well at a national level?

To put this in context, a substantial number of people attending this workshop were either representatives or directors of national large-scale computing facilities currently delivering teraflops of sustained performance on Bull, Cray, IBM, and NEC systems. A number of these participants expressed strong concern that Europe is falling behind the USA, Japan and Asia in using HPC as a strategic resource to achieve economic competitiveness.

A glance at the Top500 list provides evidence that Europe is lagging far behind the United States and Japan in supercomputers. Other indicators cited include patents and published research papers. This is very alarming and is a direct consequence of setbacks in large 'computational projects' at the beginning of the 1990s when the European intensive computing industry collapsed. Today only a few small European computer businesses survive. For example, Meiko collapsed but was bought by the Italian firm Finmeccanica and renamed Quadrics. This company is today producing the 'Rolls Royce' of networks. ClearSpeed is also a spin-off of the failed UK INMOS Transputer effort of the late 1980s. In France, a revitalised Bull is coming back to the forefront with the TERA-10 machine. This system delivered 12.5 teraflops sustained performance on the CEA/DAM benchmark.

As Jean Gonnord, head of the Numerical Simulation Project and Computing at CEA/DAM said: “With an almost non-existent industrial framework and lack of any real strategy, Europeans are using a 'cost base' policy in intensive computing. Laboratories are investing in HPC using their own research funding, so naturally the aim is to get the cheapest machines. This has some odd effects: users practise self-censorship and depend on the American and Japanese makers to define what tomorrow's computing will be like, and this makes Europe fall even farther behind”.

In other words, HPC is of the highest and most pervasive strategic importance. As I wrote in my book 20 years ago: “It enables scientists to solve today's problems and to develop the new technology for tomorrow's industry, affecting national employment patterns and national wealth”. It is also the main tool for the simulation and stewardship of nuclear weapons and delivery systems. Thus HPC is intertwined with national policies spanning the whole spectrum of national security, the armament industry and the whole industrial civilian economy. It would be perilous for Europe to ignore it.

Europe was very late compared to the USA and Asia in embracing HPC. To make up for lost ground, Europe should implement a more proactive policy in supercomputing, centred on a synergy between defence, industry and research.

There are, however, some positive signs on the horizon. For example, the success story of the TERA-10 project at CEA was based on having a real policy in high performance computing — grouping resources and using the defence-industry-research synergy — and, according to Gonnord, shows the way for France to get back in the HPC race. Gonnord went on: “Times change — and mentalities too! Since the beginning of 2005 we have seen several changes. For example, the French National Research Agency (ANR) has included 'intensive computing' as an aspect in its program and launched a call for projects last July. Nearly fifty projects were submitted last September and have been evaluated. Another sign is that the System@tic competitiveness initiative, of which Ter@tec is one of the key elements, has just launched a project to develop the new generation of computers leading to petaflops. Of course, these efforts do not compare with those undertaken in the United States, but it's a good start”.

The other good news is that a similar initiative is to be launched at the European level. After a year of effort and persuasion, supercomputing is going to reappear in the budget of the 7th European RTD Framework Programme (2007-2013), which should include an industrial aspect. The beacon project in this initiative will be, if accepted, to set up three or four large computing centres in Europe with the mission not just of providing computing for a given scientific theme, but of staying permanently in the top five of the Top500 list. Undoubtedly, this would mean that major numerical challenges could be tackled in the majority of scientific disciplines, leading to major technological breakthroughs.

Existing large scale computing centres in Europe are already preparing the case for hosting a petaflops system. In this respect Jean Gonnord said: “The CEA/DAM-Île-de-France scientific computing complex is a natural candidate to host and organise such a structure. But one thing is sure — all of these projects will only make sense if they are based, like in the United States, Japan and now in China, on a solid local industrial network and a proactive policy of national states and the European Union”.

The invited vendors gave excellent presentations discussing their roadmaps. Cray talked about adaptive supercomputing. NEC talked about HPC solutions using hybrid systems for delivering sustained application performance. IBM talked about its commitment to HPC in Europe. Bull spoke of its plans to deliver petaflops systems. Chip manufacturers Intel, AMD and ClearSpeed presented their future product visions. Other vendors, including Quadrics, also gave talks.

Personally, I find the Cray concept of 'Adaptive Supercomputing' very attractive. It recognises that although multi-core commodity processors will deliver some improvement, exploiting parallelism through a variety of processor technologies, e.g., scalar, vector, multi-threading and hardware accelerators (FPGAs or ClearSpeed), creates the greatest opportunity for application acceleration.

Adaptive supercomputing combines multiple processing architectures into a single scalable system. Looking at it from the user perspective, one has the application program, followed by a transparent interface built from libraries, tools, compilers, scheduling, system management and a runtime system. The adaptive software consists of a compiler that knows what types of processors are available on the heterogeneous system and targets code to the most appropriate processor. The result is to adapt the system to the application — not the application to the system. The “Grand Challenge” is to do this efficiently.
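As a rough illustration of the targeting idea (a sketch only; the processor categories and preference table are hypothetical and not Cray's actual compiler logic), the heart of such a scheme is a mapping from the dominant characteristic of each kernel to the best-suited processor type present in the machine:

```python
# A deliberately simplified sketch of the targeting idea: tag each kernel with
# the characteristic that dominates its performance and route it to the most
# suitable processor type the machine offers. The categories and preference
# table are hypothetical illustrations, not Cray's actual compiler logic.
AVAILABLE = {"scalar", "vector", "multithreaded", "fpga"}   # example system

PREFERENCE = {
    "dense_linear_algebra": ["vector", "multithreaded", "scalar"],
    "irregular_graph":      ["multithreaded", "scalar"],
    "bit_level_stream":     ["fpga", "scalar"],
    "serial_control":       ["scalar"],
}

def place(kernel_kind):
    """Return the first preferred processor type that the system provides."""
    for target in PREFERENCE[kernel_kind]:
        if target in AVAILABLE:
            return target
    return "scalar"                                         # safe fallback

for kind in ("dense_linear_algebra", "irregular_graph", "bit_level_stream", "serial_control"):
    print(f"{kind:22s} -> {place(kind)}")
```

In a real adaptive system this decision is made by the compiler and runtime rather than a lookup table, and doing it transparently and efficiently across very different hardware is precisely the “Grand Challenge” referred to above.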

The beauty of this concept is that once the workload profile is known, a user buys a system with the right mix of hardware to match that profile, and the onus is then on the vendor's system software to deliver high productivity. One assumes that the above vision played a part in Cray's recent sales successes at AWE, CSCS, EPSRC, NERSC and ORNL. Interestingly, these sites are mainly replacing IBM hardware with the Cray XT3.

The HPC community has accepted that some kind of adaptive supercomputing is necessary to support the future needs of HPC users as their need for higher performance on more complex applications outpaces Moore's Law. In fact, the key players in the petaflops initiatives are broadly adopting the concept, despite using distinct heterogeneous hardware paths. The IBM hybrid Opteron-Cell Roadrunner system, to be installed at LANL, is the latest example.

In addition to the above vendor presentations, there was a panel discussion. The panel represented a broad spectrum of the industry: chip vendors AMD, Intel and ClearSpeed; computer vendors Bull, NEC, IBM, Cray and Linux Networx; and service provider T-Systems all explained their positions in the European landscape.

To conclude, there was a strong feeling at this workshop that Europeans should get their act together and find the political will to put in place funding structures that encourage a home-grown HPC industry if they wish to remain competitive players in this pervasive strategic field. For Europeans to carry on as in the recent past would be unwise and perilous in the long term.
