On February 1, 2013, the UK Chancellor of the Exchequer, Rt. Hon. George Osborne, visited the Science and Technology Facilities Council (STFC) site in Daresbury to formally open the Hartree Centre, which will focus on developing software to improve the energy efficiency of supercomputers. Or, to put it another way, “Osborne pulled the string and opened the curtain and unveiled the plaque,” says Mike Ashworth, head of the Hartree Centre.
The ceremonial opening of the Hartree Centre marks a new phase of government and industry collaboration in the development of high-performance computing in the UK. A primary goal is to bring together industry, academia and government organizations to use supercomputers to increase the competitiveness of UK industry.
The ceremony also came with a pledge of funding: more than $45 million to create energy-efficient computing technologies for industrial and scientific applications, especially for supercomputers handling big data projects. About $17 million will go to creating software for the Square Kilometre Array (SKA), the world’s largest radio telescope. The rest falls into two camps: next-generation software for Grand Challenge science projects, and software to help industry make better use of high-performance computing and computational science.
The software research will focus on creating new code to efficiently exploit new computer architectures that will be emerging in the next five to 10 years. “We’re trying to structure that code in a flexible way so that it’s not tied into any one architecture, but reveals multiple levels of parallelism, so that we’re ready to exploit large numbers of lightweight cores [used as] accelerators,” says Ashworth.
The Hartree Centre has not yet decided exactly how the money will be allocated, but the work is likely to include research on Intel Xeon Phi processors, possibly NVIDIA’s latest generation of Kepler GPUs, and very probably FPGAs.
Yes, Ashworth sees new potential in FPGAs for supercomputing. STFC researchers first looked at using FPGAs for HPC about 10 years ago, but the chips weren’t very fast and were difficult to program, requiring hardware-level development in VHDL. Now, of course, the chips are much faster and support double-precision arithmetic, which many scientific applications require. Ashworth notes that he’s “very keen on exploring” technology from Maxeler, which provides high-level interfaces to FPGAs. He wants to explore how to make FPGAs useful for the kinds of research that Hartree will emphasize.
Energy efficiency is a very prominent part of the centre’s mandate. “We’re interested in looking at how key applications perform in terms of their energy efficiency,” says Ashworth. “In the past, computing efficiency meant FLOPS. Now it’s FLOPS per watt. In the past it was time to solution. Now we’re more interested in the number of watts to achieve a certain solution.”
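The shift Ashworth describes, from time to solution to energy to solution, can be made concrete with a little arithmetic. The sketch below uses two entirely hypothetical machines (the power and runtime figures are illustrative, not Hartree’s):

```python
# Energy to solution = average power draw x runtime.
# The two machines below are hypothetical, for illustration only.

def energy_to_solution(power_watts, runtime_seconds):
    """Return the energy (in kilowatt-hours) consumed to finish one job."""
    return power_watts * runtime_seconds / 3.6e6  # joules -> kWh

# Machine A: fast but power-hungry. Machine B: slower but efficient.
a_kwh = energy_to_solution(power_watts=500_000, runtime_seconds=3_600)  # 1 hour at 500 kW
b_kwh = energy_to_solution(power_watts=200_000, runtime_seconds=7_200)  # 2 hours at 200 kW

print(f"Machine A: {a_kwh:.0f} kWh")  # 500 kWh
print(f"Machine B: {b_kwh:.0f} kWh")  # 400 kWh
# Judged by time to solution, A wins; judged by energy to solution, B does.
```

The point of the toy comparison is that the two metrics can rank the same machines differently, which is exactly why the centre treats them as distinct goals.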
This is inspired both by government targets to reduce carbon emissions and by the need to save money. The two, of course, go hand in hand, since both come down to reducing energy consumption.
The centre has some pretty heavy-duty hardware to work with: the UK’s most powerful supercomputer, already being made available for research by industry and scientific organizations through STFC. In mid-2012, STFC installed an IBM Blue Gene/Q system, named Blue Joule, consisting of seven racks with 114,688 1.6 GHz cores and 112 TB of RAM. When it was fired up last summer, it reached 1.2 petaflops, making it the first computer in the UK to pass 1 petaflop. That performance rated it 13th on the TOP500, though it has since slipped to 16th on the most recent list.
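Those headline numbers hang together. Each Blue Gene/Q core can retire 8 floating-point operations per cycle (a quad-wide fused multiply-add, per IBM’s published spec; that figure is not stated in this article), which puts Blue Joule’s theoretical peak a bit above the 1.2 petaflop sustained Linpack result:

```python
# Back-of-the-envelope peak performance for Blue Joule (IBM Blue Gene/Q).
cores = 114_688
clock_hz = 1.6e9
flops_per_cycle = 8  # BG/Q A2 core: 4-wide fused multiply-add (published IBM spec)

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.2f} petaflops")  # ~1.47 PF

# The 1.2 PF reported at power-up is the sustained Linpack number,
# i.e. roughly 80% of the theoretical peak.
linpack = 1.2e15
print(f"Linpack efficiency: {linpack / peak_flops:.0%}")
```

A sustained-to-peak ratio in the low 80s is typical of Blue Gene/Q installations of that era, so the article’s figures are self-consistent.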
That equipment is accompanied by an IBM iDataPlex system, dubbed Blue Wonder, with 8,192 Sandy Bridge cores for 158.7 teraflops of processing power.
STFC didn’t just buy the computer, however; it also got IBM as a partner. “Rather than having vendors just supply us with hardware, we specifically said in the procurement that they must enter into a collaboration with us,” says Ashworth.
In fact, there are several corporate collaborators involved, including Intel, OCF, Mellanox, DataDirect Networks and ScaleMP. Each is contributing some combination of components, services, technical expertise and/or business development expertise. IBM and OCF, for example, help the Hartree Centre find corporate partners to set up joint projects. “When we go into a room with an industrial potential partner, we’ll go in with somebody from IBM,” says Ashworth. “That adds very much to the prospects of landing that business.”
Those partnerships work both ways. One of Hartree’s mandates is to help UK companies make better use of high-performance computing and computational science. To that end, Ashworth wants to focus research on accelerators that can help achieve higher performance at lower cost.
“We see the Hartree Centre as a testing ground for novel architectures,” says Ashworth. “We can buy a piece of hardware, a development platform, and make it available to academics, make it available to industry. In collaboration with our expertise, we learn how to use the hardware, and set up joint projects with people we believe would benefit from that hardware, and push forward the UK’s ability to exploit these new technologies for the future. We’re looking at a 5-10 year time frame to leverage a lot of these technologies.”
The research priorities are environment, energy, developing new materials, life sciences and human health, and security. One of the Grand Challenge projects at STFC, for example, is a three-way collaboration between STFC, the Met Office and the Natural Environment Research Council (NERC) to develop brand-new code for weather forecasting and for climate change studies using supercomputers. Industrial applications might include projects such as computer modeling to create, say, new industrial adhesives or new drugs.
The UK government expects the money invested in Hartree will pay off many-fold by helping industry exploit supercomputing technology to become more competitive.