February 21, 2013
On February 1, 2013, the UK Chancellor of the Exchequer, Rt. Hon. George Osborne, visited the Science and Technology Facilities Council (STFC) site in Daresbury to formally open the Hartree Centre, which will focus on developing software to improve the energy efficiency of supercomputers. Or, to put it another way, "Osborne pulled the string and opened the curtain and unveiled the plaque," says Mike Ashworth, head of the Hartree Centre.
The ceremonial opening of the Hartree Centre marks a new phase of government and industry collaboration in the development of high-performance computing in the UK. A primary goal is to bring together industry, academia and government organizations to use supercomputers to increase the competitiveness of UK industry.
The ceremony also came with a pledge for funding: more than $45 million to develop energy-efficient computing technologies for industrial and scientific applications, especially for supercomputers handling big data projects. About $17 million will go to creating software for the Square Kilometre Array (SKA), the world's largest radio telescope. The rest goes into two camps: next-generation software for Grand Challenge science projects, and software to allow industry to make better use of high-performance computing and computational science.
The software research will focus on creating new code to efficiently exploit the new computer architectures that will emerge in the next five to 10 years. "We're trying to structure that code in a flexible way so that it's not tied into any one architecture, but reveals multiple levels of parallelism, so that we're ready to exploit large numbers of lightweight cores [used as] accelerators," says Ashworth.
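The structure Ashworth describes can be sketched in miniature. The following is a hypothetical Python illustration, not Hartree Centre code: the kernel is written once, exposing coarse-grained task parallelism across chunks and fine-grained data parallelism within each chunk, while the parallel backend (serial here, a process pool, or in principle an accelerator runtime) is swapped in at call time rather than baked into the algorithm.

```python
# Hypothetical sketch of architecture-flexible code structure (assumed
# names; not Hartree Centre software). The numerical kernel is separated
# from the choice of parallel backend.
from concurrent.futures import ProcessPoolExecutor

def saxpy_chunk(args):
    """Fine-grained data parallelism: each chunk is an independent
    vector operation that lightweight cores or an accelerator could run."""
    a, xs, ys = args
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy(a, x, y, executor=None, chunks=4):
    """Coarse-grained task parallelism: split the problem into chunks and
    hand them to whichever executor (backend) is supplied."""
    n = len(x)
    step = (n + chunks - 1) // chunks
    work = [(a, x[i:i + step], y[i:i + step]) for i in range(0, n, step)]
    if executor is None:                 # serial fallback
        parts = map(saxpy_chunk, work)
    else:                                # pluggable parallel backend
        parts = executor.map(saxpy_chunk, work)
    return [v for part in parts for v in part]

if __name__ == "__main__":
    x, y = list(range(8)), [1.0] * 8
    serial = saxpy(2.0, x, y)
    with ProcessPoolExecutor(max_workers=2) as ex:
        parallel = saxpy(2.0, x, y, executor=ex)
    assert serial == parallel  # same answer regardless of backend
```

The point of the design is that retargeting the code to a new architecture means writing a new executor, not rewriting the kernel.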
The Hartree Centre has not yet decided how the funding will ultimately be allocated, but it's likely to include research on Xeon Phi processors, possibly NVIDIA's latest generation of Kepler GPUs, and very probably FPGAs.
Yes, Ashworth sees new potential in FPGAs for supercomputing. STFC researchers first looked at using FPGAs for HPC about 10 years ago, but the chips weren't very fast and were difficult to program, requiring programming at the hardware level using VHDL. Now, of course, the chips are much faster and support double-precision arithmetic, which many scientific applications require. Ashworth notes that he's "very keen on exploring" technology from Maxeler, which offers high-level interfaces to FPGAs. He wants to explore how to make FPGAs useful for the kinds of research that Hartree will emphasize.
Energy efficiency is a very prominent part of the center's mandate. "We're interested in looking at how key applications perform in terms of their energy efficiency," says Ashworth. "In the past, computing efficiency meant FLOPS. Now it's FLOPS per watt. In the past it was time to a solution. Now we're more interested in the number of watts to achieve a certain solution."
This is inspired both by government targets to reduce carbon emissions and to save money – which, of course, go hand in hand, since both involve reducing energy consumption.
The center has some pretty heavy-duty hardware to work with: the UK's most powerful supercomputer, already being made available for research by industry and scientific organizations through STFC. In mid-2012, STFC installed an IBM Blue Gene/Q system, named Blue Joule. It consists of seven racks with 114,688 1.6 GHz cores and 112 TB of RAM. When it was fired up last summer, it reached 1.2 petaflops, making it the first computer in the UK to pass 1 petaflop. That rated it 13th on the TOP500, though it has since slipped to 16th on the most recent list.
That equipment is accompanied by an IBM iDataPlex system, dubbed Blue Wonder, with 8,192 Sandy Bridge cores for 158.7 teraflops of processing power.
STFC didn't just buy the computer, however, it got IBM as a partner. "Rather than having vendors just supply us with hardware, we specifically said in the procurement that they must enter into a collaboration with us," says Ashworth.
In fact, there are several corporate collaborators involved, including Intel, OCF, Mellanox, DataDirect Networks and ScaleMP. Each is contributing some combination of components, services, technical expertise and/or business development expertise. IBM and OCF, for example, help the Hartree Centre find corporate partners to set up joint projects. "When we go into a room with an industrial potential partner, we'll go in with somebody from IBM," says Ashworth. "That adds very much to the prospects of landing that business."
Those partnerships work both ways. One of Hartree's mandates is to help UK companies make better use of high-performance computing and computational science. To that end, Ashworth wants to focus research on accelerators that can help achieve higher performance at lower cost.
"We see the Hartree Center as a testing ground for novel architectures," says Ashworth. "We can buy a piece of hardware, a development platform, and make it available to academics, make it available to industry. In collaboration with our expertise, we learn how to use the hardware, and set up joint projects with people we believe would benefit from that hardware, and push forward the UK's ability to exploit these new technologies for the future. We're looking at a 5-10 year time frame to leverage a lot of these technologies."
The research priorities are environment, energy, developing new materials, life sciences and human health, and security. One of the Grand Challenge projects at STFC, for example, is a three-way collaboration between STFC, the Met Office and the Natural Environment Research Council (NERC) to develop brand new code for weather forecasting and for climate change studies using supercomputers. Industrial applications might include projects such as computer modeling to create, say, new industrial adhesives or new drugs.
The UK government expects the money invested in Hartree will pay off many-fold by helping industry exploit supercomputing technology to become more competitive.