December 14, 2007
EDINBURGH, UK, Dec. 12 -- A unique supercomputer called 'Maxwell' -- built in Scotland by the FHPCA with the support of Scottish Enterprise -- has been recognised at this year's prestigious British Computer Society IT Industry Awards in London.
Nominated in two categories, the FHPCA (FPGA High Performance Computing Alliance) came home with a medal, having been placed runner-up for the coveted BT Flagship Award for Innovation.
The FHPCA was established in 2004 to promote the use of field-programmable gate arrays (FPGAs) as an alternative to microprocessors. With traditional microprocessor-based solutions hitting performance limits, there is growing demand for technologies that deliver ever greater processing capability without consuming large amounts of space and power.
Maxwell uses FPGAs and requires much less space and cooling than a conventional microprocessor system. It is also over 100 times more energy-efficient and up to 300 times faster.
Several Scottish companies have been using Maxwell since its launch in March this year. Impressive results have already been achieved in the oil & gas and medical imaging sectors.
One of the first companies to use the supercomputer, Aberdeen-based Offshore Hydrocarbon Mapping plc (OHM), found that its application ran significantly faster on Maxwell. OHM is the world's leading provider of Controlled Source Electromagnetic Imaging (CSEMI) services to the offshore oil industry.
Dr Lucy MacGregor, Chief Scientific Officer of OHM, said: "Improving the performance of our data processing and visualisation services is key to our continued success and we are very excited about the code speed-ups we've achieved with Maxwell."
Of course, many other sectors of industry could benefit too, particularly the financial sector. To demonstrate the power of FPGAs and the Maxwell system when handling Monte Carlo calculations for the investment banking sector, the Alliance implemented the Black-Scholes model, which is widely used to price stock options. Spectacular results were obtained: the algorithm ran 320 times faster per FPGA on Maxwell than the equivalent code running on the host PC. This Black-Scholes demonstration has shown the potential benefits of FPGAs to the financial sector, and the Alliance is currently pursuing opportunities with several leading investment banks, some of which have been conducting their own experiments with FPGAs.
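The release does not publish the Alliance's FPGA implementation, but the kind of kernel it describes is straightforward to sketch. Below is a minimal Python Monte Carlo pricer for a European call option under Black-Scholes assumptions (the stock price follows geometric Brownian motion); all parameter names and values are illustrative, not drawn from the FHPCA demonstration. Kernels like this map well to FPGAs because each simulated path is independent and can be pipelined in hardware.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=42):
    """Estimate the price of a European call option by Monte Carlo
    simulation under Black-Scholes assumptions.

    s0: spot price, k: strike, r: risk-free rate,
    sigma: volatility, t: time to expiry in years.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t   # drift of log-price over [0, t]
    vol = sigma * math.sqrt(t)           # std dev of log-price over [0, t]
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                   # standard normal draw
        s_t = s0 * math.exp(drift + vol * z)      # terminal stock price
        payoff_sum += max(s_t - k, 0.0)           # call payoff at expiry
    # Discount the average payoff back to today.
    return math.exp(-r * t) * payoff_sum / n_paths

# With enough paths the estimate converges to the closed-form
# Black-Scholes price (roughly 10.45 for these example parameters).
price = mc_european_call(s0=100.0, k=100.0, r=0.05, sigma=0.2,
                         t=1.0, n_paths=200_000)
```

Because every path is an independent draw, the loop parallelizes trivially; on an FPGA the normal-variate generation and payoff accumulation become deep hardware pipelines, which is why speed-ups of the magnitude quoted above are plausible for this workload.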
Dr Mark Parsons, Commercial Director of EPCC said: "Maxwell has been created for businesses so that they can easily investigate FPGAs. We've already seen it give companies a competitive advantage. We now want more businesses to come and test their codes on Maxwell to see whether it will be useful for them too."
Maxwell was built by the FHPCA (FPGA High Performance Computing Alliance). The Alliance is led by EPCC at the University of Edinburgh and comprises Alpha Data, Nallatech, Xilinx, Algotronix, Scottish Enterprise and the iSLI.
The BCS IT Industry Awards are the leading hallmark of success among practitioners in the IT industry today.
David Clark, BCS Chief Executive, said: "This year's awards are a fitting culmination to our 50th anniversary year which has been exceptional. Technology has enabled unparalleled improvements in productivity and business efficiency over the last 50 years and today IT drives business. These winners have embraced this concept and proven their excellence in innovation and professionalism; they exemplify the importance and value that technology brings to business, society and the economy."
The winners were announced on Thursday 6th December.
Source: EPCC, University of Edinburgh