October 28, 2005
The adoption of field-programmable gate arrays (FPGAs) is increasing, but a fuller understanding and acceptance of their capabilities is needed for the technology to make the next leap into the mainstream. Malachy Devlin, senior vice president and chief technology officer of Nallatech, recently spoke with HPCwire to address some questions about what FPGAs can do and what is needed to foster broader acceptance.
HPCwire: With wider deployment of FPGA technology around the corner and its growing acceptance within the HPC community, how do you see this technology helping HPC applications?
Devlin: FPGAs may appear to be a new technology, but they are now over 20 years old; Xilinx first invented them in 1984. Nallatech has over 1,500 site installations of FPGA-based processing systems, which illustrates that the technology is well along the road to adoption. However, most of this adoption has been realized in high-performance computing within the embedded marketplace. With proven examples of deployment from this area and the continued growth in device capacity, we have shown the viability of FPGAs for carrying out algorithms based on bit, integer and now floating-point arithmetic.
We have also shown that FPGAs can deliver processing performance from two times to over 100 times that of the fastest microprocessors, such as the Opteron or Itanium 2. Interestingly, this performance does not come at the cost of a larger power budget; in fact, FPGAs run much cooler than a microprocessor. Where a high-end microprocessor consumes over 100W, FPGAs typically consume around 15W when executing high-performance algorithms. The knock-on effect is significant: even a two-times speedup at roughly 15W versus 100W works out to more than a ten-fold increase in GFLOPS/Watt, and with that comes lower electricity costs, lower air conditioning costs and less machine room floor space. The last of these is realized through the reduced thermal density, which enables us to pack more devices into a given space, shrinking the floor area required for large installations.
HPCwire: There is a perception that FPGAs are difficult to program. How is this technology being made more accessible?
Devlin: It is true that FPGAs are the younger sibling of the microprocessor, and hence the tool flows and methodologies have not reached the same level of maturity. But this is changing rapidly. When I first used FPGAs in 1989, the main tools for designing them were pen, paper and a basic layout tool called XACT. Today, we are able to write programs directly in C, FORTRAN and MATLAB and compile them to FPGAs. Investment in this area continues apace.
To get the full performance of FPGAs, we need to take advantage of their ability to run many operations in parallel: whereas an Itanium 2 has five floating-point units, we are able to put hundreds of floating-point units in an FPGA. This does mean that code refactoring may be necessary to exploit that parallelism, so there is a limit to taking dusty-deck code and having it run instantly on an FPGA.
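As a minimal illustration of the kind of refactoring involved (a generic sketch under assumed tooling, not a depiction of Nallatech's own flow; the function names and block factor are hypothetical), a dot-product loop can be rewritten in C so that an FPGA compiler has independent accumulators to map onto parallel floating-point multiply-accumulate units:

```c
/* Illustrative sketch only: a generic refactoring pattern for C-to-FPGA
   compilation. Names and the block factor are hypothetical. */
#define N     1024
#define BLOCK 8    /* intended number of parallel multiply-accumulate units */

/* Sequential form: one multiply-accumulate per iteration, one running sum. */
float dot_sequential(const float *a, const float *b)
{
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];
    return sum;
}

/* Refactored form: BLOCK independent partial sums remove the single
   accumulation dependency, so the inner loop can be fully unrolled
   into parallel hardware. */
float dot_parallel(const float *a, const float *b)
{
    float partial[BLOCK] = {0.0f};

    for (int i = 0; i < N; i += BLOCK)
        for (int j = 0; j < BLOCK; j++)
            partial[j] += a[i + j] * b[i + j];

    float sum = 0.0f;
    for (int j = 0; j < BLOCK; j++)
        sum += partial[j];
    return sum;
}
```

The second form computes the same result (up to floating-point rounding order) but exposes eight independent accumulators, which is precisely the sort of restructuring that lets an FPGA exploit spatial parallelism where a microprocessor would simply iterate.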
We shouldn't look at this entirely as a disadvantage. FPGAs are allowing us to break the shackles of von Neumann and instruction set architectures. This can only be a good thing, as we can now dynamically create processing engines that fit the algorithm, rather than fitting the algorithm to a particular processor architecture. In fact, the need for code refactoring is really a result of having more choice in how the algorithm implementation is constructed. This is the first time that software developers have been given the ability to construct their own processor architectures, rather than relying on the decisions of an architecture team inside a processor company that must cater to a wide range of application areas.
HPCwire: Nallatech recently partnered with SGI to provide FPGA technology to its products. How vital are technology partnerships such as these in the development of the FPGA market?
Devlin: Partnerships are critical to success. Our relationship with SGI brings together best-in-class HPC and FPGA computing technology, and through this blending of capabilities we are developing some great innovations for reconfigurable computing. Partnerships also need to go wider than this. We need to create a complete ecosystem for the technology to survive and prosper. Fortunately, this is taking shape through initiatives such as the FPGA High Performance Computing Alliance (FHPCA) and OpenFPGA.org, which are bringing together over 26 organizations to address standardization and increase awareness of FPGAs within the high-performance computing space.
HPCwire: Which particular industries do you see taking the lead in the wider deployment of FPGA technology?
Devlin: We are seeing a lot of interest from a wide range of industries, driven by FPGAs' demonstrated improvements in computing performance per watt, per dollar and per cubic foot over traditional processors. These are all key parameters that are being pushed to their limits by the latest large clusters.
Our FPGA systems have been running applications in a wide range of industries, such as seismic processing, bioinformatics, simulation and encryption. We see performance improvements ranging from 17 times in seismic processing to over 250 times in bioinformatics. It's important to note that applications such as seismic processing and simulation incorporate significant amounts of floating-point arithmetic in their algorithms, which is typically considered a no-go for FPGAs. This is not true; we have been doing floating point on FPGAs since 2002, primarily single precision, and we have had double-precision floating-point capability since 2004. We are also programming these floating-point algorithms in C rather than a hardware-oriented language such as VHDL, making FPGAs much more accessible to HPC developers.
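To illustrate the point about staying in C rather than VHDL, here is a minimal single-precision sketch (a generic example, not one of the application codes mentioned above; the filter length and function name are assumptions): a fixed-length FIR filter, a staple of signal and seismic processing, whose inner reduction an FPGA compiler can unroll across many parallel floating-point units.

```c
/* Illustrative sketch: a single-precision FIR filter in plain C.
   TAPS and the function name are arbitrary; no vendor extensions are used. */
#define TAPS 32

void fir_filter(const float *in, float *out, const float *coeff, int n)
{
    /* Outputs for the first TAPS-1 samples are not computed in this sketch. */
    for (int i = TAPS - 1; i < n; i++) {
        float acc = 0.0f;
        /* Fixed-length reduction: a candidate for full unrolling into
           TAPS parallel multiply-add units on an FPGA. */
        for (int t = 0; t < TAPS; t++)
            acc += coeff[t] * in[i - t];
        out[i] = acc;
    }
}
```

Written this way, the kernel remains ordinary portable C; the mapping onto parallel hardware is left to the compilation flow rather than to a hand-crafted VHDL design.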
HPCwire: What do you see as the main challenges FPGA technology will face in the future?
Devlin: FPGAs have brought a new processing paradigm to the mix and have certainly shown their capabilities in real applications. Moving forward, we need to continue the push on tools, environments and methodologies so that FPGAs become more manageable and align with today's software techniques. This requires the community to come together and create appropriate standards, so that we do not fragment the market before it goes mainstream. FPGAs are the first commercially successful technology that gives us the tools to move from the von Neumann era to a new era in which we are no longer constrained by fixed processor architectures.
Malachy Devlin is senior vice president and chief technology officer of Nallatech. He obtained a Ph.D. in signal processing from Strathclyde University and is recognized worldwide as an industry expert on FPGA technologies. He is a software specialist with several years' experience at various companies, including the National Engineering Laboratory, Telia and Hughes Microelectronics (now part of Raytheon). He was part of the team that developed Nallatech's FPGA-based DIME modular technology.