In July 2005, Duncan Buell keynoted the Reconfigurable Systems Summer Institute at NCSA. Buell directed the Splash 2 reconfigurable computing project in the 1990s and now leads a university research team supported by the Department of Defense. NCSA's J. William Bell interviewed Buell as he moved into new digs as interim dean of the University of South Carolina's College of Engineering and Information Technology.
Bell: Many people are just starting to get interested in reconfigurable computing. Could you give a definition of the field?
Buell: The field started almost as soon as these FPGAs started to exist in the '80s. You have a chip that can take on different hardware characteristics or different gate characteristics depending on what it is configured to do. The idea is that if you have a computation that is not supported successfully by the traditional instruction set that Intel or whoever hands you, then you can essentially design your own arithmetic logic unit to handle exactly the computation you need.
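A minimal sketch of that idea in ordinary C, illustrative only and not drawn from Splash 2 or any particular tool chain: population count is a bit-level operation that mid-2000s instruction sets had no single instruction for, so software loops over the bits one at a time, while an FPGA could lay the whole reduction out as a single custom datapath and evaluate it in a clock or two.

#include <stdio.h>
#include <stdint.h>

/* Population count in software: one loop iteration per set bit.
 * On an FPGA the same reduction could be built as one custom
 * datapath, which is the "design your own ALU" idea above. */
static unsigned popcount32(uint32_t x)
{
    unsigned count = 0;
    while (x) {
        x &= x - 1;   /* clear the lowest set bit */
        count++;
    }
    return count;
}

int main(void)
{
    printf("%u\n", popcount32(0xF0F0u));   /* prints 8 */
    return 0;
}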
Bell: Why should this be on the radar screen of the discipline scientist — the chemist or the astronomer who's doing simulation or data analysis?
Buell: In the last couple of years, the chips have gotten big enough and fast enough that you could think about laying out the circuitry for floating point [operations]. And that's something that the scientific computing people are taking note of.
You can create a number of processing units and have them all functioning in parallel. If they're all on the same chip, all the data stays on the chip. You don't have to go back to memory, not even back to cache. Now, of course, you have to realize that floating point on the FPGA is going to be slower than on an Intel processor, so the speedup's not necessarily a factor of 100. But you've eliminated the overhead, so maybe you only get a factor of 50. You need to have at least a factor of 50 before people take notice. Anything less than that, and they just ride the technology curve. They just buy another dozen clusters or something.
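The arithmetic behind that estimate can be made concrete with a back-of-envelope sketch. Every number below is an assumption chosen only to illustrate the reasoning, not a measurement of any real system: many slow floating-point units working from on-chip data can still outrun one fast processor once memory stalls are counted.

#include <stdio.h>

int main(void)
{
    double fpga_clock_mhz = 150.0;   /* assumed FPGA clock rate            */
    double cpu_clock_mhz  = 3000.0;  /* assumed host CPU clock rate        */
    int    parallel_units = 200;     /* assumed on-chip floating-point units */
    double cpu_efficiency = 0.20;    /* assumed fraction of peak the CPU   */
                                     /* sustains once memory stalls count  */

    double fpga_mops = fpga_clock_mhz * parallel_units;   /* Mops/s */
    double cpu_mops  = cpu_clock_mhz * cpu_efficiency;    /* Mops/s */

    printf("estimated speedup: %.0fx\n", fpga_mops / cpu_mops);   /* prints 50x */
    return 0;
}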
Bell: How do you see this shift expanding out to more discipline scientists?
Buell: Part of my assumption in all of this has been that if you demonstrate success, people will notice. As long as you're talking about the performance advantages you might be able to get, people will smile nicely, be polite, and do whatever they're doing. The big deal in all of this is that having the hardware is only part of the game. The really important part is the programming environment, so that ordinary programming types are able to make use of the machine.
Bell: What are the key features of that software environment?
Buell: I would maintain that they are going to be willing to look at the hardware as something that will require a special version of something like C or C++. They're not going to be willing to learn a real hardware language.
The other big problem always has been that debugging has to be something that looks like debugging. There has to be something like a software simulator that you can run a debugger on, because the first 90 percent of the programming effort is just getting the right answers out and has nothing to do with the hardware and not a whole lot to do with performance.
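A sketch of what that could look like in practice, with the caveat that both the kernel and the tool flow are hypothetical: the loop below is ordinary C, so it can be compiled, run, and debugged on a workstation to get the answers right first, and only afterwards handed to a C-to-gates tool that would unroll it into parallel multiply-add units.

#include <stdio.h>

/* A deliberately plain C kernel, the kind of code the interview suggests
 * scientists would want to hand to an FPGA tool chain. Written this way
 * it compiles with an ordinary C compiler, so the right-answers phase can
 * use a normal debugger; only later would an assumed C-to-gates tool map
 * the loop onto parallel hardware units. */
void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];   /* candidate for hardware loop unrolling */
    }
}

int main(void)
{
    float x[4] = {1, 2, 3, 4};
    float y[4] = {0, 0, 0, 0};
    saxpy(4, 2.0f, x, y);
    printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);   /* expect 2 4 6 8 */
    return 0;
}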
Bell: And what can someone in your position or NCSA's position do to ensure these sorts of things actually happen?
Buell: If you had a few people at the edge, and you had someone willing to undertake an implementation, one message is: “I did the implementation and it was successful, but it would have been more successful and I would have gotten done a lot faster if in fact the programming environment had been the sort of thing it should have been.” I think it's going to take a few evangelists to demonstrate they're willing to put in the effort and then point out that they really shouldn't have to put in that much effort.
It's a shakedown cruise. You're going to go see what works, what doesn't work, what could be better. The more applications, the more you expose the features that need to be included that for some reason weren't.
Bell: In your keynote, you talked about the “must-haves” for reconfigurable computing: the big speedups, the software environment. But you also discussed it being something people choose to do again.
Buell: When I ran Splash 2, I figured we'd start with a dozen applications that might have potential. Given that we had a fixed set of hardware constraints, probably half of the dozen we would only look at briefly, because they wouldn't match the architecture we had. That would leave us with six that we might actually implement. Of those, three might work and show a performance improvement, but say the improvement is only 10 or 15 times. Then you sit back, and you say that was a nice experiment.
Bell: So there's hardware overhead and then there's human overhead?
Buell: If it's going to be a serious application, it's going to be six or nine months' worth of programming effort. In six or nine months, machines get better. So really it's the people cost. Is it worth doing it in a somewhat unconventional way compared to doing it in a much more conventional way and just not getting as good performance? From the point of view of a middle manager, is it really a good use of all the resources?
Bell: It's interesting to hear you talk about it like that. Rob Pennington [NCSA's Chief Technology Officer] and the people in our Innovative Systems Lab have acknowledged that aspect of it and are looking at a number of different — as many as a dozen, as you just described — applications. They make a full admission up front that some of this is just exploring the space.
Buell: That's why it's called research. If there isn't a significant possibility of failure, it's not research; it's just work. That's not where the “Ah-ha! Gotcha!” big breakthrough comes from.
This article originally appeared in the December 13, 2005 issue of Access Online and has been provided courtesy of NCSA.