Dr. Lee Margetts, head of Synthetic Environments at the University of Manchester Aerospace Research Institute, has recently been awarded the largest single allocation of CPU on the UK’s HECToR Service. HPCwire contacted him to find out just what he hopes to do with three million processor hours of high-end compute time.
HPCwire: Hello Lee. Well, let me start by asking: what exactly is HECToR?
Lee Margetts: Hi. Well, HECToR is an acronym that stands for High End Computing Terascale Resource. It’s a UK service that was launched in January this year. The first phase is a Cray XT4 with more than 6,000 dual core processors.
HPCwire: Three million processor hours seems like a lot of time on a Cray machine. What are you going to do with it all?
Margetts: Yes, it is a very generous allocation. In financial terms, we’re talking about 1.7 million pounds sterling or 3.5 million dollars. This is a significant award for the University of Manchester. I’m personally not going to be involved in every stage of the process. I’m just one of a large team of academics and researchers who are using ParaFEM, a general purpose code for parallel finite element analysis that I started to develop with my PhD supervisor nearly 10 years ago.
HPCwire: What are you using the code for?
Margetts: Our main project is titled “Ultrascalable Modelling of Materials with Complex Architectures”. This is led by Dr. Paul Mummery and Dr. Mohammad Sheikh at the University of Manchester. The team are using imaging techniques to create 3D models of real material microstructures. After an image is created, it is converted into a finite element mesh using software from a UK company called Simpleware Ltd.
HPCwire: Sounds fascinating, but what exactly are the application areas?
Margetts: There are many. Where do I start? We’re doing some work with the European Space Agency, looking at the thermo-mechanical properties of carbon foams for spacecraft applications. We’re also looking at the design of woven composites for high temperature and high performance engineering applications. There’s also a project running that’s looking at the lifetime behavior of concrete in nuclear reactor pressure vessels. This work is led by Professor Roger Crouch at the University of Durham. But perhaps the most exciting application area involves dinosaurs.
HPCwire: HPC and dinosaurs. That sounds like a combination that wouldn’t get many hits in Google!
Margetts: You’d probably find my colleague Dr. Bill Sellers at Manchester, who recently published an article stating that T. rex could run faster than David Beckham. We’ve recently been awarded funding by Microsoft to look in more detail at dinosaur locomotion.
HPCwire: Who’s Beckham, another colleague?
Margetts: Ha ha! No, no! He’s a famous Manchester footballer who wore a skirt, married a pop star and captained the England soccer team. Beckham aside, another guy you’ll come across at Manchester is Dr. Phil Manning. He’s trying to shake up paleontology, but he’s also causing ripples in engineering too.
HPCwire: What’s he up to?
Margetts: He’s using supercomputers and finite elements as interpretive tools in paleontology. He’s recently been over to Boeing in the US to scan 50 cubic feet of rock.
HPCwire: What does he hope to achieve?
Margetts: The rock contains a dinosaur mummy. Much of the soft tissue and skin has been preserved. This is really exciting, as it will provide new evidence about what dinosaurs were really like. We’ll be using this information to carry out detailed biomechanics simulations on HECToR.
HPCwire: So why is this causing ripples in engineering?
Margetts: His work at Boeing caused a lot of gossip. In the end, it gave NASA the idea of scanning the nose cone of the Space Shuttle — to see how the ceramic tiles “flowed” on re-entry.
HPCwire: All these applications are going to use, what did you call it? ParaFEM?
Margetts: Yes, that’s right. But we’re also going to be using robotics software and genetic algorithms to investigate dinosaur locomotion.
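[Editor’s note: a minimal sketch of the genetic-algorithm approach mentioned above. The “gait” here is just a vector of joint activation parameters, and the fitness function is a hypothetical stand-in that rewards candidates close to an arbitrary target; in the actual research, each candidate would be scored by running a full musculoskeletal simulation.]

```python
import random

TARGET = [0.8, 0.2, 0.5, 0.9]  # hypothetical "ideal" joint activations

def fitness(genome):
    # Higher is better: negative squared distance to the target.
    # A real study would replace this with a locomotion simulation score.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2, scale=0.1):
    # Randomly perturb some genes, clamped to the [0, 1] activation range.
    return [min(1.0, max(0.0, g + random.uniform(-scale, scale)))
            if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=100, seed=42):
    random.seed(seed)
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because each candidate gait is evaluated independently, populations like this map naturally onto thousands of HPC cores, which is one reason the approach suits a machine like HECToR.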
HPCwire: And ParaFEM is an academic code?
Margetts: Yes.
HPCwire: So is the code going to be robust enough for HPC users in industry?
Margetts: That’s a very topical question. During the launch of Manchester’s Aerospace Research Institute, a chap from Airbus said to me “Nice work Lee, but is this stuff ever going to see the light of day?”
HPCwire: Exactly.
Margetts: Well, in the UK, getting research funding from the government for code development is unheard of. This means there’s little money to productize academic code. Fortunately for the team, I was awarded funding for commercialization from the University’s technology transfer company, UMIP Ltd.
HPCwire: Has that had any success?
Margetts: We’ve met with a number of high profile end users and engineering software vendors. We’re hoping to strike up some kind of joint development partnership. Although some of the software vendors we’ve met have said that they understand the technology to be scalable at every stage of the analysis process, it’s not on their product development horizon. Personally, I think this kind of response is bad news for HPC hardware vendors: more software applications would mean a higher demand for machines.
HPCwire: So what’s your vision for the future of HPC? Do you see the market as being limited?
Margetts: Well, I’m going to be a little controversial here. I personally think the term HPC creates a barrier between two communities of computer users: those that use HPC and those that use desktop machines. Since I started working in the field 10 years ago, I’ve noticed a very blatant sense of superiority and elitism when HPC professionals use the phrase. I prefer to use the term “scalable computing.” Software should be designed, right from the beginning, to run on all sizes of computer system. All users really want is for their applications to run faster. This applies equally to desktop machines and HPC systems.
HPCwire: Some people might say that sounds a little impractical.
Margetts: Not at all. Interactive Supercomputing Inc is doing it already. The user works on their laptop and the compute-intensive parts of the analysis are off-loaded to a backend HPC system, a compute server. The HPC system could be anywhere in the world. The user’s not going to give a single thought to where it is. I’m quite excited at the prospect of the paradigm we’re entering, where technical computing is scalable, mobile and interactive.
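[Editor’s note: a minimal sketch of the offload model described above, in which the user works locally and compute-intensive calls are shipped to a backend compute server. Here a local thread pool stands in for the remote HPC system, and `heavy_kernel` is a hypothetical stand-in for a compute-intensive task; a real deployment would submit the work over a network instead.]

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_kernel(n):
    # Stand-in for a compute-intensive task (e.g. a finite element solve).
    return sum(i * i for i in range(n))

class ComputeServer:
    """Facade hiding where the work actually runs; the user's code never
    needs to know whether the backend is local or on another continent."""

    def __init__(self, workers=2):
        # A thread pool plays the role of the remote HPC backend here.
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, fn, *args):
        # Returns a future immediately, so the client stays interactive.
        return self._pool.submit(fn, *args)

    def shutdown(self):
        self._pool.shutdown()

if __name__ == "__main__":
    server = ComputeServer()
    future = server.submit(heavy_kernel, 100_000)
    # The "laptop" side can keep working while the backend computes...
    result = future.result()  # ...and blocks only when the answer is needed
    print(result)
    server.shutdown()
```

The design point is the future-based interface: because `submit` returns immediately, the user’s session stays interactive regardless of where, or how far away, the heavy computation actually runs.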
HPCwire: Thank you Lee. It was a pleasure talking with you. Best of luck with those three million processor hours.