December 02, 2005
This year marks the 15th birthday of the Edinburgh Parallel Computing Centre (EPCC). This is, for us, a significant milestone, and I wanted to take this opportunity to review the past decade and a half, not only in terms of the changes we have seen as an organization, but also to reflect on the revolution that has taken place in the HPC arena more generally.
Although EPCC was founded in 1990 as the University of Edinburgh's flagship for HPC and its application in computational science, our roots go back almost a decade earlier. In 1981 the Department of Physics launched an initiative to buy two ICL Distributed Array Processors and use these highly parallel computers as a cost-effective alternative to vector computers for computational science. The success of the academic, and latterly industrial, research on these and successor machines proved that there were real opportunities in pulling this activity together into a coherent department - and EPCC was born.
Speaking as one of those who came out of the Department of Physics to found EPCC, those were heady days. Not only was parallel computing the "in" computing technology, but many more scientists were realising that computation was emerging as the third methodology of science, complementing theory and experiment. This meant that new opportunities were emerging in exciting new fields - and that included industrial applications. Embracing the linkage between academia and industry from the outset set EPCC on a road which it has followed ever since. Encapsulated in the term 'win-win-win', we have sought to form alliances within and across projects so that the participants have obtained more for their investment than would have been possible otherwise. Without this entrepreneurial approach we would not have been able to build up our range of facilities, which are unmatched in any European university, nor would our 70 staff members be able to span such a wide range of activities, from HPC facilities management through industrial software development, European co-ordination and HPC training, to research into HPC tools and techniques and academic computational science development.
More recently, the necessity of marrying the outputs of HPC research with experimental data has driven much grid research worldwide. EPCC, alongside its sister institute, the National e-Science Centre, has taken a leadership role in developing the middleware to support such grid-based research and, subsequently, in helping academia and industry to build applications on top of this infrastructure. In recognition of the success of the OGSA-DAI project, one of the largest in which EPCC has ever been involved, we became a founding partner in the Globus Alliance two years ago. It was therefore a source of great satisfaction to hear a few days ago that our OGSA-DAI work will be funded for another three years.
Looking back can be dangerous, as memory can be a distorting mirror, but I would like to pick out a few milestones along our path. The first of these was our collaboration with Thinking Machines, which brought the Connection Machine to Edinburgh in 1991. This was the first time that a parallel computer in the UK out-performed the Cray vector supercomputers, then at RAL. Not only did this machine give a boost to Edinburgh researchers, but it raised the visibility of parallel computing and of EPCC on the national and international scene, and led fairly directly to two other key activities that we have carried on ever since: European research, training and co-ordination; and UK national HPC services.
The development of our industrial program into a slick machine, bringing in clients from blue-chip multinationals to local SMEs, has also been vital to our success. In a portfolio of projects with over 100 clients, it is the unusual ones that stick out, and I always think of our work on automated inspection of coated mushrooms, or on monitoring the effectiveness of fishing nets - even if projects such as designing more effective wind turbines, or maximizing extraction from oil reservoirs, may have had more widespread effect.
Although we have had a training activity for a decade, I was particularly pleased when, a few years ago, we started the UK's first MSc in HPC. This has been a real success, attracting many European and international students to the University each year. Recently expanded to include a linkage to computational PhD students from around the UK, this project is going from strength to strength, and we see it as vital to maintaining the UK's world-leading position in computational science research. Learning from the experiences of the past, and from other application areas, is vital as the use of HPC in computational science breaks out of its traditional homelands of physics, chemistry and engineering into new areas such as biology, medicine and geology.
If the number of application domains has increased as HPC has become more mainstream, we have seen a corresponding decline in the range of technology options available. When Greg Wilson and I edited a book on the HPC technology marketplace in 1991, we had to be selective to keep the book under 400 pages. Today, with the domination of the big computer companies, we would find it hard to produce a long pamphlet. Is this a bad thing? Provided that it does not stifle future progress, my answer would be no. Code portability between platforms is better than ever, and the big companies have produced top-end machines at ever more affordable prices - something that would not have been possible without the benefits of scale. The emergence of novel-architecture machines, such as the IBM Blue Gene or the home-grown QCDOC, or machines from niche providers such as ClearSpeed, shows that the HPC technology roadmap is an exciting one.
Every decade produces its own claims that "Moore's Law is just about to die" and, so far, every decade has been wrong. Equally, it would be foolish to argue that single microprocessors can continue to deliver higher performance through continued reductions in feature size. One clear result of this is to make the parallel computing paradigm, which we started with because of its cost-effectiveness, a fundamental technology for the future. Only such techniques will enable us to overcome the physical limitations imposed on the design of ever-faster microprocessors. I believe that such pressures can only increase the applicability of our skills and resources in the years to come.

Looking forward 15 years in an area which is changing as rapidly as leading-edge computing is dangerous. However, I see an exciting road ahead, with new scientific problems and new tools appearing to tackle them. EPCC has all of the skills, drive and ambition to take on those challenges, and I feel privileged to lead such an organization. EPCC is not hardware and buildings, it is people; without the dedication of our staff we would be nothing, but with today's highly talented team we are looking forward to another bright 15 years.