Dan Reed helps to drive Microsoft’s long-term technology vision and the associated policy engagement with governments and institutions around the world. He is also responsible for the company’s R&D on parallel and extreme scale computing. Before joining Microsoft, Dan held a number of strategic positions, including Head of the Department of Computer Science and Director of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (UIUC), Chancellor’s Eminent Professor at the University of North Carolina (UNC) at Chapel Hill and Founding Director of UNC’s Renaissance Computing Institute (RENCI).
In addition to his pioneering career in technology, Dan has also been deeply involved in policy initiatives related to the intersection of science, technology and societal challenges. He served as a member of the U.S. President’s Council of Advisors on Science and Technology (PCAST) and chair of the computational science subcommittee of the President’s Information Technology Advisory Committee (PITAC). Dr. Reed received his Ph.D. in computer science from Purdue University.
In my role as Chairman of the ISC Cloud Conference, taking place October 28-29 in Frankfurt, Germany, I interviewed Dan, who will present the keynote, "Technical Clouds: Seeding Discovery."
Wolfgang: Dan, three years ago you joined Microsoft, and you are now Corporate Vice President of Technology Strategy and Policy & Extreme Computing. What was your main reason for leaving research in academia, and what was the greatest challenge you faced in moving to industry?
Dan: It was an opportunity to tackle problems at truly large scale, create new technologies and build radical new hardware/software prototypes. Cloud data centers are far larger than anything we have built in the HPC world to date, and they bring many of the same challenges in novel hardware and software. I have found myself working with many of the same researchers, industry leaders and government officials that I did in academia, but I am also able to see the direct impact of the ideas realized across Microsoft and the industry, as well as in academia and government.
As for challenges, there really were not any. As part of Microsoft Research, I have the chance to work with a world-class team of computer scientists, just as I did in academia. Moreover, I spent many years in university leadership roles and in national and international science policy, and the technology strategy side of my current role has many of the same attributes. On the technology strategy front, my job is to envision the future and educate the community about technology trends and their societal, government and business implications.
Wolfgang: You are our keynote speaker at the ISC Cloud Conference at the end of October in Frankfurt. Would you briefly summarize the key message you want to deliver?
Dan: I’d like to focus on two key messages.
First, let scientists be scientists. We want scientists to focus on science, not on technology infrastructure construction and operation. The great advantage of inexpensive hardware and software has been the explosive growth in computing capabilities, but we have turned many scientists and students into system administrators. The purpose of computing is insight, not numbers, as Dick Hamming used to say. The reason for using computing systems in research is to accelerate innovation and discovery.
Second, the cloud phenomenon offers an opportunity to fundamentally rethink how we approach scientific discovery, just as the switch from proprietary HPC systems to commodity clusters did. It’s about simplifying and democratizing access, focusing on science, discovery and usability. As with any transition, there are issues to be worked out, behavioral models to adapt and technologies to be optimized. However, the opportunities are enormous.
Cloud computing has the potential to provide massively scalable services directly to users, which could transform how research is conducted, accelerating scientific exploration, discovery and results.
Wolfgang: What are the software structures and capabilities that best exploit cloud capabilities and economics while providing application compatibility and community continuity?
Dan: Scientists and engineers are confronted with a data deluge, the result of massive online data collections, large-scale simulations and ubiquitous instrumentation. Large-scale data center clouds were designed to support data mining, ensemble computations and parameter sweep studies. But they are also very well suited to hosting online instances of easy-to-use desktop tools – simplicity and ease of use again.
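To make the parameter sweep idea concrete, here is a minimal sketch, not taken from the interview: the simulate() kernel and its parameters are purely hypothetical stand-ins. The point is the pattern Dan describes, many independent evaluations of the same code over a grid of inputs, which maps naturally onto pools of cloud workers.

```python
# Minimal parameter sweep sketch. simulate() and its parameters are
# hypothetical placeholders for a real simulation kernel.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(viscosity, resolution):
    """Stand-in for a real simulation kernel; returns a single scalar."""
    return viscosity * resolution  # placeholder computation

viscosities = [0.1, 0.2, 0.5, 1.0]
resolutions = [64, 128, 256]

if __name__ == "__main__":
    params = list(product(viscosities, resolutions))
    # Locally this uses a process pool; on a cloud platform the same pattern
    # becomes one task per parameter combination submitted to worker instances.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, *zip(*params)))
    for (v, r), value in zip(params, results):
        print(f"viscosity={v}, resolution={r} -> {value}")
```

Because each run is independent, the sweep scales out almost trivially, which is why this workload is such a natural fit for elastic cloud capacity.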
Wolfgang: How do we best balance ease of use and performance for research computing?
Dan: I believe our focus has been too skewed toward the very high end of the supercomputing spectrum. While this apex of computing is very important, it only addresses a small fraction of working researchers. Most scientists do small scale computing, and we need to support them and let them do science, not infrastructure.
Wolfgang: What are the appropriate roles of public clouds relative to local computing systems, private clouds and grids?
Dan: Each has a role. Public clouds provide elasticity, and their pay-as-you-go cost model is better for those who do not want to bear the expense of acquiring and maintaining private clusters. It also supports those who do not want to know how the infrastructure works, or who want access to large public data sets. Access to scalable computing on demand, from anywhere on the Internet, also has the effect of democratizing research capability. For a wide class of large computations, one doesn't need local computing infrastructure. If the cloud were a simple extension of one's laptop, one wouldn't face a steep supercomputing learning curve, which could completely change a very large and previously neglected part of the research community.
Private clouds are ideal for many scenarios where long-term, dedicated usage is needed. Supercomputing facilities typically fit into this category. Grids are also about interoperability and collaboration, and some cloud-like capability has been deployed on top of a few of the successful grids.
Wolfgang: In a world where massive amounts of experimental and computational data are produced daily, how do we best extract insights from this data, both within and across disciplines, via clouds?
Dan: There are two things we must do. First, we need to ensure that the data collected can be easily accessed. Data collections must be designed from the ground up with this in mind, because moving massive amounts of data is still very hard. Second, we must make the analysis applications easy to access on the web, easy to use and easy to script. Again, make scalable analytics an extension of one's everyday computing tools. Keep it simple. Make it easy to share data and results across distributed collaborations.
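As one illustration of what "easy to script" could look like from a researcher's desktop, here is a hedged sketch; the service URL, endpoint and parameters are invented for the example and do not refer to any real analytics service.

```python
# Sketch of scripting a web-hosted analysis service. The endpoint and its
# parameters are hypothetical, used only to illustrate the idea.
import json
import urllib.request

SERVICE_URL = "https://analytics.example.org/api/cluster"  # hypothetical

def run_remote_analysis(dataset_id, num_clusters):
    """Submit one analysis request and return the parsed JSON result."""
    payload = json.dumps({"dataset": dataset_id, "k": num_clusters}).encode()
    request = urllib.request.Request(
        SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    # The researcher's "workflow" is a few lines of scripting; the heavy
    # lifting happens in the cloud behind the service.
    result = run_remote_analysis("ocean-temps-2010", num_clusters=5)
    print(result)
```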
—–
Dr. Wolfgang Gentzsch is the General Chair for ISC Cloud’10, taking place October 28-29, in Frankfurt, Germany. ISC Cloud’10 will focus on practical solutions by bridging the gap between research and industry in cloud computing. Information about the event can be found at the ISC Cloud event website. HPC in the Cloud is a proud media partner of ISC Cloud’10.