November 11, 2005
If you are curious about what Bill Gates will say in his keynote address to the supercomputing community next week at SC05 in Seattle, you may be able to get more than a little glimpse of it in "The Next Decade in HPC," an article by Microsoft CTO Craig Mundie, published today in the new issue of CTWatch Quarterly. As this extended excerpt from his article makes clear, Microsoft's thrust into HPC represents a major strategic investment, one designed to enable the company to better anticipate and take the lead in the new directions computing will take.
An excerpt from "The Next Decade in HPC," by Craig Mundie
The global society has an increasing need to solve some very difficult large-scale problems in engineering, science, medicine and many other fields. Microsoft has a huge research effort that has never been focused on such problems. I believe that it is time we started to assess some applications of our research technology outside of our traditional ways of using it within our own commercial products. We think that by doing so, there is a lot that can be learned about the nature of future computing systems.
Many of the things that we thought of as de rigueur in terms of architectural issues and design problems in supercomputers in the late eighties and early nineties have now been shrunk down to a chip. Between 2010 and 2020, many of the things that the HPC community is focusing on today will undergo a similar shrinking. We will wake up one day and find that the kind of architectures that we assemble today with blades and clusters are now on a chip and being put into everything. In my work on strategy for Microsoft I have to look at the 10 to 20 year horizon rather than a one to three year horizon. The company's entry into high performance computing is based on the belief that over the next ten years or so, there will be a growing number of people who will want to use these kinds of technologies to solve more and more interesting problems. Another of my motivations is my belief that the problem set, even in that first ten-year period, will expand quite dramatically in terms of the types of problems where people will use these kinds of approaches.
There was certainly a time, when I was in the HPC business, when the people who wrote high performance programs were making them for consumption largely in an engineering environment. Only a few HPC codes were more broadly used in a small number of fields of academic research. Today, it is doubtful whether there is any substantive field of academic research in engineering or science that could really progress without the use of advanced computing technologies. And these technologies are not just the architecture and the megaflops but also the tools and programming environments necessary to address these problems.
In parallel with these developments in HPC, we are no longer seeing the kind of heady growth in the number of trained computer scientists produced by the world's universities. In fact, in the United States, this number is actually going down. The numbers are still rising in places like India and China right now, but one can forecast fairly directly that, even if all these people were involved in engineering and science, there will not be enough of them to meet future demand. I think the problem is in fact worse than this because computer science is still a young and maturing discipline.
So another interest I have in seeing Microsoft engage with the scientific community is in helping to bridge the divide between the Computer Science community and the broader world of research and engineering. My personal belief is that what we currently know as computing is going to have to evolve substantially - and what we know as programming is going to have to evolve even more dramatically. Every person who is involved in software development will struggle to deal with the complexity that comes from assembling ever larger, more complicated and more interconnected pieces of software. Microsoft, as a company that aspires to be the world leader in providing software tools and platforms, is thinking deeply about how to solve those problems. One of the features that attracts me to the world of high performance computing is that it is a world made up of people who have daily problems that need to be solved, who live in an engineering environment but who are frequently at the bleeding edge in terms of tools and techniques. And frankly there is a level of aggressiveness in this community that cannot really exist in basic business IT operations, particularly not at the scale where people are attempting to solve big new problems. So for all these reasons, Bill Gates and I decided that even though technical computing is not going to be the world's largest software market, it is a strategic market in the sense that the HPC community can help us all better understand these challenging problems. We therefore hope that together we can help move the ball forward in some of these very difficult areas. As we look downstream and contemplate some fairly radical changes in the nature of computing itself and the need for software tools to deal with that, we also expect that this community is a place from which technical leaders can emerge. We would like to be a part of that.
We think that Microsoft has some assets that could really make a difference for the growing community of people who will need to adopt HPC technologies for their business or their research. Before too long, these people will not only want to solve the problem but will also want to be able to configure and manage these HPC systems for themselves. One thing that Microsoft can do really well is to provide good tools not only for programming but also for administration, management and security.
In the full article, which is available at the CTWatch Quarterly website (http://www.ctwatch.org/quarterly), Mundie goes on to highlight some key challenges that HPC now faces, such as the need for algorithmic innovation and dramatically improved parallelism, where Microsoft research can make a significant contribution.
Copies of the entire issue of CTWatch Quarterly will be available on the floor of SC05 at the SDSC, NCSA, European e-Science, and Oak Ridge National Laboratory booths.