Technical Clouds: Seeding Discovery – An Interview with Microsoft’s Dan Reed

By Wolfgang Gentzsch

September 16, 2010

Dan Reed helps to drive Microsoft’s long-term technology vision and the associated policy engagement with governments and institutions around the world. He is also responsible for the company’s R&D on parallel and extreme scale computing. Before joining Microsoft, Dan held a number of strategic positions, including Head of the Department of Computer Science and Director of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (UIUC), Chancellor’s Eminent Professor at the University of North Carolina (UNC) at Chapel Hill and Founding Director of UNC’s Renaissance Computing Institute (RENCI). 

In addition to his pioneering career in technology, Dan has also been deeply involved in policy initiatives related to the intersection of science, technology and societal challenges. He served as a member of the U.S. President’s Council of Advisors on Science and Technology (PCAST) and chair of the computational science subcommittee of the President’s Information Technology Advisory Committee (PITAC). Dr. Reed received his Ph.D. in computer science from Purdue University.

In my role as Chairman of the ISC Cloud Conference in Frankfurt, Germany, October 28-29, I interviewed Dan, who will present the keynote on Technical Clouds: Seeding Discovery.

Wolfgang: Dan, three years ago you joined Microsoft and are now Corporate Vice President of Technology Strategy and Policy & Extreme Computing. What was your main reason for leaving research in academia, and what was the greatest challenge you faced when moving to industry?

Dan: It was an opportunity to tackle problems at truly large scale, create new technologies and build radical new hardware/software prototypes. Cloud data centers are far larger than anything we have built in the HPC world to date, and they bring many of the same challenges in novel hardware and software. I have found myself working with many of the same researchers, industry leaders and government officials that I did in academia, but I am also able to see the direct impact of the ideas realized across Microsoft and the industry, as well as in academia and government.

As for challenges, there really were not any. As part of Microsoft Research, I have a chance to work with a world-class team of computer scientists, just as I did in academia. Moreover, I had spent many years in university leadership roles and in national and international science policy, and the technology strategy aspects have many of the same attributes. On the technology strategy front, my job is to envision the future and educate the community about technology trends and their societal, government and business implications.

Wolfgang: You are our keynote speaker at the ISC Cloud Conference at the end of October in Frankfurt. Would you briefly summarize the key messages you want to deliver?

Dan: I’d like to focus on two key messages.

First, let scientists be scientists. We want scientists to focus on science, not on technology infrastructure construction and operation. The great advantage of inexpensive hardware and software has been the explosive growth in computing capabilities, but we have turned many scientists and students into system administrators. The purpose of computing is insight, not numbers, as Dick Hamming used to say. The reason for using computing systems in research is to accelerate innovation and discovery.

Second, the cloud phenomenon offers an opportunity to fundamentally rethink how we approach scientific discovery, just as the switch from proprietary HPC systems to commodity clusters did.  It’s about simplifying and democratizing access, focusing on science, discovery and usability. As with any transition, there are issues to be worked out, behavioral models to adapt and technologies to be optimized. However, the opportunities are enormous.

Cloud computing has the potential to provide massively scalable services directly to users, which could transform how research is conducted, accelerating scientific exploration, discovery and results.

Wolfgang: What are the software structures and capabilities that best exploit cloud capabilities and economics while providing application compatibility and community continuity?

Dan: Scientists and engineers are confronted with a data deluge that is the result of our massive online data collections, massive simulations and ubiquitous instrumentation. Large-scale data center clouds were designed to support data mining, ensemble computations and parameter sweep studies. But they are also very well suited to host online instances of easy-to-use desktop tools – simplicity and ease of use again.
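By way of illustration (not a tool Dan names), a parameter sweep is the canonical cloud-friendly workload: the same simulation runs independently at many parameter values, so the points can be farmed out to as many machines as are available. A minimal sketch in Python, assuming a hypothetical simulate() kernel and using only the standard library – locally it fans out across cores; on a cloud, each point would simply land on a separate worker node:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(viscosity, resolution):
    """Hypothetical stand-in for a real simulation kernel."""
    return {"viscosity": viscosity, "resolution": resolution,
            "result": viscosity * resolution}  # placeholder computation

# The Cartesian product of parameter values defines the sweep.
viscosities = [0.1, 0.2, 0.5, 1.0]
resolutions = [64, 128, 256]
sweep = list(product(viscosities, resolutions))

if __name__ == "__main__":
    # Every point is independent, so the sweep scales out to however many
    # workers are available (local cores here, cloud nodes in practice).
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, *zip(*sweep)))
    for record in results:
        print(record)
```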

Wolfgang: How do we best balance ease of use and performance for research computing?

Dan:  I believe our focus has been too skewed toward the very high end of the supercomputing spectrum.  While this apex of computing is very important, it only addresses a small fraction of working researchers. Most scientists do small scale computing, and we need to support them and let them do science, not infrastructure.

Wolfgang: What are the appropriate roles of public clouds relative to local computing systems, private clouds and grids?

Dan: Both have a role. Public clouds provide elasticity, and their pay-as-you-go cost model is better for those who do not want to bear the expense of acquiring and maintaining private clusters. It also supports those who do not want to know how infrastructure works or who want to access large, public data sets. Access to scalable, on-demand computing from anywhere on the Internet also has the effect of democratizing research capability. For a wide class of large computations, one doesn’t need local computing infrastructure. If the cloud were a simple extension of one’s laptop, there would be no steep supercomputing learning curve, which could completely change a very large and previously neglected part of the research community.

Private clouds are ideal for many scenarios where long-term, dedicated usage is needed.  Supercomputing facilities typically fit into this category. Grids are also about interoperability and collaboration, and some cloud-like capability has been deployed on top of a few of the successful grids. 
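To make the pay-as-you-go trade-off Dan describes concrete, a back-of-the-envelope calculation shows the utilization level at which owning a cluster beats renting equivalent capacity on demand. The figures below are purely hypothetical assumptions, not numbers from the interview:

```python
# Back-of-the-envelope break-even between a private cluster and
# pay-as-you-go cloud capacity. All prices are hypothetical.
cluster_capex = 500_000.0               # purchase cost, amortized over lifetime
cluster_lifetime_hours = 3 * 365 * 24   # three years of wall-clock time
cluster_opex_per_hour = 10.0            # power, cooling, admin staff
cloud_price_per_hour = 50.0             # equivalent capacity rented on demand

# An owned cluster costs this much every hour, whether or not it is busy.
owned_cost_per_hour = cluster_capex / cluster_lifetime_hours + cluster_opex_per_hour

# The cloud charges only for the hours actually used, so the cluster wins
# only when utilization exceeds the ratio of the two hourly costs.
break_even_utilization = owned_cost_per_hour / cloud_price_per_hour
print(f"Owned cost per hour: ${owned_cost_per_hour:.2f}")
print(f"Break-even utilization: {break_even_utilization:.0%}")
```

With these assumed prices, the owned cluster pays off only above roughly 58 percent sustained utilization; below that, elastic on-demand capacity is cheaper.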

Wolfgang: In a world where massive amounts of experimental and computational data are produced daily, how do we best extract insights from this data, both within and across disciplines, via clouds?

Dan: There are two things we must do. First, we need to ensure that the data collected can be easily accessed. Data collections must be designed from the ground up with this concept in mind, because moving massive amounts of data is still very hard. Second, we must make the analysis applications easy to access on the web, easy to use and easy to script. Again, make scalable analytics an extension of one’s everyday computing tools. Keep it simple. Make it easy to share data and results across distributed collaborations.
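As one interpretation of “easy to script” (an assumption on my part; Dan names no specific service), the sketch below calls a hypothetical cloud-hosted analysis endpoint over HTTP using only Python’s standard library, so that a scalable analytic over data that already lives in the cloud looks like a single function call from a desktop script or notebook:

```python
import json
from urllib import request

# Hypothetical endpoint of a cloud-hosted analysis service; in practice
# this would be whatever URL the service actually publishes.
SERVICE_URL = "https://analysis.example.org/api/summarize"

def summarize(dataset_id, statistic="mean"):
    """Ask the remote service to compute a statistic over a dataset that
    already lives in the cloud, so no bulk data has to move."""
    payload = json.dumps({"dataset": dataset_id, "statistic": statistic}).encode()
    req = request.Request(SERVICE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# From a script or notebook, the scalable analytic is one call, e.g.:
# result = summarize("ocean-temperatures-2010", statistic="max")
```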

—–

Dr. Wolfgang Gentzsch is the General Chair for ISC Cloud’10, taking place October 28-29 in Frankfurt, Germany. ISC Cloud’10 will focus on practical solutions by bridging the gap between research and industry in cloud computing. Information about the event can be found at the ISC Cloud event website. HPC in the Cloud is a proud media partner of ISC Cloud’10.
