May 20, 2010
Microsoft's ambitions have always been big. A quarter of a century ago, the company's primary mission was to put a consumer-friendly computer on every desktop. At least in the industrialized world, they can consider that mission accomplished. Now they want to do nothing less than model the world.
Actually, what they're envisioning is offering an integrated set of computing tools and platforms that enables others to model the world. The target applications include all the typical HPC suspects: scientific simulations, medical imaging, financial modeling, aerospace design, real-time predictive analytics, bioinformatics, and so on. The overarching plan is to integrate Microsoft's current portfolio of HPC server products, its newly hatched parallel computing tools, and the Azure cloud platform into a complete technical computing portfolio.
To go along with that vision, Microsoft has created a Technical Computing group that brings all the pieces together. Bill Hilf will be heading up marketing for the new group, with Kyril Faenov leading the engineering team. It will be made up of the HPC group that Faenov started six years ago, the Interactive Supercomputing team brought aboard when that company was acquired last year, the parallel computing group, and a sprinkling of folks from the Microsoft Research division.
According to Faenov, Microsoft is aiming the new effort at the millions of scientists, engineers and analysts out there looking for more user-friendly technical computing, or in his words, "to make their lives easier, lower their costs of discovery, and make innovation faster." That, of course, was and is the theme of the company's current Windows HPC Server 2008 platform for cluster computing, and that same focus will now apply across all their HPC solutions, parallel computing tools and Windows Azure cloud offering.
Bringing the Azure cloud into the HPC fold was a no-brainer. In fact, Faenov says HPC and supercomputing applications already represent a large percentage of the early adopters for their new cloud offering. Microsoft sees Azure as a way to bring technical computing to a much broader set of customers -- either those that don't have the financial wherewithal (or expertise) to build their own HPC infrastructure or those that do have in-house cluster systems, but would like to burst to the cloud at times of peak demand.
Although little of this capability is in place today, the long-term goal is to be able to run a Windows-based HPC app on a local cluster running HPC Server, in the Azure cloud, on a workstation grid, or on some combination of the three. The idea is to make the underlying platform transparent to the applications, so that applications can be migrated as needed. The apps themselves could be in the form of SOA workloads, Dryad programs, or more traditional MPI-based applications.
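The platform-transparency idea — the same application code running unchanged on a local cluster, in the cloud, or on a workstation grid — amounts to writing the application against an execution interface and swapping the backend underneath it. The sketch below is purely illustrative (the backend classes and names are hypothetical, not Microsoft's API); a thread pool stands in for a real scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

class LocalBackend:
    """Stand-in for a workstation or on-premise cluster scheduler."""
    def run(self, fn, inputs):
        with ThreadPoolExecutor(max_workers=4) as pool:
            return list(pool.map(fn, inputs))

class CloudBackend:
    """Stand-in for a cloud service such as Azure; here it just runs
    the work serially, but the application code cannot tell."""
    def run(self, fn, inputs):
        return [fn(x) for x in inputs]

def simulate(step):
    # Toy "simulation kernel" -- any pure function of the input works.
    return step * step

def run_app(backend):
    # The application is written once against the backend interface;
    # migrating between platforms means swapping the backend object only.
    return backend.run(simulate, range(5))

print(run_app(LocalBackend()))   # [0, 1, 4, 9, 16]
print(run_app(CloudBackend()))   # same answer, different "platform"
```

The point of the pattern is that bursting to the cloud at peak demand becomes a deployment decision rather than a code change.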
To fill the parallel programming piece of the puzzle, Microsoft has Visual Studio 2010, which comes with support for things like multicore/manycore coding and MPI-aware debugging, profiling and runtime analysis. In the future, they will integrate support for GPU computing -- there's already a beta plug-in for NVIDIA's parallel Nsight -- and extend the programming model to support a distributed runtime environment for clusters and clouds.
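The multicore coding style these tools target is the data-parallel loop: an index range split into chunks that run concurrently on separate cores. As a rough sketch of the idiom (not the Visual Studio 2010 API — the `parallel_for` name here is only borrowed from that era's parallel libraries), in Python:

```python
import threading

def parallel_for(n, body, workers=4):
    """Toy data-parallel loop: split the index range [0, n) into
    contiguous chunks and run each chunk on its own thread."""
    results = [None] * n

    def run_chunk(lo, hi):
        for i in range(lo, hi):
            results[i] = body(i)

    chunk = (n + workers - 1) // workers
    threads = [
        threading.Thread(target=run_chunk,
                         args=(w * chunk, min((w + 1) * chunk, n)))
        for w in range(workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Evaluate a kernel at 10 grid points, 4 chunks at a time.
print(parallel_for(10, lambda i: i * i))  # [0, 1, 4, 9, ..., 81]
```

Debugging and profiling exactly this kind of decomposition — which chunk ran where, and which thread is waiting on which — is what the MPI-aware and concurrency-aware tooling in Visual Studio 2010 is meant to make visible.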
The third focus for the new group will be on tools and applications for technical domain specialists. Faenov says they are seeing significant demand from customers to be able to handle large-scale datasets and to create and visualize the models interactively. Since these tools are aimed at the technical end user rather than the professional software engineer, the environments must be high-level yet rich in mathematical abstractions. Microsoft already has some of these tools in its current stable of offerings (Excel and Microsoft SQL, for example), but more may be on the way. And all of the tools will be designed to work seamlessly across cluster and cloud platforms.
Microsoft has set up a Web site to explain its technical computing initiative. Currently, the site is mostly an infomercial for the new group (with some interesting commentaries from HPC movers and shakers), but eventually the company hopes to turn it into an ecosystem hub that attracts industry practitioners and academics across the community.
This is all about the future, though. Microsoft's announcement this week was the vision, not the product lineup. Becoming a technical computing superpower is going to take time. Faenov says Microsoft will begin laying out its roadmap and offering up some product details over the next few months.