HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on HPC matters.
August 14, 2008
If one were to categorize the enterprise software market as mature, robust, innovative -- and definitely 21st century -- as it races headlong into the cloud, how would one categorize the HPC software market? Not to throw stones, but you could easily place it circa 1980 and use terms like immature, cottage-like and definitely lacking investment. When you mention HPC to any of the venture guys, they run for the hills.
I can rewind 20-plus years to when I first entered this market, and the conversation has not changed: no software, no money for software, and a continuous "hum" over the impracticalities of building software for an ever more complex set of platforms. The dialogue never seems to evolve beyond parallel programming, languages (Fortran and C -- of course), open source and the vertical application specialists who seem to own the scarcest commodity.
Now mind you, there is an implicit expectation that the government should invest in and drive the initiative. Not only is the conversation focused in the wrong direction, but we are not even asking the right question! In my opinion, the issue is much more complex. Don't get me wrong: these are not absolutes, and we certainly need to get very real and focused on solving the challenges associated with programming models, as well as on building robust middleware and tools. The big question, however, is about productivity, not platforms: how do we make high performance systems fit seamlessly into an overall IT environment? For me the question is "where is the SAP for the rest of us?"
If productivity is the "uber-trend," are we focusing on the right issues? A quick analysis of the numbers tells a very interesting story. The majority of the market consists of smaller clusters -- not extreme levels of parallelism. Yes, multicore will make things more complicated. But the heart of the market (the sweet spot) is in the midrange and in the industrial sector -- broadly defined. This is where the growth is, and this is where customers need help.
What is interesting to me is that after all of these years, we have not found the "common thread" that links all of these segments -- an HPC "ERP" equivalent, if you will. Maybe we haven't looked! Workflows in product development are pretty similar. Supply chains are complex and growing more so, and again have common attributes. Data volumes, and the management and use of that data, are becoming enormously difficult. I can't begin to count the number of users who ask why we don't have some sort of application framework that enables applications to "speak to one another." I dug into my personal archives to find some anecdotal statements from discussions I have had recently.
The list goes on....
My apologies to those who have innovated on this front. PTC (Parametric Technology Corp.) and Accelrys immediately come to mind -- but these attempts are highly verticalized. If you are in the manufacturing segment, you are in luck. Between PTC and Dassault there are solutions. PTC has found the "killer app" in the manufacturing space -- PLM. But why hasn't that translated to other segments?
I've heard all the arguments, from "HPC applications are too niche" and "there is no demand" to the lack of common horizontal applications. But they miss the point that there is a great deal of commonality within and across engineering and scientific workflows. Common requirements exist around data transparency, data analysis, and data management. There are also requirements for integrated applications that provide consistency and efficiency between elements of a workflow, across an organization or beyond the borders of a corporation.
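To make that idea concrete, here is a minimal, purely illustrative sketch of what a common workflow contract might look like -- the sort of framework that would let applications "speak to one another" as described above. Everything in it (the DataProduct and WorkflowStep names, the toy MeshGenerator and CfdSolver wrappers) is a hypothetical invention for illustration, not any vendor's actual API.

    # Hypothetical sketch: a tiny workflow contract that lets independent HPC
    # applications pass data products to one another through a shared interface,
    # rather than through ad hoc scripts. All names here are illustrative.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class DataProduct:
        """A named artifact (mesh, results file, report) handed between steps."""
        name: str
        metadata: Dict[str, str] = field(default_factory=dict)
        payload: object = None


    class WorkflowStep:
        """Common contract every application wrapper implements."""
        def run(self, inputs: List[DataProduct]) -> List[DataProduct]:
            raise NotImplementedError


    class MeshGenerator(WorkflowStep):
        def run(self, inputs):
            geometry = inputs[0]
            mesh = DataProduct("mesh", {"source": geometry.name, "cells": "1.2M"})
            return [mesh]


    class CfdSolver(WorkflowStep):
        def run(self, inputs):
            mesh = inputs[0]
            results = DataProduct("flow_field", {"mesh": mesh.name, "solver": "demo"})
            return [results]


    def run_pipeline(steps: List[WorkflowStep],
                     initial: List[DataProduct]) -> List[DataProduct]:
        """Chain steps so each application 'speaks' to the next through DataProducts."""
        products = initial
        for step in steps:
            products = step.run(products)
        return products


    if __name__ == "__main__":
        cad = DataProduct("wing_geometry", {"format": "STEP"})
        final = run_pipeline([MeshGenerator(), CfdSolver()], [cad])
        print(final[0].name, final[0].metadata)

The point of even this toy version is that once every application step honors the same small contract, the workflow -- and the data moving through it -- becomes something an organization can track, reuse and extend across segments, rather than a pile of one-off scripts.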
Maybe it is a matter of timing. HPC is hot right now and getting hotter every day. From my vantage point, this screams of opportunity. Someone, please jump in!
Posted by Debra Goldfarb - August 13, 2008 @ 9:00 PM, Pacific Daylight Time