Using computer simulations to design new products has become standard operating procedure at many engineering firms today. Aerospace companies, automakers, and consumer goods manufacturers have been employing HPC for some time. Not everyone is on board though, as the Council on Competitiveness keeps reminding us. But even if every product engineer isn't using HPC in the traditional sense, almost all make use of technical computing on the desktop, either as an end unto itself or as a prelude to larger-scale simulations on an honest-to-God supercomputer.
In truth, there's a false dichotomy between desktop-based HPC and server-based HPC from the widget-maker's point of view. Engineers just want to run their favorite CFD software and get the results back as quickly as possible. Given a choice, though, most would prefer the luxury of a personal workstation to sharing a cluster with others. The good news here is that desktop systems are becoming much more powerful, not only because CPUs are getting faster, but also because GPUs and Cell processors can now be exploited as floating point accelerators.
Even a high-end PC -- the one your teenage son is using -- has a teraflop of performance under the hood. Of course, tapping into that performance for general-purpose computing is still a work in progress. But with software frameworks such as CUDA (for GPUs), Intel's Threading Building Blocks (for multicore CPUs), and RapidMind's Platform (for both) now available, ISVs have a choice of tools for bringing teraflop computing to their desktop customers. In fact, for both software vendors and users, the path to shared memory parallelism on the desktop may be an easier and more economical transition than the path to distributed memory parallelism on HPC clusters.
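For the curious, here's roughly what that shared-memory path looks like. The sketch below uses Threading Building Blocks to spread a simple array update -- the kind of inner loop a simulation code executes millions of times -- across however many cores the desktop happens to have. The kernel and every name in it are my own invention for illustration, not code from any shipping CFD package.

    #include <cstddef>
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"
    #include "tbb/task_scheduler_init.h"

    // Hypothetical inner loop: y[i] += a * x[i] over one field of a grid.
    struct UpdateField {
        float* y; const float* x; float a;
        void operator()(const tbb::blocked_range<std::size_t>& r) const {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                y[i] += a * x[i];   // TBB decides which core runs this chunk
        }
    };

    void update_field(float* y, const float* x, float a, std::size_t n) {
        tbb::task_scheduler_init init;  // required by current TBB releases
        UpdateField body = { y, x, a };
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n), body);
    }

Notice what's missing: the code never asks how many cores are present. The library carves the index range into chunks and farms them out on its own.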
Of course, if you're Boeing you don't have much choice; you're going to need some big iron to do those wind tunnel simulations for your aircraft designs. I think it's safe to say that firms doing cutting-edge engineering will require cutting-edge computing. But for component-makers who need something less than a digital wind tunnel, a teraflop of compute power may be plenty. Keep in mind that 10 years ago, the top supercomputer in the world was a 1 teraflop system.
The real question is this: What's the market for desktop HPC versus server-based HPC for product engineering? That's a tough one to answer, since both the applications and computing performance are moving targets. I suspect computing performance is moving faster than the software, if only because it's much harder for ISVs to modify their code than for OEMs to build faster machines. Indeed, the software vendors would love to get their simulation tools into a framework that automatically scaled with the underlying hardware. But since multicore CPUs and coprocessor acceleration are still relatively new, the ISVs have yet to catch up.
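To make that concrete, here's a hedged sketch of the kind of scaling the ISVs are after, again using TBB as a stand-in. The residual-norm reduction below -- a made-up stand-in for a solver's convergence check, not anyone's actual product code -- runs unchanged on a dual-core laptop or a sixteen-core workstation, because the scheduler, not the application, decides how to split the work.

    #include <cstddef>
    #include "tbb/parallel_reduce.h"
    #include "tbb/blocked_range.h"

    // Hypothetical convergence check: sum of squared differences between
    // two solver iterates. Nothing here names a core count; TBB splits
    // the range to fit whatever hardware it finds at run time.
    struct ResidualNorm {
        const float* a; const float* b; double sum;
        ResidualNorm(const float* a_, const float* b_)
            : a(a_), b(b_), sum(0.0) {}
        ResidualNorm(ResidualNorm& other, tbb::split)
            : a(other.a), b(other.b), sum(0.0) {}  // one half of a split range
        void operator()(const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i) {
                double d = a[i] - b[i];
                sum += d * d;
            }
        }
        void join(const ResidualNorm& rhs) { sum += rhs.sum; }  // merge halves
    };

    double residual(const float* a, const float* b, std::size_t n) {
        ResidualNorm body(a, b);
        tbb::parallel_reduce(tbb::blocked_range<std::size_t>(0, n), body);
        return body.sum;
    }

Code written this way simply spreads out further when next year's machine shows up with more cores, which is exactly the property the simulation vendors are after.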
Certainly there is room for more capability in the current crop of engineering design and visualization tools. Despite advances in the power and sophistication of this software, the final step in the design process is almost always a physical mockup and test. Even Boeing and the Formula One automakers still use wind tunnels -- they just need fewer of them than they used to.
In the latest issue of Product Design & Development, some ink is devoted to the topic of simulation software versus physical testing. The consensus is that simulation, while critical, only takes you so far.
Mike Rainone, co-founder of PCDworks, puts in his two cents on the topic, asking: "Why in the world did we spend bazillions of dollars on these (software) programs, if you are going out to the shop to build the thing out of foam?" Even while recognizing that simulation has become an indispensable tool in the designer's arsenal, Rainone says he's not about to tear down the shop. "Regardless of the veracity of the model, most systems defy true 'understanding' until you get physical," he writes; "until you can put it in your hands, turn it inside out, and make it work to see the interdependencies of the parts in action."
For that you are going to need a holodeck, or something very much like it. Fully immersive simulations might seem like science fiction today, but Intel, AMD and NVIDIA have been talking up "visual computing" as the next frontier, so these virtual reality applications (perhaps minus the tactile feedback) are definitely in the cards for the post-2010 world. Product designers may never shut down the shop completely, but I imagine they are going to love the holodeck. And by the way, so will your teenage son.
Posted by Michael Feldman - August 26, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.