October 06, 2010
Big-name manufacturers such as GM, Boeing, DreamWorks and Eli Lilly are using high-performance computing to design better products, sure, but they're also saving money by letting the machines simulate real-world testing. Take GM, for example: to test the safety of its cars, it has to crash them. With some cars costing more than $300,000 (and crash test dummies running up to $100,000), it makes sense to keep the number of physical crash tests to a minimum. That's where supercomputers come in: even with the hefty costs of purchasing and maintaining high-end systems, the investment still makes financial sense.
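To make the economics concrete, here's a back-of-the-envelope comparison in Python. The prototype and dummy prices come from the article; the dummy reuse count, per-simulation cost, and cluster amortization figure are purely illustrative assumptions, not real GM numbers.

```python
# Back-of-envelope cost comparison: physical crash tests vs. simulated runs.
# Prototype and dummy costs are the figures cited in the article; everything
# else (dummy reuse, per-run simulation cost, cluster amortization) is an
# assumption made up for illustration.

PROTOTYPE_COST = 300_000   # article: some cars cost more than $300,000
DUMMY_COST = 100_000       # article: crash test dummies cost up to $100,000
DUMMY_REUSES = 20          # assumption: a dummy survives roughly 20 crashes
SIM_COST_PER_RUN = 1_000   # assumption: cluster time + engineering per run

def physical_test_cost(n_crashes: int) -> float:
    """Cost of n physical crashes: one destroyed prototype per crash,
    plus amortized wear on the dummy."""
    return n_crashes * (PROTOTYPE_COST + DUMMY_COST / DUMMY_REUSES)

def simulated_test_cost(n_runs: int,
                        cluster_amortization: float = 2_000_000) -> float:
    """Cost of n simulated crashes, including an assumed share of the
    supercomputer's purchase and upkeep."""
    return cluster_amortization + n_runs * SIM_COST_PER_RUN

if __name__ == "__main__":
    for n in (10, 50, 100, 500):
        print(f"{n:>4} tests: physical ${physical_test_cost(n):>12,.0f}"
              f"  vs. simulated ${simulated_test_cost(n):>12,.0f}")
```

Under these (invented) assumptions the simulated approach overtakes the physical one after just a handful of tests, which is the article's underlying point.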
How HPC saves money by reining in the need for physical testing is the subject of an article at Bloomberg Businessweek by Rachel King. HPC's reach is extensive, running the gamut from vehicle design to animation to drug development to seismic imaging, as King illustrates:
Engineers at GM use high-performance computers to simulate the new 2011 Chevrolet Cruze, while Boeing (BA) used them in developing the 787 Dreamliner. These machines help animators at DreamWorks Animation SKG (DWA) render movies such as Shrek and Kung Fu Panda, while Eli Lilly & Co. (LLY) scientists use them to research new pharmaceuticals. Chevron (CVX) used high-performance computing to do seismic imaging that led to the discovery of new reservoirs of oil in the Gulf of Mexico and Speedo International took advantage of it to model the swimsuit Michael Phelps wore at the 2008 Olympics.
Experts cited in the article call the technology "game-changing," and say that it "increases safety and overall vehicle performance."
Every technology has its drawbacks, however, and supercomputing is no exception. The complexity of the software can create barriers to use, and keeping all those processors cool raises both cost and environmental concerns.
Virtual testing can never completely obviate the need for physical testing, but together the two methods work synergistically to advance the design and manufacturing processes, with the virtual testing pointing the way and the physical testing providing the verification.
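As a rough illustration of that division of labor, the sketch below explores candidate designs in simulation and reserves a single physical test for verification. The functions run_simulation and run_physical_crash are hypothetical stand-ins, not any real crash-analysis API, and the toy scoring model is invented for the example.

```python
# Minimal sketch of "virtual testing points the way, physical testing
# provides the verification." Both functions are hypothetical placeholders.

def run_simulation(design: dict) -> float:
    """Assumed surrogate for a crash simulation; returns a safety score."""
    # Toy model: thicker panels score better, capped at 1.0.
    thickness = design["panel_thickness_mm"]
    return min(thickness / 3.0, 1.0)

def run_physical_crash(design: dict) -> float:
    """Stand-in for the one expensive physical test used to verify."""
    return run_simulation(design) * 0.97  # assume the sim is slightly optimistic

# Explore many candidate designs cheaply in simulation...
candidates = [{"panel_thickness_mm": t} for t in (1.5, 2.0, 2.5, 3.0)]
best = max(candidates, key=run_simulation)

# ...then spend one prototype confirming the winner.
print("best design:", best, "verified score:", run_physical_crash(best))
```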
Full story at Bloomberg Businessweek