December 01, 2006
Joining high performance computing applications with small- and medium-sized companies is one step closer to reality as the Ohio Supercomputer Center (OSC) and the Edison Welding Institute (EWI) announced a partnership agreement today. As part of its innovative Blue Collar Computing initiative, OSC will provide remote portal access to HPC systems and software for EWI welding applications -- a tremendous cost-saving resource that will reach engineers at over 200 companies.
Welding involves the complex interactions of a large number of physical processes, and integrated numerical simulation tools are needed to improve the performance of welded structures. Through OSC's HPC application interface, engineers will easily be able to input product dimensions, welding process parameters and other specifications to conduct complete online simulations of welding procedures and determine the strength and viability of their prototypes.
"This is a real breakthrough for our clients that lack the fundamental data and computing horsepower needed to develop digital simulations," said Henry Cialone, president and CEO of EWI. "This new interface will help manufacturing engineers eliminate the endless trial and error of physical prototypes and allow them to test bolder design ideas in weld and joining models."
With over 200 member companies in the U.S., EWI is the leading engineering organization in North America dedicated to advancing and applying materials joining technology to improve manufacturing competitiveness. The OSC simulation tools will be available to all EWI member companies and other customers. Initial deployments will provide online, automated alternatives to the simulation services that EWI engineering staff currently provide to member companies.
Most manufacturing and design engineers do not have the necessary data and computing horsepower to develop tools that easily simulate the complex physical processes of metals and polymers. Integrating numerical simulation tools into the process helps to improve performance and reduce the cost of materials in new welded structures. Blue Collar Computing solutions will increase manufacturing competitiveness by lowering the barriers to using HPC tools through web portals. For example, these tools will accelerate problem solving and product and process development by allowing the engineer to quickly run "what if" scenario calculations.
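The article does not describe the portal's actual models or API, but the flavor of a "what if" parameter sweep can be illustrated with a small sketch. The snippet below is purely hypothetical code (the `WeldParams` class and `sweep` function are invented for illustration); the only domain content is the standard arc-welding heat-input formula Q = η·V·I·60 / (1000·v), in kJ/mm, which is the kind of quantity a welding engineer would vary travel speed against.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WeldParams:
    voltage: float        # arc voltage, in volts
    current: float        # welding current, in amperes
    travel_speed: float   # torch travel speed, in mm/min
    efficiency: float = 0.8  # arc efficiency factor (process-dependent)

def heat_input(p: WeldParams) -> float:
    """Heat input in kJ/mm: Q = eta * V * I * 60 / (1000 * v)."""
    return p.efficiency * p.voltage * p.current * 60.0 / (1000.0 * p.travel_speed)

def sweep_travel_speed(base: WeldParams, speeds):
    """A 'what if' sweep: recompute heat input across candidate travel speeds."""
    return {v: heat_input(replace(base, travel_speed=v)) for v in speeds}

base = WeldParams(voltage=25.0, current=200.0, travel_speed=300.0)
results = sweep_travel_speed(base, [200.0, 300.0, 400.0])
```

In a real portal deployment, each scenario would presumably be submitted as an HPC simulation job rather than evaluated by a closed-form formula; the point here is only the sweep pattern an engineer would drive from the web interface.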
The OSC-EWI partnership is the latest success story of the Blue Collar Computing initiative -- a cooperative effort to help small- and medium-sized companies gain access to supercomputing technology at a more affordable cost. With improved software development, training, outreach and partnerships, supercomputing can become a reality on a smaller scale for industrial clients.
Large companies have long seen competitive advantages from such HPC simulations. General Motors is using parallel computing to simulate the crash testing of automobiles and claims that it can reduce the number of full-size vehicle crash tests, at a cost of $500,000 per test, by more than 85 percent. Similarly, supercomputing simulations have cut the share of Goodyear's spending that goes to physical tire prototypes from 40 percent to 15 percent.
"Just as the ATM has replaced the bank teller, desktop supercomputing simulations will soon replace physical testing labs," said Stan Ahalt, executive director of OSC. "Blue Collar Computing strives to help small- and medium-sized companies build better products, cut costs of production, quickly analyze and solve assembly line problems and streamline overall efficiency."