December 19, 2011
TROY, Mich., Dec. 16 -- Altair Engineering, Inc., today announced the opening of an exceptionally high-powered data center in Troy, Mich., this month to house and manage its growing HyperWorks On-Demand cloud-based computer-aided engineering (CAE) solution for customers who rely on high-performance computing.
HyperWorks On-Demand (HWOD) is a high-performance computing (HPC) solution for design innovation in the cloud. HWOD leverages Altair's patented licensing system, providing access to Altair's HyperWorks CAE software and a modern, scalable HPC infrastructure through a secure and efficient Web-based platform.
The data center enables Altair to scale HWOD up to more than 10,000 cores for computer simulation projects. In fact, it can support as many as 150 large-scale engineering solver jobs running simultaneously, employing Altair's solvers RADIOSS, OptiStruct and AcuSolve along with other tools in the HyperWorks family of simulation software.
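Those headline figures give a rough sense of per-job capacity. The even split below is purely illustrative (an HPC scheduler allocates cores per job as requested, not uniformly), but it shows the scale involved:

```python
# Back-of-the-envelope check of the figures above: 10,000 cores
# shared across 150 simultaneous solver jobs. The even split is an
# illustrative assumption; real schedulers allocate per-job requests.
TOTAL_CORES = 10_000
CONCURRENT_JOBS = 150

cores_per_job = TOTAL_CORES // CONCURRENT_JOBS
print(f"~{cores_per_job} cores available per job on average")  # → ~66 cores
```

In practice individual solver runs may use far more or fewer cores; the point is that the facility can sustain many large jobs side by side.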
"Companies often turn to HyperWorks On-Demand because they have outgrown their internal capacity or do not have the resources internally to manage high-performance computing equipment," said Altair Chief Information Officer Martin Nichols. "HyperWorks On-Demand provides all our HyperWorks products as a cloud service, and this data center allows us to scale up to provide much larger on-demand clusters for our customers."
Unlike most other cloud services, HWOD provides true HPC for its end users. This means that customer simulations – which can require substantial resources and run for several days – complete as quickly as the hardware allows.
HPC requires extensive infrastructure beyond the computers themselves. Backup generators, uninterruptible power supplies, water cooling, systems administration, operations staff and strong security are all necessary elements that are often out of reach for small and midsized engineering departments. By using the HWOD services hosted in the new data center, companies gain access to a robust, resilient and secure HPC environment. Thanks to its domain expertise and sophisticated cloud stack, Altair can also provide turnkey and configurable private cloud solutions, offering all the efficiency and flexibility of HWOD entirely within the customer's firewall.
The data center is a scalable, modular facility that can be easily extended internally and can be interconnected with similar adjacent modular facilities in the future.
"Our HyperWorks On-Demand data center essentially fits the power of an entire building of high-performance computers into a single room, making it feasible now for medium to large-sized organizations to access substantial computing resources via Altair's private cloud," Nichols noted. "The compute-power density of this center is phenomenal, far higher than that of a standard commercial data center. Altair's is much more similar to a scientific super-computing installation."
The data center is situated about 3.5 miles from Altair's headquarters and incorporates extensive physical and cyber security measures. It is monitored inside and outside by video surveillance, night-vision cameras and sensors. Firewall devices protect data both entering and exiting the facility's computing equipment.
Construction of the data center was completed this week, and Altair is expanding its current HWOD capabilities and relocating them to the new facility within the next month. The expanded HWOD will be fully up and running in the new data center by early 2012.
HyperWorks, A Platform for Innovation, is a comprehensive simulation solution for rapid design exploration and decision-making. HyperWorks provides a tightly integrated suite of best-in-class tools for all facets of the simulation process: modeling, analysis, optimization, visualization, reporting and collaborative knowledge management. Leveraging a revolutionary pay-per-usage licensing model, HyperWorks delivers maximum value and flexibility for customers worldwide. For more information, please visit: www.altairhyperworks.com.
About Altair Engineering
Altair Engineering, Inc. empowers client innovation and decision-making through technology that optimizes the analysis, management and visualization of business and engineering information. Privately held, with more than 1,500 employees, Altair has offices throughout North America, South America, Europe and Asia/Pacific. With a 26-year track record for high-end software and consulting services for engineering, computing and enterprise analytics, Altair consistently delivers a competitive advantage to customers in a broad range of industries. To learn more, please visit www.altair.com and www.simulatetoinnovate.com.
Source: Altair Engineering, Inc.