September 09, 2008
Includes support for non-raised floor datacenter modeling, improved graphics, equipment libraries and user-productivity enhancements
CONCORD, N.H., Sept. 9 -- Applied Math Modeling Inc. announced today the release of CoolSim 3.1 for modeling the thermal environment of datacenters. CoolSim 3.1 adds many new features for the rapid creation and analysis of datacenter models, including the ability to model non-raised floor scenarios. CoolSim is a cost-effective, comprehensive, easy-to-use tool for fast datacenter thermal analysis.
With CoolSim 3.1, users quickly determine such things as the maximum equipment loading for a given datacenter, the optimal placement of cooling and/or thermal loads, the effect of failed cooling units, and the opportunity to reduce operating costs by reducing cooling capacity. Features of the new CoolSim 3.1 release include:
Graphical depictions of airflow pathlines, iso-surfaces of temperature, and contours of pressure and temperature, with interactive 3D control to pan, zoom, and rotate the view.
Paul Bemis, CEO of Applied Math Modeling said: "With the new release of CoolSim 3.1, users can create and analyze datacenter design alternatives faster and more accurately than ever before. The new features added to CoolSim 3.1 make it the easiest and most cost-effective datacenter thermal analysis tool on the market, providing insight that can be obtained in no other manner."
CoolSim has an easy-to-use graphical interface that enables users to quickly create a model of their datacenter. The model is then automatically submitted to a hosted high-performance computing (HPC) cluster for processing using ANSYS/Fluent computational fluid dynamics (CFD) technology. Once the simulation is complete, HTML output reports and 3D visual images are produced and sent to the user. This mechanism allows users to perform multiple "what-if" studies of their datacenters to determine the optimal placement of existing equipment, or the effect of adding new equipment to an existing room.
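The hosted workflow described above follows a common submit-and-poll SaaS pattern: upload a model, let the remote HPC cluster run the CFD simulation, then retrieve the generated reports. The sketch below illustrates that general pattern only; the class, method names, and job statuses are hypothetical and are not CoolSim's actual API.

```python
import time


class HostedSolverClient:
    """Hypothetical client for a hosted CFD job queue (illustrative only;
    CoolSim's real submission mechanism sits behind its graphical interface)."""

    def __init__(self):
        self._jobs = {}
        self._next_id = 0

    def submit(self, model_name):
        """Queue a model for processing; a real client would upload it here."""
        job_id = self._next_id
        self._next_id += 1
        self._jobs[job_id] = {"model": model_name, "status": "queued"}
        return job_id

    def poll(self, job_id):
        """Check job status; here the 'cluster' finishes on the first poll."""
        job = self._jobs[job_id]
        job["status"] = "complete"
        job["report"] = f"HTML report and 3D images for {job['model']}"
        return job["status"]

    def fetch_report(self, job_id):
        """Retrieve the results produced by the hosted cluster."""
        return self._jobs[job_id]["report"]


client = HostedSolverClient()
job = client.submit("datacenter-room-A")
while client.poll(job) != "complete":
    time.sleep(1)  # a real client would wait between polls
print(client.fetch_report(job))
```

Because each "what-if" study is an independent job, several design alternatives can be submitted in parallel and compared once their reports come back.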
"At Applied Math Modeling, we believe the application of 3D CFD technology should be made easy to learn, easy to use, and easy to remember. CoolSim 3.1 delivers on this goal for datacenter thermal modeling. The use of CFD for modeling datacenters is a 'best practice' that can now be employed by anyone with a basic understanding of datacenter cooling and design."
"The latest version of CoolSim is the best yet," said beta tester Chris Ames, Ames Consulting Services. "I have tried it on both new projects and previously designed rooms, and it performs extremely well, even when I throw it some curves. This is another great step forward with a great product. This tool is exactly what I need to help my customers improve datacenter energy efficiency with regard to precision cooling and advanced heat removal. Keep up the good work!"
CoolSim 3.1 will be demonstrated online at a free webinar scheduled for Wednesday, Oct. 22 at 2 p.m. ET. For more information and to register, visit http://tinyurl.com/coolsimdemo.
About Applied Math Modeling
Applied Math Modeling Inc. develops and supports engineering simulation applications for specific target markets. As a strategic "value added" partner to ANSYS, Applied Math Modeling develops unique graphical user interfaces (GUIs) for target markets that require specific modeling tasks, driven by the rich set of industry-proven ANSYS simulation engines. These applications are then delivered to the market using a hosted "Software as a Service" (SaaS) model that is particularly well suited for periodic or occasional users. This unique approach reduces end-user IT complexity and cost. Visit www.koolsim.com for more information or contact firstname.lastname@example.org.
Source: Applied Math Modeling Inc.