December 9, 2008
If you are familiar with current approaches to programming accelerators, you are either discomforted by the complexities or excited by the level of control you get. Can we come up with a different model of GPU and accelerator programming -- a model that allows HPC programmers to focus on domain science instead of on computer science? Read more…
November 21, 2008
The "cloud" model of exporting user workload and services to remote, distributed and virtual environments is emerging as a powerful computing paradigm. Yet, one domain that challenges this model in its characteristics and needs is high performance computing. Read more…
November 21, 2008
OpenCL (the Open Computing Language) is under development by the Khronos Group as an open, royalty-free standard for parallel programming of CPUs, GPUs, the Cell and other parallel processors. An update of the effort was presented at SC08 on Nov. 17. Read more…
November 20, 2008
John West had a great conversation with Matt Reilly, chief engineer for SiCortex. Matt talked about what's going on with the SiCortex's low power, high density compute platform, and then he discussed the need for the computer science curriculum to include parallelism. Read more…
November 20, 2008
InfiniBand has been a comfort zone for those tightly-coupled HPC applications that can't live without their addiction to low latency and high speed. If your application is a science experiment with good funding and no firm schedule, that's OK. If your application involves business, deadlines, and ROI, it's time to break out of that comfort zone and acquaint yourself with 10 Gigabit Ethernet. Read more…
November 19, 2008
A team led by Thomas Schulthess of Oak Ridge National Laboratory has broken the petaflop barrier with a supercomputing application likely to accelerate the revolution in magnetic storage. Using ORNL's upgraded Cray XT Jaguar supercomputer, the team was able to achieve a sustained performance of 1.05 petaflops for an application that simulates the behavior of electron systems. Read more…
November 19, 2008
Researchers at Tohoku University in Sendai, north-eastern Japan, announced on Wednesday that they had broken a batch of performance records on their NEC SX-9 supercomputer, as measured by the HPC Challenge Benchmark test. Hiroaki Kobayashi, director of the university's Cyberscience Center, said the SX-9 had achieved the highest marks ever in 19 of the 28 areas the test evaluates, covering processing, memory bandwidth and network bandwidth. Read more…
November 19, 2008
Oak Ridge National Laboratory recently unveiled the first petascale system dedicated to scientific research, a Cray XT machine with a theoretical peak performance of 1.64 petaflops. We talked with Doug Kothe, director of science at ORNL's National Center for Computational Sciences, about the challenges of and potential breakthroughs in science now possible with this built-for-science petascale system. Read more…
Expansion in digital infrastructure capacity is inevitable. At the same time, growing climate-change awareness is making sustainability a mandatory part of how organizations operate. As computing workloads such as AI and HPC continue to surge, so does energy consumption, creating real environmental costs. IT departments have a crucial role in meeting this challenge: they can drive sustainable practices by championing the adoption of newer technologies and processes that help mitigate the effects of climate change.
While buying more sustainable IT solutions is an option, partnering with IT solutions providers, such as Lenovo and Intel, that are committed to sustainability and to helping customers execute sustainability strategies is likely to be more impactful.
Learn how Lenovo and Intel, through their partnership, are strongly positioned to address this need with their innovations driving energy efficiency and environmental stewardship.
Data centers are experiencing increasing power consumption, space constraints and cooling demands due to the unprecedented computing power required by today's chips and servers. HVAC cooling systems consume approximately 40% of a data center's electricity. These systems traditionally use air conditioning, air handling and fans to cool the data center facility and IT equipment, resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooling (DLC) systems to improve cooling efficiency, thereby lowering their power usage effectiveness (PUE), operating expenses (OPEX) and carbon footprint.
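To make the PUE claim concrete, here is a minimal sketch of the standard PUE calculation (total facility energy divided by IT equipment energy) and how a cooling overhead on the order of the 40% figure above shows up in the metric. The specific kWh figures are illustrative assumptions, not data from the paper.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    An ideal facility approaches 1.0; cooling and other overhead push it higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: IT gear draws 1000 kWh. If cooling and other overhead
# add 800 kWh, the facility draws 1800 kWh total (cooling ~40% of facility power).
print(round(pue(1800.0, 1000.0), 2))  # 1.8

# Cutting cooling overhead roughly in half (e.g., via direct liquid cooling)
# under the same assumptions would drop the overhead to 400 kWh:
print(round(pue(1400.0, 1000.0), 2))  # 1.4
```

The point of the metric: lowering cooling energy lowers only the numerator, so every kWh saved on cooling translates directly into a lower PUE.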
This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT's DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world's most demanding computing environments, and its end-to-end solutions meet the rising demands for both cooling capacity and energy efficiency.
© 2024 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.