Industry Compute Cluster Enables Innovative Research and Development for Health and Life Sciences

February 28, 2022

University-purchased High Performance Computing (HPC) systems are typically funded to support principal investigators and their teams. But in 2014, the Center for Computational Research (CCR) at the University at Buffalo (UB) created a dedicated cluster to give businesses of Western New York access to large-scale computing resources they would otherwise have to build on their own or obtain through public cloud services... Read more…

Alternative Supercomputing or How to Misuse a Computer

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3's accelerated design, Linux capability and low price point... Read more…

Univa Expands Grid Engine Support for Docker and Intel KNL

May 31, 2016

Univa today announced general availability of Grid Engine 8.4.0. The latest version of Grid Engine includes many new features, including expanded support for Docker containers as well as “preview support” for Intel’s latest Xeon Phi processor, code-named Knights Landing. Univa also reports fixing more than 80 prior issues. Leading the container enhancements, users can now automatically dispatch and run jobs in Docker containers from a user-specified Docker image, as sketched below. Read more…
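As a hedged illustration of the new container dispatch path, the sketch below submits a Grid Engine job that requests a Docker container via qsub, driven from Python. The docker and docker_images resource names follow Univa's 8.4.0 announcement, but the image, command, and availability of those resources on any given cluster are assumptions, not confirmed details.

    import subprocess

    # Sketch: submit a Grid Engine job to run inside a Docker container.
    # The -l docker / docker_images resource requests are as described in
    # Univa's Grid Engine 8.4.0 announcement; the image and command here
    # are illustrative placeholders.
    cmd = [
        "qsub",
        "-l", "docker=true",                    # ask for container execution
        "-l", "docker_images=*ubuntu:16.04*",   # match a user-specified image
        "-b", "y",                              # submit a binary, not a script
        "/bin/hostname",                        # command to run in the container
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())  # e.g. "Your job 101 ... has been submitted"

On a cluster where these resources are not configured, qsub would reject the request, so the resource names should be verified against the site's Grid Engine configuration before relying on this pattern.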

Penguin Computing Mines Commodity Gold

January 6, 2016

We recently sat down with Fremont, Calif.-based Penguin Computing to learn about the Linux cluster specialist’s unique approach to the HPC and hyperscale markets. Read more…

NNSA Taps Penguin Computing for 7-9 Petaflops ‘Open’ HPC Cluster

October 21, 2015

Per a newly-inked contract with Penguin Computing, the Department of Energy’s National Nuclear Security Administration (NNSA) is set to receive its third… Read more…

Cray Details Its Cluster Supercomputing Strategy

July 28, 2015

When iconic American supercomputer maker Cray purchased 20-year-old HPC cluster vendor Appro in late 2012, Cray CEO Peter Ungaro referred to Appro's principal… Read more…

Purdue Lights Up Eighth Cluster in Eight Years

May 12, 2015

At Purdue, installing cluster computers is a tradition that inspires teamwork. The university’s central computing organization, Information Technology at Purdue… Read more…

Cluster Lifecycle Management: Capacity Planning and Reporting

May 8, 2015

In the previous Cluster Lifecycle Management column, I discussed the best practices for proper care and feeding of your cluster to keep it running smoothly… Read more…


Whitepaper

From Hallucination to Reality

As Federal agencies navigate an increasingly complex and data-driven world, learning how to get the most out of high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) technologies is imperative to their mission. These technologies can significantly improve efficiency and effectiveness and drive innovation to better serve citizens' needs. Implementing HPC and AI solutions in government brings challenges and pain points, such as fragmented datasets, computational hurdles when training ML models, and the ethical implications of AI-driven decision-making. To address them, CTG Federal, Dell Technologies, and NVIDIA have united to unlock new possibilities and seamlessly integrate HPC capabilities into existing enterprise architectures. This integration empowers organizations to glean actionable insights, improve decision-making, and gain a competitive edge across domains ranging from supply chain optimization to financial modeling and beyond.

Download Now

Sponsored by CTG Federal

Whitepaper

Why IT Must Have an Influential Role in Strategic Decisions About Sustainability

Data centers are experiencing increasing power consumption, space constraints and cooling demands due to the unprecedented computing power required by today’s chips and servers. HVAC cooling systems consume approximately 40% of a data center’s electricity. These systems traditionally use air conditioning, air handling and fans to cool the data center facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooling (DLC) systems to improve cooling efficiency, thus lowering their power usage effectiveness (PUE), operating expenses (OPEX) and carbon footprint.
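Since the paper's argument turns on lowering PUE, a quick worked example may help: PUE is simply total facility power divided by IT power. The figures below are hypothetical, chosen only to be consistent with the roughly 40% cooling share cited above.

    def pue(it_power_kw: float, overhead_kw: float) -> float:
        """Power usage effectiveness: total facility power / IT power."""
        return (it_power_kw + overhead_kw) / it_power_kw

    # Hypothetical 1 MW IT load. If HVAC cooling is ~40% of total facility
    # electricity, overhead is roughly 670 kW and PUE is about 1.67:
    print(round(pue(1000.0, 670.0), 2))  # -> 1.67

    # If direct liquid cooling cut that overhead to, say, 150 kW,
    # the same IT load would yield a much lower PUE:
    print(round(pue(1000.0, 150.0), 2))  # -> 1.15

A PUE of 1.0 would mean every watt goes to IT equipment, which is why shrinking cooling overhead is the most direct lever DLC offers.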

This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT’s DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world’s most demanding computing environments. CoolIT’s end-to-end solutions meet the rising demands for both cooling capacity and energy efficiency.

Download Now

Sponsored by Lenovo
