Bank of Italy Converges HPC and Enterprise Office with New Cluster

October 10, 2016

The democratization of high performance computing (HPC) and the converged datacenter have been recurring topics in the IT community of late. In a converged datacenter, HPC, high performance data analytics (big data/Hadoop workloads), and enterprise office applications all run on a common clustered compute architecture with a single file system and network. Read more…

Nielsen and Intel Migrate HPC Efficiency and Data Analytics to Big Data

May 16, 2016

Nielsen has collaborated with Intel to migrate important pieces of HPC technology into Nielsen's big-data analytic workflows, including MPI, mature numerical libraries from NAG (the Numerical Algorithms Group), and custom C++ analytic codes. This complementary hybrid approach integrates the benefits of Hadoop data management and workflow scheduling with an extensive pool of HPC tools and C/C++ capabilities for analytic applications. In particular, the use of MPI reduces latency, permits reuse of the Hadoop servers, and co-locates the MPI applications with the data; a sketch of the pattern appears below. Read more…
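To make the data-locality idea concrete, here is a minimal, hypothetical sketch of the pattern the article describes: each MPI rank runs a local C++ kernel over its node-local data partition, then a single collective reduction combines the partial results in place of a Hadoop shuffle. The synthetic data and the trivial sum kernel are illustrative assumptions, not Nielsen's actual code.

```cpp
// Hybrid Hadoop + MPI pattern (illustrative sketch, not Nielsen's code):
// each rank processes the data partition stored on its own node, then one
// low-latency collective reduction replaces a disk-backed Hadoop shuffle.
#include <mpi.h>
#include <cstdio>
#include <numeric>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Stand-in for reading the HDFS block that lives on this node.
    std::vector<double> local(1000, 1.0 * (rank + 1));

    // Local C++ analytic kernel; here simply a partial sum.
    double partial = std::accumulate(local.begin(), local.end(), 0.0);

    // One collective call combines all partial results on rank 0.
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("global sum across %d ranks: %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicxx and launched with mpirun across the existing Hadoop nodes, the single MPI_Reduce call is where the latency advantage over a disk-backed shuffle phase comes from.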

Hadoop and Spark Get RADICAL at SC15

November 13, 2015

The rapid maturation of the Apache Hadoop ecosystem has caught the eyes of HPC professionals who are eager to take advantage of emerging big data tools, such as… Read more…

Stepping Up to the Life Science Storage System Challenge

October 5, 2015

Storage and data management have become perhaps the most challenging computational bottlenecks in life sciences (LS) research. The volume and diversity of data… Read more…

New Models for Research, Part III – Embracing the Big Data Stack

March 30, 2015

In the third installment of a four-part series, Jay Etchings, director of operations for research computing and senior HPC architect at Arizona State University, explores the brave new world of open big data alternatives to traditional bare-metal HPC+MPI+InfiniBand for research computing. Read more…

The Best of HPC in 2014

January 27, 2015

As we turn the page to 2015, we're taking a look back at the top stories from 2014 to reflect on just how far the fastest machines in the world (and the people… Read more…

Tulane Accelerates Discovery with Hybrid Supercomputer

December 16, 2014

The rich culture and distinctive charm of the city of New Orleans served as the backdrop for this year's annual Supercomputing Conference (SC14). If you haven't… Read more…

Cray Launches Hadoop into HPC Airspace

October 15, 2014

There has been little doubt that the convergence of traditional high performance computing with advanced analytics has been steadily underway, fed in part by a… Read more…

Whitepaper

From Hallucination to Reality

As Federal agencies navigate an increasingly complex and data-driven world, learning how to get the most out of high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) technologies is imperative to their mission. These technologies can significantly improve efficiency and effectiveness and drive innovation to better serve citizens' needs. Implementing HPC and AI solutions in government brings challenges such as fragmented datasets, computational hurdles in training ML models, and the ethical implications of AI-driven decision-making. To address them, CTG Federal, Dell Technologies, and NVIDIA have united to unlock new possibilities and seamlessly integrate HPC capabilities into existing enterprise architectures. This integration empowers organizations to glean actionable insights, improve decision-making, and gain a competitive edge across domains ranging from supply chain optimization to financial modeling and beyond.

Download Now

Sponsored by CTG Federal

Whitepaper

Why IT Must Have an Influential Role in Strategic Decisions About Sustainability

Data centers are experiencing increasing power consumption, space constraints, and cooling demands due to the unprecedented computing power required by today's chips and servers. HVAC cooling systems consume approximately 40% of a data center's electricity. These systems traditionally use air conditioning, air handling, and fans to cool the data center facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooling (DLC) systems to improve cooling efficiency, thereby lowering their power usage effectiveness (PUE), operating expenses (OPEX), and carbon footprint.
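For readers unfamiliar with the metric, PUE is the ratio of total facility power to the power actually delivered to IT equipment, so lower is better and 1.0 is the ideal. The numbers below are an illustrative worked example, not figures from the paper:

```latex
% PUE definition and a hypothetical worked example
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}},
\qquad \text{e.g.}\quad
\mathrm{PUE} = \frac{1.5\,\text{MW}}{1.0\,\text{MW}} = 1.5
```

Under this definition, cutting cooling overhead, the dominant non-IT load per the roughly 40% figure above, is the most direct way DLC pulls PUE toward 1.0.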

This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT's DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world's most demanding computing environments. CoolIT's end-to-end solutions meet the rising demands for both cooling capacity and energy efficiency.

Download Now

Sponsored by Lenovo
