Ubiquitous Parallelism and the Classroom

November 20, 2009

The simplest statement of the case is that we need ubiquitous parallelism in the classroom. In the near future, most electronic devices will have multiple cores and will benefit greatly from parallel programming. The low-hanging fruit is, of course, the student's laptop, and helping the student make full use of it. Read more…

Exascale Expectations

November 20, 2009

Supercomputer performance has grown at a fairly constant rate of a 1,000-fold increase per decade. Will the sprint to exascale be able to hold that pace? Read more…

Reconfigurable Computing Research Pushes Forward

November 20, 2009

Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. We got the opportunity to ask Dr. Alan George, who runs the NSF Center for High-Performance Reconfigurable Computing, about the work going on there and what he thinks the technology can offer to high performance computing users. Read more…

Jaguar Scales TOP500

November 19, 2009

AMD's John Fruehe and ORNL's Buddy Bland talk about the significance of Jaguar capturing the top spot in the supercomputing world and what that means for the most demanding science applications. Read more…

Parallel NFS Is the Future Standard to Manage Petabyte-Level Growth

November 19, 2009

IT professionals are constantly being challenged to manage exponential data growth that has reached petabyte levels. As pressure increases on IT to deliver ever-higher levels of productivity and efficiency, a new-generation file system standard will be required to maximize utilization of powerful server and cluster resources while minimizing management overhead. Read more…

Mitrionics Looks Beyond FPGAs

November 18, 2009

Mitrionics has begun work on an experimental compiler that aims to make parallel programming architecture-agnostic. We asked Stefan Möhl, Mitrionics' chief science officer and co-founder, what's behind the new technology and what prompted the decision to expand beyond their FPGA roots. Read more…

Intel CTO Tells HPC Crowd to Get a Second Life

November 17, 2009

The opening address of the Supercomputing Conference had a surreal quality to it in more ways than one. Between talking avatars, physics-simulated sound, and a Larrabee demo running HPC-type codes, it was hard to separate reality from fantasy. Read more…

Déjà Vu All Over Again

November 16, 2009

Never short on opinions, especially when it comes to high performance computing, Convey Computer Co-Founder Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way forward. Read more…

Click Here for More Headlines

Whitepaper

Streamlining AI Data Management

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now

Sponsored by DDN

Whitepaper

Taking research further with extraordinary compute power and efficiency

Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, and is engaged in a broad range of disciplines in natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.

KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.

Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.

Download Now

Sponsored by Lenovo
