November 18, 2011
The weekly wrap-up of SC11 highlights the rise of big data, the latest Green500 results, and winners and losers from the show. Read more…
November 16, 2011
There are a number of young companies at SC11 this week debuting novel technologies. One of them, Advanced Cluster Systems, recently launched its first software product, with the rather bold name of Supercomputing Engine Technology. It promises one of the Holy Grails of HPC: to turn sequential applications into parallel ones. Read more…
November 16, 2011
Although women make up the majority of the United States labor force, earn 60 percent of college degrees in developed countries, account for most internet users, and start more than half of the new companies created each year in the US, they have made surprisingly few inroads into high performance computing. On Thursday at SC11, two sessions will give HPC community members a chance to discuss these issues and exchange ideas on how to change the status quo. Read more…
November 16, 2011
At SC11 in Seattle, Intel showed off early silicon of "Knights Corner," the codename for the first commercial product based on its Many Integrated Core (MIC) architecture. The demonstration was performed for the benefit of reporters and analysts, who got to see the new chip in action at a press briefing here on Tuesday afternoon. Read more…
November 16, 2011
Conference kicks off with news about NCSA's Blue Waters supercomputer project, the TOP500, and a flurry of processor-related announcements. Read more…
November 15, 2011
This month ACM, the world’s largest educational and scientific computing society, announced the launch of its newest Special Interest Group, SIGHPC. HPCwire caught up with Cherri Pancake, the first Chair of SIGHPC, to get her take on what the group is today, and the role she sees for it in the future of the high performance computing community. Read more…
November 15, 2011
John D’Ambrosia, chair of the Ethernet Alliance weighed in on the focus of the Ethernet Alliance at SC11, expanding on their interoperability goals and describing the overall role of Ethernet technologies in HPC. Read more…
November 14, 2011
For the first time since the TOP500 group began publishing its list of the fastest computers in the world, there was no turnover in the top 10 machines. In fact, the only change at the top was the new record Linpack mark set by the now fully deployed K Computer at RIKEN. Read more…
November 14, 2011
Supercomputer maker SGI has launched its next generation ICE supercomputer, the company's flagship scale-out HPC cluster platform. Using Intel's latest Xeon processors, ICE-X is up to two and a half times as dense and twice as fast as the current ICE 8400 system. Read more…
November 14, 2011
The National Center for Supercomputing Applications has awarded Cray a $188 million contract to complete the NSF-funded Blue Waters supercomputer project at the University of Illinois. An 11.5 petaflops Cray XE6/XK6 hybrid system outfitted with AMD CPUs and NVIDIA GPUs will be deployed next year and become the center's petascale resource for open science and engineering. The much-anticipated deal was announced on Monday, just as the Supercomputing Conference (SC11) in Seattle got underway. Read more…
November 14, 2011
A week after launching the PRIMEHPC FX10, Fujitsu has announced that the first installation of the new system will go to the Information Technology Center at the University of Tokyo. According to the press release, the 4,800-node supercomputer will deliver 1.13 petaflops peak when it boots up in April 2012. Read more…
November 11, 2011
Fujitsu, Bull, DataDirect Networks, and Bright Computing make some big news in the lead up to the Supercomputing Conference in Seattle. Read more…
November 11, 2011
SC11, the world’s greatest yearly supercomputing show, rolls into Seattle this week. To help prepare you for the big week, we have put together a list of the top 10 myths of the phenomenon that is SC. With a little discussion of each, we hope to bring out the good, the bad, the hopes and the realities of SC. And, maybe, along the way we’ll see why SC matters so much to our community. Read more…
October 20, 2011
At SC11 in Seattle, the stage is set for data-intensive computing to steal the show. This year's theme correlates directly to the "big data" trend that is reshaping enterprise and scientific computing. We give an insider's view of some of the top sessions for the big data crowd and a broader sense of how this year's conference is shaping up overall. Read more…
The increasing complexity of electric vehicles results in large and intricate computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they reach their limits when models grow too large or when multiple iterations must be completed on a short timeline, leaving engineers without enough available compute. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations: complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.
Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five recommendations to eliminate bottlenecks and maximize efficiency.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.