April 15, 2010
DEARBORN, Mich., April 15 -- Avetec's High Performance Computing (HPC) Research Division -- the Data Intensive Computing Environment (DICE) -- today presented the second annual DICE Data Intensive Impact Awards, which showcase products and technologies that have enabled progress in HPC data management (data locality, movement, manipulation and integrity) as well as power and cooling efficiency.
In the Future Technology category, DICE selected NVIDIA's Tesla 20-series Graphics Processing Unit (GPU), which the DICE team considers a critical new technology for the HPC space. Compared with the latest quad-core CPUs, Tesla 20-series GPU computing processors deliver equivalent performance at 1/20 the power consumption and 1/10 the cost. More importantly, Tesla GPUs enable high performance computing users to scale their computing resources for significant boosts in performance while staying within tight power and monetary budgets.
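The performance gains described here come from offloading data-parallel work to the GPU, typically through NVIDIA's CUDA programming model used with Tesla-class hardware. The sketch below is an illustration only, not code from the release: it shows the basic offload pattern (copy data to the device, run many lightweight threads over it, copy the result back). The kernel name, array size and launch configuration are arbitrary choices made for the example.

```cuda
// Minimal CUDA vector-add sketch of the data-parallel offload pattern
// that Tesla-class GPUs accelerate. Illustrative only; sizes are arbitrary.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                       // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device transfer
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

    // Copy the result back and check one element
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Built with nvcc (for example, `nvcc vecadd.cu -o vecadd`), this compiles to a host program that dispatches the loop body across thousands of GPU threads; production HPC codes apply the same pattern to far larger and more complex kernels.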
"Over the last two years, the DICE Program has intensely studied how the HPC and Information Technology (IT) communities look at power consumption in data intensive environments, and while there is a desire to be more 'green,' the costs had been somewhat prohibitive until now," said Al Stutz, Avetec chief information officer. "The NVIDIA Tesla 20-series GPU helps to drastically improve performance without the costs -- environmental, capital, storage space and personnel -- typically associated with performance improvements of this magnitude."
"Only GPUs offer the accessibility, performance and scalability required to respond to the increasingly complex scientific challenges the HPC community is being asked to solve today," said Andy Keane, general manager of the Tesla business unit at NVIDIA. "Tesla GPUs are helping reduce the time it takes to do important work from a week to a day and from a day to mere minutes -- this represents truly fundamental change for HPC users."
In the Product category, DICE selected Spectra Logic's T-Finity enterprise tape library. "Our team selected the T-Finity for its versatility in data intensive environments," said Al Stutz, Avetec CIO and DICE team leader. "The product helps with the demanding archiving and backup environments found in the enterprise IT, federal, HPC, and media and entertainment spaces."
T-Finity saves customers up to 30 percent on initial capital investment and 15 to 20 percent on annual recurring operational expenses. The library offers redundancy and scales to more than 45 petabytes in a single library and more than 180 petabytes in a single, unified library complex.
"HPC organizations have always had more than their share of big-data challenges -- from performance to high capacity to long term data access," said Brian Grainger, Spectra Logic's vice president of worldwide sales. "Spectra Logic's T-Finity tape library was built to provide HPC environments with high speed performance, superior capacity and ready access to stored data, along with significant cost savings in both capital investments and operating expenses. Spectra Logic is honored to receive this esteemed recognition from the DICE Team."
About the Winners
NVIDIA invented the first GPU in 1999 and continues to set new standards in visual computing with interactive graphics available on devices ranging from tablets and portable media players to notebooks and workstations. NVIDIA's expertise in programmable GPUs has led to breakthroughs in parallel processing which make supercomputing inexpensive and widely accessible. The company holds more than 1,100 US patents, including ones covering designs and insights fundamental to modern computing.
Spectra Logic provides tape libraries, automated tape backup solutions, disk and deduplication backup solutions, and general data backup solutions that protect data. The company's products include features such as tape libraries with integrated data encryption, media health monitoring, low power consumption, and easy upgradability and maintenance. Spectra Logic has been in business for more than 30 years and has been an ISO 9001 certified organization since 1998.
About the Award
The DICE Data Intensive Impact Awards program was designed to showcase products and technologies that have impacted HPC data management by applying the latest technologies and capabilities to enable progress in data locality, movement, manipulation and integrity. The solutions considered may be hardware, software or a combination of the two. The product or technology should address more than one of the DICE Challenge Focus Areas -- Data Locality, Movement, Manipulation or Integrity -- but will only be considered for one award. Last year's inaugural award recipients were Blue Arc and Woven Systems.
About the Sponsor
Avetec is a not-for-profit research company focused on advancing American companies' competitiveness through modeling, simulation and testing. Avetec has three major areas of research: 1) turbo machinery, 2) the Data Intensive Computing Environment (DICE) and 3) workforce development. Avetec's HPC Research Division -- the Data Intensive Computing Environment -- is a geographically dispersed test environment that conducts technology testing and validation for new and emerging HPC data management solutions. The DICE team works with the HPC industry, datacenters (government and industry) and the research community to evaluate new and emerging products and technologies that enhance research computing data and results throughput.