June 01, 2010
NextIO vCORE™ C200 delivers a modular, manageable GPU solution for pooling and scaling GPU resources among servers and workstations
AUSTIN, Texas, June 1 -- NextIO, the premier provider of next-generation I/O consolidation solutions, today announced the release of the vCORE C200 GPU Consolidation Appliance, the first product to bring enterprise-class features to a GPU compute platform. NextIO's vCORE C200 represents the next generation of GPU compute solutions, delivering serviceability, manageability and flexibility to today's GPU farms through increased server uptime and reduced GPU over-provisioning. vCORE C200 launched this week at the 2010 International Supercomputing Conference (ISC) in Hamburg, Germany, May 30 – June 3, at booth #700.
Due to their high core densities and low cost, Tesla GPU solutions are among the fastest-growing technologies in the high performance computing (HPC) industry. NVIDIA Tesla GPUs are typically installed in servers and workstations singly or in pairs, and can deliver supercomputer-class performance at a fraction of the price. Managing and scaling GPUs has been challenging, however, because each GPU is dedicated to a specific server. NextIO's vCORE C200 solves this problem by allowing NVIDIA Tesla GPUs to be added to servers on demand. By pooling, sharing and dynamically reconfiguring up to eight Tesla GPUs per server, vCORE addresses the need for managing GPUs in a dynamic HPC environment. Its modular architecture also allows GPUs to be hot-swapped or updated without disrupting jobs already running on the system.
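The pooling model described above can be illustrated with a minimal sketch. This is purely hypothetical code, not NextIO's actual software or API: it models a shared chassis whose GPUs are mapped to servers on demand and reclaimed when a job finishes, which is the core idea behind reducing over-provisioning.

```python
# Illustrative sketch only (not NextIO's software): a shared pool that maps
# GPUs to servers on demand and reclaims them afterward.

class GPUPool:
    """Tracks which GPU slots in a shared chassis are mapped to which server."""

    def __init__(self, num_gpus=8):
        self.free = list(range(num_gpus))   # GPU slot IDs not yet mapped
        self.assigned = {}                  # server name -> list of GPU IDs

    def allocate(self, server, count):
        """Map `count` free GPUs to `server`; fails if the pool is exhausted."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the pool")
        gpus = [self.free.pop() for _ in range(count)]
        self.assigned.setdefault(server, []).extend(gpus)
        return gpus

    def release(self, server):
        """Return all of `server`'s GPUs to the free pool."""
        self.free.extend(self.assigned.pop(server, []))


pool = GPUPool(num_gpus=8)
pool.allocate("node-a", 4)    # node-a borrows half the chassis
pool.allocate("node-b", 2)
pool.release("node-a")        # node-a's 4 GPUs go back to the pool
```

Without such pooling, each server would need its own worst-case GPU count installed locally; with it, idle GPUs from one node can serve another.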
"One of the main hurdles for the rapid adoption of GPUs arises from the management complexity of the new heterogeneous computing environment," said Jie Wu, Research Director, Technical Computing, IDC. "The ability to pool, share and dynamically reconfigure GPUs across multiple servers will allow HPC users to not only achieve performance gains for their applications, but also improve resource utilization, resulting in improved GPU value as well as significantly lower total cost of ownership (TCO) for their organizations."
NextIO's vCORE product family exclusively offers NVIDIA's record-setting GPUs for both compute-intensive and data visualization needs. With the ability to hold eight double-wide GPUs in 4U of space, the vCORE C200 delivers over eight TeraFLOPS of GPU computing power that can be provisioned and managed through a simple-to-use interface. The fully serviceable and redundant design, together with NextIO's patented vConnect™ switching technology, allows vCORE to dynamically map Tesla GPUs to servers as needed, increasing GPU utilization in HPC clusters. The hot-plug capability and serviceable chassis provide quick repair of GPUs and eliminate the need for "forklift" upgrades to migrate to the newest GPU. In addition, GPUs can be driven by industry-standard resource schedulers to deliver GPU compute power when needed, which makes vCORE C200 ideal for medium-to-large clusters.
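The "over eight TeraFLOPS" figure follows from simple arithmetic, assuming roughly 1.03 single-precision TFLOPS per Tesla 20-series (C2050-class) GPU; the per-GPU number here is our assumption, not a figure from the release.

```python
# Back-of-the-envelope check of the "over eight TeraFLOPS" chassis figure,
# assuming ~1.03 single-precision TFLOPS per Tesla 20-series GPU.
gpus_per_chassis = 8
tflops_per_gpu = 1.03                       # assumed single-precision peak
total_tflops = gpus_per_chassis * tflops_per_gpu
print(f"{total_tflops:.2f} TFLOPS")         # ~8.24 TFLOPS per chassis
```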
"Tesla 20-series GPUs are designed to meet the specific needs of HPC applications, and we are seeing many cluster deployments as a result," said Andy Keane, general manager of the Tesla business at NVIDIA. "The ability to scale a large number of GPUs per node is critical, and vCORE C200 allows users to manage and dynamically reconfigure these resources across multiple servers."
Ideally suited for customers with high job counts, high GPU counts or a combination of both, vCORE C200 increases business value by completing more jobs per time period and reducing GPU management overhead across many industries.
"Customers have expressed the need for a smooth, non-disruptive way to allocate, provision, and service their GPUs," said Mike Heumann, vice president of worldwide marketing at NextIO. "We developed the vCORE C200 to help customers manage their GPU resources, reduce their operational costs and avoid over-provisioning their infrastructure. vCORE C200 allows customers to manage and upgrade their GPUs without experiencing disruptions."
The family of NextIO vCORE C200 products includes NVIDIA Tesla 20-series GPUs bundled in a variety of configurations and can be ordered now for delivery in July. NextIO also offers the vSTOR Application Acceleration Appliance that provides the highest flash performance for applications requiring high-speed storage transactions (IOPS) or high bandwidth at the lowest cost.
NextIO, Inc. is the leader in next-generation network consolidation solutions for today's dynamic datacenter in a variety of industries including enterprise, oil and gas, high performance computing, digital media and financial services. With its innovative vConnect platforms, NextIO offers the unique ability to virtualize I/O technology on any server, operating system, hypervisor and storage architecture. Leveraging PCI Express, NextIO offers true I/O consolidation for any end-point technology. vConnect delivers unprecedented rack-level scalability, with I/O and server resources that can be scaled independently for 50-70 percent savings in capital, power, and cooling. NextIO's any-to-any I/O connectivity boosts performance and reliability while streamlining IT deployment, simplifying administration and reducing costs. For more information, visit www.nextio.com.
Source: NextIO, Inc.