November 13, 2011
Brings a wide range of system architecture, I/O and operating systems concepts together to enhance application scalability, usability and flexibility in hybrid HPC systems
SEATTLE, Washington, and NEW DELHI, India, Nov. 13 – HPC Links Pvt. Ltd. announced today that it is offering a new open source hybrid supercomputing software product named Open VERTEX 1.0.
Open VERTEX 1.0 is the first product to be released by HPC Links based on the company's innovative VERTEX architecture (http://www.hpclinks.com/solutions/VERTEX.shtml). VERTEX is a hybrid HPC platform architecture in which VERTEX control nodes transparently connect light-weight, heterogeneous compute nodes to storage and system services. VERTEX addresses the challenges posed by today's hybrid HPC systems, which combine traditional CPUs with compute-intensive processors such as GPGPUs, many-core chips, and Cell processors in the same system. Through a set of unique features, VERTEX provides unprecedented levels of ease of use, flexibility and scalability for commodity hybrid HPC systems. Open VERTEX 1.0 makes many of these features available to users for free.
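In practical terms, the architecture splits a hybrid cluster into control nodes, which host storage and system services, and light-weight compute nodes, which only run application kernels. The sketch below is a minimal conceptual illustration of that split and assumes nothing about the actual VERTEX interfaces: the process names, message format, and use of Python's multiprocessing pipes are illustrative stand-ins, not the Open VERTEX API.

```python
# Conceptual sketch only (hypothetical names; not the Open VERTEX API):
# a "control node" process answers data requests on behalf of a light-weight
# "compute node" process, so the compute node never needs its own storage stack.
import multiprocessing as mp


def control_node(conn, datastore):
    """Serve read requests from a compute node against shared storage."""
    while True:
        request = conn.recv()
        if request == "shutdown":
            break
        # request is a dict such as {"op": "read", "key": "block0"}
        conn.send(datastore.get(request["key"]))


def compute_node(conn, key):
    """Fetch input through the control node, then run a local 'kernel' on it."""
    conn.send({"op": "read", "key": key})
    data = conn.recv()
    # Stand-in for offloading to a GPU, Cell SPE, or many-core device.
    print(f"compute node: sum({key}) = {sum(data)}")
    conn.send("shutdown")


if __name__ == "__main__":
    datastore = {"block0": [1, 2, 3, 4]}   # plays the role of shared storage
    ctrl_end, comp_end = mp.Pipe()
    ctrl = mp.Process(target=control_node, args=(ctrl_end, datastore))
    comp = mp.Process(target=compute_node, args=(comp_end, "block0"))
    ctrl.start(); comp.start()
    comp.join(); ctrl.join()
```

Keeping system services on the control nodes in this way is what allows the compute nodes to stay light-weight, a point echoed in the Air Force Research Lab comments further below.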
HPC Links also announced a range of integration and support services for the Open VERTEX 1.0 product to help customers use the software to boost their HPC productivity. In addition, HPC Links will provide advanced, custom development and engineering services around Open VERTEX 1.0 to bring customers closer to an ideal VERTEX architecture experience. These advanced services include customization of VERTEX for new compute-node instruction set architectures, scaling existing customer applications on the hybrid VERTEX HPC architecture, developing new applications for the platform, and adding customer-specific software features and modules to the VERTEX environment.
Dr. Ashwini K Nanda, Founder and CEO of HPC Links, said, “Hybrid supercomputing using a variety of compute-intensive commodity processors is an undeniable reality today. But managing the different execution binaries to scale applications on hybrid machines remains a significant challenge for our customers. The VERTEX architecture, the Open VERTEX 1.0 software product, and our related service offerings bring a comprehensive set of scalable and flexible software solutions to the users of commodity hybrid supercomputing.”
“The world has truly woken up to the outstanding performance improvements which hybrid HPC using NVIDIA Tesla GPUs can bring to the most challenging parallel applications,” said Shanker Trivedi, vice president of worldwide Sales (PSG) at NVIDIA. “With the Open VERTEX 1.0 software platform and services, HPC Links is making the compute power of NVIDIA’s Tesla GPUs even more usable for a wide range of HPC applications.”
Open VERTEX 1.0 has been tested extensively on x86/NVIDIA GPU clusters built from Boston Ltd/Supermicro SUPERFLEX blades in a major European oil and gas customer's lab, and on clusters combining x86 processors, NVIDIA GPUs and IBM Cell-based PS3s at the University of Massachusetts and the Air Force Research Lab in Rome, NY.
“Supermicro provides the largest selection of high-performance, high-efficiency Server Building Block Solutions® ideally suited for VERTEX deployments of any scale,” said Wally Liaw, Vice President of Sales, International, at Supermicro, Santa Clara, CA. “Our supercomputing solutions are optimized for the widest range of HPC requirements with first-to-market support of the latest multicore processors, high performance GPUs and high-speed interconnect technologies. With our SuperBlade®, Twin architecture, SuperServer®, Workstation, Storage and Network Switch product lines, our customers can quickly scale up and out to achieve maximum application benefits with leading-edge technologies such as VERTEX from HPC Links.”
"The Air Force Research Lab has been experimenting with the VERTEX software from HPC Links and found it to be quite useful for getting data to our cluster of Cell processors on Sony PS3s. We like the fact that we do not have to port any of our infrastructure software to compute nodes using special purpose processors such as Cell. We can consolidate our infrastructure software including file systems, workload management, and authentication, on VERTEX nodes using commodity general-purpose CPUs." said Mark Barnell, Senior Computer Scientist and HPC Director at Air Force Research Lab – Information Directorate, Rome, NY.
Dr. Gaurav Khanna, Associate Professor of Astrophysics at UMass Dartmouth, who has been a beta tester of VERTEX, said, “The VERTEX platform enabled us to tightly integrate our x86 and Cell BE-based PS3 systems into a cluster, paving the way for our scientific applications to make efficient use of multiple processor architectures simultaneously. In addition, the light-weight nature of the VERTEX compute nodes helps make additional resources available for computation and improves scaling. The highly experienced HPC scientists and engineers of the VERTEX team provided us with tireless support and insight until we got our complex parallel applications working perfectly.”
The Open VERTEX 1.0 software is available for free download at https://github.com/HPCLinks/Open-Vertex.
About HPC Links
HPC Links provides leading-edge services, systems and solutions in the areas of multicore, cloud and high performance computing to our worldwide customers. The venture was formed by industry veterans who played key roles in the development of several top supercomputers around the world: the LANL Roadrunner machine in the US, the first to cross the PetaFLOP barrier; the Barcelona MareNostrum machine, Europe's fastest from 2004 to 2006; and the CRL Eka machine, Asia's fastest in 2007-08 and India's fastest from 2007 to 2010. HPC Links is based in both the US and India. Our unique, interdisciplinary skill pool, covering broad areas of computer science and major HPC application domains, continuously strives to make parallel programming easier for our scientific, engineering and business customers.
Source: HPC Links