June 05, 2012
Troy, N.Y., June 5 -- Leading “big data” analytics firm GNS Healthcare has signed a multi-year agreement to extend and expand its membership with the Computational Center for Nanotechnology Innovations (CCNI) at Rensselaer Polytechnic Institute. The agreement enables GNS to continue and grow its use of CCNI’s massively parallel computational resources to directly support its research and operations.
CCNI is a $100 million partnership between Rensselaer, IBM, and New York state. The center houses one of the world’s most powerful university-based supercomputers and is a national leader in promoting the application of high-performance computing in industry. CCNI supports a network of more than 700 researchers in academia and industry across a diverse spectrum of disciplines.
“One of our primary goals as a public-private partnership is to support economic growth through the use of high-performance computing and we’re delighted that GNS will continue to have access to the computing power it needs to innovate, grow, and move into new business areas,” said CCNI Director James Myers.
“GNS has partnered with CCNI since 2007 to drive healthcare innovation through the application of GNS’s supercomputer-driven REFS modeling and simulation platform using CCNI’s high-performance computing. Having access to one of the world’s largest supercomputing resources and working with the expert staff at CCNI for many years has allowed us to deliver results from ‘big data’ that would not have been possible using solely internal computing resources. Today, the flexibility that CCNI provides as our business continues to rapidly grow enables us to seamlessly tackle our partners’ biggest challenges,” said Thomas A. Neyarapally, GNS Healthcare senior vice president, corporate development.
In June 2011, GNS Healthcare became one of 10 companies whose success in leveraging high-performance computing was documented through a case study developed jointly by the Council on Competitiveness and the Defense Advanced Research Projects Agency (DARPA). GNS employs the proprietary machine learning algorithms of its REFS technology platform on massively parallel supercomputers to analyze vast amounts of biomedical data, uncovering new insights into the complex clinical causes of human disease and new opportunities for diagnosis and treatment. The company’s technology has helped its collaborators improve the treatment of major diseases, including multiple sclerosis and cancer, and reduce instances of harmful drug interactions.
“We are laser-focused on delivering value to our industrial partners and helping them achieve their strategic research goals by leveraging CCNI’s expertise, software, and computing resources,” Myers said. “With GNS, which itself is a provider of software and solutions to the medical and pharmaceutical communities, we act as a resource provider. They see us as a cloud—a powerful one with a supercomputer in it—that lets them concentrate on their business rather than worrying about the capital costs and complexities of running a supercomputer on their own.”
CCNI opened its doors in 2007 with more than 100 teraflops of computing power, and today supports a broad range of at-scale modeling, simulation, and analysis research across a spectrum of science and engineering disciplines. The center is committed to hastening the advance of ever-shrinking computer chips and other devices that are designed and manufactured by the micro- and nanoelectronics industry and to driving the academic and industrial adoption of computationally and data-intensive techniques. Over the last five years, more than 700 researchers from 50 universities, companies, and government laboratories have run high-performance science and engineering applications at CCNI.
Last year, Rensselaer won a $2.65 million grant from the National Science Foundation (NSF) to purchase, install, and run a new balanced, green supercomputing system at CCNI designed to support the development of next-generation computational and data-intensive applications. The new system is expected to comprise a powerful IBM Blue Gene/Q supercomputer along with a multiterabyte memory (RAM) storage accelerator, petascale disk storage, a rendering cluster, and remote display wall systems. The new system will be a national resource for academic and industrial researchers across many different disciplines.
For more information on CCNI at Rensselaer, visit:
• Rensselaer Computational Center for Nanotechnology Innovations (CCNI)
• Innovating New Ways To Share and Preserve Scientific Data on Sustainability
• New Supercomputer To Boost Rensselaer Leadership in High-Performance Computing
• Rensselaer Supercomputer Director Named to National Initiative on High Performance Computing
• Rensselaer Alumni Magazine: SuperPower
REFS (Reverse Engineering-Forward Simulation) comprises integrated machine learning algorithms and software that extract causal relationships from complex, multidimensional data and enable the simulation of billions of ‘what if?’ hypotheses, exploring novel, unseen conditions and generating predictions forward in time. This model-centric discovery and simulation approach represents a paradigm shift in data analysis, leapfrogging existing approaches such as high-dimensional pattern matching.
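REFS itself is proprietary, but the two-phase idea it describes — reverse engineering a model from data, then running forward ‘what if?’ simulations against it — can be illustrated with a deliberately minimal sketch. The example below is purely hypothetical and is not GNS code: it recovers a single linear cause-and-effect coefficient from noisy synthetic data, then uses the learned model to predict the outcome of a hypothetical intervention.

```python
# Hypothetical sketch only (not the REFS platform): a toy two-phase
# reverse-engineer / forward-simulate workflow on one linear relationship.
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: "reverse engineering" -- recover the causal coefficient from
# noisy observational data. True generating process: y = 2.0 * x + noise.
x = rng.normal(size=1000)
y = 2.0 * x + 0.1 * rng.normal(size=1000)
beta = float(np.linalg.lstsq(np.c_[x], y, rcond=None)[0][0])

# Phase 2: "forward simulation" -- ask "what if x were set to some value?"
# and predict the outcome under the learned model.
def simulate(x_value: float) -> float:
    """Predict the effect variable under an intervention on the cause."""
    return beta * x_value

print(f"recovered coefficient: {beta:.2f}")   # close to the true 2.0
print(f"simulated outcome at x=5: {simulate(5.0):.2f}")
```

At production scale, REFS reportedly searches over vast ensembles of candidate causal models and simulates billions of hypotheses on massively parallel hardware; the sketch above only conveys the learn-then-simulate structure of that workflow.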
About GNS Healthcare
GNS Healthcare is a “Big Data” analytics company that has developed a scalable approach for the discovery of what works in healthcare, and for whom. GNS Healthcare’s analytics solutions are being applied across the healthcare industry: from pharmaceutical and biotechnology companies, health plans and hospitals, to integrated delivery systems, Pharmacy Benefits Managers (PBMs), and Accountable Care Organizations (ACOs). Whether your organization is delivering care or developing personalized therapies and diagnostics, GNS Healthcare can help you discover the knowledge you need to match patients with treatments that work.
Source: Rensselaer Polytechnic Institute