November 15, 2011
Entrants aspire to advance Parkinson's and diabetes research, create a stem cell knowledge base, improve organic photovoltaics for solar cells, and map genomic diversity
SEATTLE, Nov. 15 -- Cycle Computing announced the finalists of the CycleCloud BigScience Challenge 2011 at Supercomputing 2011 in Seattle last night. The contest offers $10,000 of computation time, the equivalent of eight hours on a 30,000-core cluster, to candidates working on behalf of non-profit organizations to further humanity and state-of-the-art research.
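For scale, a quick back-of-the-envelope calculation; the implied per-core-hour rate below is our inference, not a figure from the announcement:

```latex
% 30,000 cores running for 8 hours:
30{,}000 \;\text{cores} \times 8 \;\text{h} = 240{,}000 \;\text{core-hours}
% Implied unit price of the \$10,000 prize:
\frac{\$10{,}000}{240{,}000 \;\text{core-hours}} \approx \$0.042 \;\text{per core-hour}
```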
Finalists were selected based on their proposals' long-term benefit to humanity, originality, creativity, and suitability to run on CycleCloud clusters launched within Amazon Web Services (AWS). The grand prize, which includes the original $10,000 in compute credit from Cycle Computing and four hours of CycleCloud engineering support, will also include an additional $2,500 in credit from AWS.
Due to the impressive caliber of submissions, all finalists were awarded both the original $500 credit from Cycle Computing and an additional $1,000 credit from AWS. The finalists will be judged by Jason Stowe, CEO, Cycle Computing, and a panel of industry luminaries: Matt Wood, technology evangelist, Amazon Web Services; Kevin Davies, editor-in-chief, Bio-IT World; and Peter S. Shenkin, vice president, Schrödinger.
· Alan Aspuru-Guzik, professor in the Department of Chemistry and Chemical Biology, and Johannes Hachmann, postdoctoral fellow, Harvard Clean Energy Project: Hachmann and Aspuru-Guzik plan to conduct computational screening and design of novel materials for organic photovoltaics (OPVs), with the goal of enabling the next generation of photovoltaic cells.
· Jesus Izaguirre, associate professor of computer science and engineering and concurrent associate professor of applied and computational mathematics and statistics, University of Notre Dame: Izaguirre intends to explore how mutations in proinsulin cause misfolding and to simulate the folding pathways of these mutants, providing mechanistic insight into the events underlying the onset of diabetes. He also plans to examine the dominant states in the folding pathways to enable structure-based drug design and the development of new therapies to combat the disease.
· Soumya Ray, assistant professor of neurology, Harvard Medical School: Ray's team has identified a mutation present in the majority of Parkinson's disease patients. They seek to use the additional computational power to explore the dynamics of the mutant protein and its interactions with inhibitors, clarifying how drugs engage the mutation and benefiting a large number of researchers and other drug discovery programs around the world.
· Victor Ruotti, computational biologist, Morgridge Institute for Research: Ruotti aims to collect genetic information, specifically RNA alignments, from different types of cells to build an RNA-based indexing system for stem cells (a toy sketch of such an index appears after this list). Once these alignments are identified, analysis based on this knowledge base will provide a better understanding of the overarching signaling mechanisms stem cells use, supporting the generation of personalized, cell-based therapies for a variety of diseases.
· Martin Steinegger, bioinformatics researcher, TU Munich ROSTLAB: Steinegger's team's goal is to provide access to every possible mutation that will ever be observed in human gene sequences. To achieve this, they have started a new project called SNAP-Map, which strives to calculate every possible single-nucleotide polymorphism (SNP) in human proteins and to make this technology and data available worldwide (a minimal sketch of the enumeration also follows this list). With this data available, researchers will be able to assess the effect of mutations in individuals and advance efforts toward individualized medicine based on an understanding of human diversity and variation.
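The announcement does not describe how Ruotti's indexing system would be implemented; as a rough illustration of the idea, the toy sketch below inverts a table of expression calls into a transcript-to-cell-type index and ranks cell types for a query sample. All gene names, data, and function names here are hypothetical:

```python
from collections import defaultdict

# Toy illustration only: the press release does not specify Ruotti's design.
# Hypothetical expression calls: cell type -> set of expressed transcripts.
EXPRESSION = {
    "embryonic_stem_cell": {"POU5F1", "SOX2", "NANOG", "LIN28A"},
    "neural_progenitor":   {"SOX2", "PAX6", "NES"},
    "cardiomyocyte":       {"TNNT2", "MYH6", "NKX2-5"},
}

def build_index(expression):
    """Invert the table: transcript -> cell types expressing it."""
    index = defaultdict(set)
    for cell_type, transcripts in expression.items():
        for t in transcripts:
            index[t].add(cell_type)
    return index

def rank_cell_types(index, observed):
    """Rank cell types by how many observed transcripts they share."""
    scores = defaultdict(int)
    for t in observed:
        for cell_type in index.get(t, ()):
            scores[cell_type] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    idx = build_index(EXPRESSION)
    # A sample with stem-cell-like expression ranks embryonic_stem_cell first.
    print(rank_cell_types(idx, {"SOX2", "NANOG", "POU5F1"}))
```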
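Likewise, SNAP-Map's actual pipeline is not detailed in the announcement; the following minimal sketch only illustrates the combinatorics Steinegger describes, enumerating every possible single-nucleotide substitution in a short coding sequence and classifying the resulting amino-acid change. The sequence and helper names are hypothetical:

```python
# Standard genetic code, built from the canonical 64-codon ordering
# (first base slowest-varying: TTT, TTC, TTA, TTG, TCT, ...).
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODON_TABLE = {
    a + b + c: AMINO[i]
    for i, (a, b, c) in enumerate(
        (a, b, c) for a in BASES for b in BASES for c in BASES
    )
}

def enumerate_snps(cds):
    """Yield (position, ref, alt, effect) for every possible SNP in a
    coding sequence whose length is a multiple of 3 (DNA alphabet)."""
    results = []
    for pos, ref in enumerate(cds):
        codon_start = pos - pos % 3
        ref_codon = cds[codon_start:codon_start + 3]
        ref_aa = CODON_TABLE[ref_codon]
        for alt in "ACGT":
            if alt == ref:
                continue
            # Substitute one base and translate the altered codon.
            alt_codon = ref_codon[:pos % 3] + alt + ref_codon[pos % 3 + 1:]
            alt_aa = CODON_TABLE[alt_codon]
            if alt_aa == ref_aa:
                effect = "synonymous"
            elif alt_aa == "*":
                effect = "nonsense"
            else:
                effect = f"missense {ref_aa}->{alt_aa}"
            results.append((pos, ref, alt, effect))
    return results

if __name__ == "__main__":
    # Hypothetical 9-base coding sequence: 9 positions x 3 alternates = 27 SNPs.
    for snp in enumerate_snps("ATGGCATAA"):
        print(snp)
```

Scaled to the roughly 30 Mb of human protein-coding sequence, this enumeration yields on the order of 10^8 candidate SNPs, which is why predicting effects across all of them is a natural fit for a large burst of on-demand compute.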
"We created the CycleCloud BigScience Challenge to remove boundaries and help democratize access to supercomputing resources," said Jason Stowe, founder and CEO, Cycle Computing. "As a bootstrapped company, we understand why researchers are usually confined to sizing their questions to the compute cluster they have, or can afford. These finalists highlight how utility supercomputing gives scientists the computational room to realize their vision, ask challenging questions, and move humanity forward."
Each finalist will give a presentation and demo of their research to the Cycle judging panel, followed by a 30-minute Q&A. The finalists' entries will be judged against the contest criteria, and the grand prize winner will be announced next year on the Cycle Computing site.
About Cycle Computing
Cycle Computing, a bootstrapped, profitable software company, has delivered proven, secure, and flexible utility supercomputing software and services since 2005. Cycle helps clients maximize existing HPC infrastructure and speed computations on servers, virtual machines, and on-demand in the cloud. Through its CycleServer HPC management software and its fully supported, secure CycleCloud HPC clusters, Cycle's clients experience faster time to market, decreased operating costs, and unprecedented service and support. Starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs, and government and academic institutions, including JP Morgan Chase, Purdue University, Pfizer, and Lockheed Martin.
Source: Cycle Computing