June 05, 2012
PLEASANTON and SAN FRANCISCO, Calif. -- June 5, 2012 -- At the 2012 Semantic Technology & Business Conference in San Francisco, YarcData Inc., a Cray company (Nasdaq: CRAY), today announced the planned launch of a “Big Data” contest featuring $100,000 in prizes. The YarcData Graph Analytics Challenge will recognize the best submitted solutions to un-partitionable Big Data graph problems.
YarcData is holding the contest to showcase the increasing applicability and adoption of graph analytics in solving Big Data problems. The contest is also intended to promote the use and development of RDF and SPARQL (both standards developed by the World Wide Web Consortium) as the industry standard for graph analytics.
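For readers unfamiliar with the two standards, the short sketch below shows what they look like in practice. It uses the open-source Python library rdflib rather than any YarcData software, and the namespace and data are invented for illustration. RDF models data as subject-predicate-object triples (a labeled graph), and SPARQL matches patterns against that graph:

```python
# Illustrative sketch only: rdflib is an open-source Python RDF library,
# unrelated to YarcData's products. Names and data are hypothetical.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")  # made-up namespace for the sketch

g = Graph()
# RDF represents data as subject-predicate-object triples, i.e. a graph.
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.bob, EX.knows, EX.carol))

# SPARQL matches graph patterns. The "+" property path (SPARQL 1.1) walks
# ex:knows edges transitively -- relationship discovery in miniature.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person WHERE { ex:alice ex:knows+ ?person . }
""")
for row in results:
    print(row.person)  # prints ex:bob and ex:carol
```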
“Graph databases have a significant role to play in analytic environments, and they can solve problems like relationship discovery that other traditional technologies do not handle easily,” said Philip Howard, Research Director, Bloor Research. “YarcData driving thought leadership in this area will be positive for the overall graph database market, and this contest could help expand the use of RDF and SPARQL as valuable tools for solving Big Data problems.”
The grand prize for the first place winner is $70,000. The second place winner will receive $10,000 and the third place winner will receive $5,000. There will also be additional prizes for the other finalists. Contest judges, who will include a combination of Big Data industry analysts, experts from academia and the semantic web community, and YarcData customers, will review the submissions and select the ten finalists.
The YarcData Graph Analytics Challenge will officially begin on Tuesday, June 26, 2012, and winners will be announced during a live web event on Dec. 4, 2012. Full contest details, including specific criteria and the contest judges, will be announced on June 26. To pre-register for a contest information packet, please visit the YarcData website at www.yarcdata.com. Information packets will be sent out June 26. The contest will be open only to those individuals who are eligible to participate under U.S. and other applicable laws and regulations.
“As YarcData has been growing its customer base, we are seeing increased awareness and interest in graph analytics among enterprises across verticals, including life sciences, healthcare, financial services, scientific research and government,” said Arvind Parthasarathi, President, YarcData. “Many critical Big Data problems are based on un-partitionable graphs and we hope this contest will encourage programmers to explore innovative solutions in the world of graph analytics. We believe RDF/SPARQL will be to graph analytics what SQL is to relational databases.”
About the uRiKA Graph Analytics Appliance
YarcData’s uRiKA system is a Big Data appliance for graph analytics. uRiKA helps enterprises reveal unknown, unexpected or hidden relationships in Big Data by creating a highly scalable, real-time graph analytics warehouse that supports ad hoc queries, pattern-based searches, inferencing and deduction. The uRiKA system is a purpose-built appliance featuring graph-optimized hardware: up to 512 terabytes of global shared memory, massively multithreaded graph processors supporting 128 threads per processor, and highly scalable I/O with data ingest rates of up to 350 terabytes per hour. An RDF/SPARQL database optimized for the underlying hardware lets applications interact with the appliance through industry-standard interfaces. uRiKA complements an existing data warehouse or Hadoop cluster by offloading graph workloads and interoperating within the existing analytics workflow. Subscription pricing for on-premise deployment eases the adoption of the uRiKA system into existing IT environments.
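Because the appliance’s query interface is the W3C-standard SPARQL protocol, an application could in principle talk to it the way it talks to any SPARQL endpoint. Below is a minimal sketch using the open-source SPARQLWrapper Python library; the endpoint URL and predicate are hypothetical placeholders, not documented uRiKA addresses:

```python
# Hypothetical sketch: queries a generic SPARQL-over-HTTP endpoint using the
# open-source SPARQLWrapper library. The URL and predicate are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://urika.example.com/sparql")  # placeholder URL
endpoint.setQuery("""
    SELECT ?drug ?gene WHERE {
        ?drug <http://example.org/targets> ?gene .   # made-up predicate
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# Standard SPARQL JSON results: results -> bindings -> variable -> value.
for binding in endpoint.query().convert()["results"]["bindings"]:
    print(binding["drug"]["value"], binding["gene"]["value"])
```

The same code would run against any store that implements the SPARQL protocol, which is the point of the “SQL of graph analytics” analogy quoted above.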
About YarcData Inc.
YarcData delivers business-focused real-time graph analytics for enterprises to gain business insight by discovering unknown relationships in Big Data. Adopters include the Institute for Systems Biology, the Mayo Clinic, Noblis, Sandia National Labs, the Canadian government, as well as multiple deployments in the United States government. Started as a division of Cray Inc., the YarcData business is in the process of transitioning to a subsidiary, YarcData Inc., a Cray company. YarcData is based in the San Francisco Bay Area, and more information is available at www.yarcdata.com.
About Cray Inc.
As a global leader in supercomputing, Cray provides highly advanced supercomputers and world-class services and support to government, industry and academia. Cray technology is designed to enable scientists and engineers to achieve remarkable breakthroughs by accelerating performance, improving efficiency and extending the capabilities of their most demanding applications. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to surpass today’s limitations and meet the market’s continued demand for realized performance. Go to www.cray.com for more information.