Tag: data analytics
The US Army Research Laboratory (ARL) is counting on supercomputing and large-scale analytics to provide the competitive edge it needs to maintain its position as the nation’s premier laboratory for land forces. As laid out in the recently released Technical Implementation Plan for 2015-2019, the ARL sees advanced computing as fundamental to its mission to “discover…”
In a recent blog entry, Mike Boros, Hadoop Product Marketing Manager at Cray, Inc., writes about the company’s positioning of Hadoop for scientific big data. Invoking the old adage, “when the only tool you have is a hammer, every problem begins to resemble a nail,” Boros suggests that the Law of the Instrument may be true…
OpenSFS has chosen its Community Representative Director for 2013: Tommy Minyard, director of Advanced Computing Systems (ACS) at the Texas Advanced Computing Center (TACC). We got the new director’s views on Lustre’s opportunities in big data and exascale, maintaining a single source tree, and new features on the horizon.
Penguin Computing has launched its first ARM-based server platform. Known as the UDX1, the Penguin box is based on Calxeda’s latest ARM server chip, and is aimed at cloud computing, Web hosting, and, especially, data analytics – UD stands for Ultimate Data. The move puts Penguin into the front ranks of computer makers who are testing the waters for the burgeoning microserver market.
Petascale supercomputing is coming to one of the least populated states in the US.
For all the accolades one hears about German engineering, there are few IT vendors native to that country. Recently, though, we got the opportunity to talk with one such company, ParStream, a Cologne-based startup that has developed a bleeding-edge CPU/GPU-based analytics platform that marries high performance computing to big data.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray’s first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation.
Pulling together massive datasets has created profound opportunities for a wide array of analytics projects, but it has also complicated matters for those who want to extract actionable intelligence from the data. While the “big data” movement is still unfolding, a number of companies have emerged to simplify access and use, especially of unstructured information. HPC stalwart Platform Computing has now entered the race, aiming to refine both the handling of vast datasets and the management behind such operations as it stakes its claim in this emerging space.
Advanced data analytics applications are replacing flesh-and-blood analysts.
When it was announced in 2006, the Cray XMT supercomputer attracted little attention. The machine was originally targeted at high-end data mining and analysis for a particular set of government clients in the intelligence community. While the feds have supported the XMT over the past five years, Cray is now looking to move these machines into the commercial sphere. And with the next-generation XMT-2 on the horizon, the company is gearing up to accelerate that strategy in 2011.