September 12, 2011
In a recent whitepaper on SGI’s role in the coming wave of data-intensive computing requirements, IDC’s high performance computing (HPC) guru, Steve Conway, presented an overview of how the HPC and “big data” markets are converging around a common set of hardware challenges.
Conway identified a clear set of needs shared by data-intensive computing in HPC and enterprise “big data” hardware and software, driven by his prediction that big data will keep getting bigger, necessitating changes in what (and how) hardware vendors design for HPC and enterprise customers.
In the HPC context, IDC defines data-intensive computing for this market as a set of big data problems that includes “tasks involving sufficient data volumes and complexity to require HPC-based modeling and simulation.” The firm goes on to explain that these problems are rooted in structured and unstructured data, whether combined or in isolated masses, and can come from traditional HPC spheres (academia, the public sector, etc.) or can “be upward extensions of commercial problems that have grown large and complex enough at the high end to require HPC.”
IDC claims that data-intensive workloads will become par for the HPC course in the coming years, making up a more sizable portion of the overall high performance computing market. Conway notes that “in addition, while many big data problems will be run on standard clusters, limitations in the memory sizes and memory architectures of clusters make them ill-suited for the most challenging classes of data-intensive problems.” He points to a number of HPC sites that are looking to upgrade to systems with fatter memory profiles, a trend IDC expects to play out over the next few years and beyond.
Conway points to a number of requirements specific to data-intensive computing hardware, noting that HPC and data-intensive problems differ in their emphasis on speed and time to solution. He says that “data-intensive problem solving performance typically is gauged by how fast the computer can traverse one or more large data sets, sometimes using special frameworks such as MapReduce, Hadoop (Linux) or Dryad (Windows),” in contrast to, say, floating-point operations per second (FLOPS) on the traditional HPC side.
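As a rough illustration of that contrast (a minimal sketch, not drawn from the whitepaper), a MapReduce-style job is judged by how quickly it can sweep a large data set rather than by arithmetic throughput. The toy word-count below shows the map and reduce phases that frameworks such as Hadoop parallelize across a cluster; the records list is a hypothetical stand-in for a sharded data set.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each word."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

if __name__ == "__main__":
    # Toy stand-in for a large data set; a real framework such as Hadoop
    # would shard the records across nodes and run map/reduce in parallel.
    records = ["big data keeps getting bigger", "big data needs big memory"]
    print(dict(reduce_phase(map_phase(records))))
```

The measure of success here is the rate at which records stream through the map and reduce phases, which is why memory and I/O behavior, not peak FLOPS, dominates data-intensive performance.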
Also of interest are several trends shaping the data-intensive solutions now rolling out, including those from SGI with its Altix line. Conway says the high-end data explosion has had an impact on the entire IT spectrum, and even more pronouncedly at the HPC end. He also describes a trend toward “unbalanced HPC systems”: over the last decade, per-node and system-wide memory speeds have not kept pace with advances in processors, which he says makes it “more difficult to feed the processors enough data to keep them busy.” This is the famous “memory wall” that is holding back the standard clusters dominant in the HPC realm.
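To make the memory wall concrete, a back-of-envelope sketch follows; the figures are illustrative assumptions for a commodity cluster node, not numbers from IDC or SGI. It compares a node’s peak floating-point rate with the rate at which its memory system can actually deliver operands on a bandwidth-bound sweep.

```python
# Illustrative, assumed figures for a commodity cluster node (not from IDC/SGI).
peak_gflops = 150.0          # peak double-precision throughput, GFLOP/s
mem_bandwidth_gbs = 25.0     # sustained memory bandwidth, GB/s
bytes_per_flop_needed = 8.0  # one 8-byte operand fetched per floating-point op
                             # (a deliberately memory-hungry, streaming workload)

# FLOP rate the memory system can actually feed.
sustainable_gflops = mem_bandwidth_gbs / bytes_per_flop_needed

print(f"Peak compute:          {peak_gflops:.0f} GFLOP/s")
print(f"Memory-fed compute:    {sustainable_gflops:.1f} GFLOP/s")
print(f"Processor utilization: {100 * sustainable_gflops / peak_gflops:.0f}%")
# With these assumptions the processors sit idle roughly 98% of the time on a
# bandwidth-bound sweep, which is the imbalance Conway calls the memory wall.
```

Under these assumed numbers the processors can be fed only a few percent of their peak rate, which is the sense in which memory, not compute, limits data-intensive work on standard clusters.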
Conway claims HPC vendors need to recognize the demand for a system-wide emphasis on memory size, capability, bandwidth, and latency. He says that even though many big data challenges can be tackled on commodity clusters, such clusters are not always well designed for these problems because of limited system and cluster memory, memory-sharing issues, and communication barriers. He claims that “the latencies of standard clusters typically are too high to support cache coherency across the clusters’ distributed memory locations.”
Although the bias factor should be noted (this was a whitepaper for an HPC vendor), Conway argues that commodity clusters simply cannot do the job a specialized HPC solution can when it comes to data-intensive computing.
This article was originally published in BigCompute, a Tabor Communications publication slated to launch later this year.