Tag: Hadoop

Cray Targets Oil and Gas Sector’s Big Data Needs

Sep 25, 2013

Supercomputer-maker Cray is helping oil and gas companies benefit from the most advanced reservoir modeling approach yet. Called Permanent Reservoir Monitoring, or PRM, the approach requires innovative data warehousing technology and new data analysis techniques.

Accelerate Hadoop MapReduce Performance using Dedicated OrangeFS Servers

Sep 9, 2013

Recent tests performed at Clemson University achieved a 25 percent improvement in Apache Hadoop TeraSort run times by replacing the Hadoop Distributed File System (HDFS) with an OrangeFS configuration using dedicated servers. Key components included an extension of the MapReduce “FileSystem” class and a Java Native Interface (JNI) shim to the OrangeFS client. No modifications to Hadoop itself were required, and existing MapReduce jobs can use OrangeFS unchanged.
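
The article doesn’t reproduce the code, but the shape of the approach is easy to sketch. Below is a hypothetical skeleton (the class name, shim library, and native function names are invented for illustration) of how a Hadoop FileSystem subclass can hand I/O to a native client through JNI; it shows the general pattern, not the actual OrangeFS implementation.

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical skeleton, not the actual OrangeFS code: a Hadoop
    // FileSystem subclass that forwards I/O through a JNI shim to the
    // native OrangeFS client. Declared abstract so the sketch can show
    // only the interesting methods; a real class overrides the rest too.
    public abstract class OrangeFileSystemSketch extends FileSystem {

      static {
        System.loadLibrary("orangefs-jni"); // hypothetical shim library
      }

      // Hypothetical native entry points exposed by the shim.
      private static native long ofsOpen(String path);
      private static native int ofsRead(long handle, byte[] buf, int off, int len);

      private URI uri;

      @Override
      public void initialize(URI name, Configuration conf) throws IOException {
        super.initialize(name, conf);
        this.uri = name; // e.g. ofs://orangefs-server:3334/
      }

      @Override
      public URI getUri() {
        return uri;
      }

      @Override
      public FSDataInputStream open(Path f, int bufferSize) throws IOException {
        // A real implementation wraps a seekable stream whose read() calls
        // ofsRead() on the handle returned by ofsOpen(f.toUri().getPath()).
        throw new UnsupportedOperationException("sketch only");
      }
    }

Hadoop picks up such a class through its pluggable filesystem mechanism: mapping a URI scheme to the implementation class (for example with an fs.<scheme>.impl property in core-site.xml) is what lets existing MapReduce jobs run unchanged once their paths point at the new filesystem.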

Cray Bundles Intel’s Hadoop with CS300 Line of Supercomputers

Jun 20, 2013

This month, Cray will begin delivering a new big data analytics cluster built on its entry-level CS300 system and optimized to run Intel’s Hadoop distribution. Cray says the new system will provide customers with a “turnkey” Hadoop cluster that can tackle big data problems that would be difficult to solve using commodity hardware.

Cray Cracks Commercial HPC Code

Jun 20, 2013

During a conversation this week with Cray CEO Peter Ungaro, we learned that the company has managed to extend its reach into the enterprise HPC market quite dramatically, at least in supercomputing business terms. With steady growth into these markets, however, the focus on the hardware versus the software side of such users’ problems is….

Hacking into the N-Queens Problem with Virtualization

Jun 19, 2013

Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco’s Hack Reactor, an institute built around intense, fast-paced programming instruction, took an N-Queens program designed by the University of Cambridge’s Martin Richards and modified it to run in parallel across multiple machines.
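
The team’s code isn’t shown here, but the parallelization strategy the article describes, splitting the search space so independent slices can run on separate machines, is easy to sketch. The following minimal Java sketch (not the team’s program) uses the bitwise N-Queens counter commonly attributed to Martin Richards and splits the work by the first queen’s column.

    import java.util.stream.IntStream;

    public class NQueens {

      // Count completions from the current row down, given bitmasks of
      // columns (cols) and diagonals (left, right) attacked from above.
      static long count(int all, int cols, int left, int right) {
        if (cols == all) return 1;               // a queen in every column
        long n = 0;
        int free = all & ~(cols | left | right); // safe squares in this row
        while (free != 0) {
          int bit = free & -free;                // lowest-numbered safe square
          free -= bit;
          n += count(all, cols | bit, ((left | bit) << 1) & all, (right | bit) >> 1);
        }
        return n;
      }

      public static void main(String[] args) {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 12;
        int all = (1 << n) - 1;
        // Each first-row column is an independent subproblem. Here the
        // slices run on local threads; across machines, each worker would
        // be handed one or more starting columns and the counts summed.
        long total = IntStream.range(0, n).parallel()
            .mapToLong(c -> {
              int bit = 1 << c;
              return count(all, bit, (bit << 1) & all, bit >> 1);
            })
            .sum();
        System.out.println(n + "-queens solutions: " + total);
      }
    }

Because each first-column subproblem shares nothing with the others, the same split works whether the slices are threads, processes, or machines; only the final sum has to be gathered in one place.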

TACC Longhorn Takes On Natural Language Processing

Jun 14, 2013

For all the progress we’ve made in IT over the last 50 years, there’s one area of life that has steadfastly eluded the grasp of computers: understanding human language. Now, researchers at the Texas Advanced Computing Center (TACC) are using a Hadoop cluster on the center’s Longhorn supercomputer to push the state of the art in language processing a little further.
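
The article doesn’t describe TACC’s jobs, but a typical building block for Hadoop-based language processing is a corpus statistic such as a bigram count. Below is a generic, illustrative MapReduce job (not TACC’s code; paths and names are placeholders) that counts adjacent word pairs across a large text corpus.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Illustrative only: a bigram counter, the kind of corpus statistic
    // that underlies many statistical language-processing pipelines.
    public class BigramCount {

      public static class BigramMapper
          extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text bigram = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
          String[] words = value.toString().toLowerCase().split("\\W+");
          for (int i = 0; i + 1 < words.length; i++) {
            if (words[i].isEmpty() || words[i + 1].isEmpty()) continue;
            bigram.set(words[i] + " " + words[i + 1]);
            ctx.write(bigram, ONE);
          }
        }
      }

      public static class SumReducer
          extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts, Context ctx)
            throws IOException, InterruptedException {
          long sum = 0;
          for (LongWritable c : counts) sum += c.get();
          ctx.write(key, new LongWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "bigram-count");
        job.setJarByClass(BigramCount.class);
        job.setMapperClass(BigramMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }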

Intel Carves Mainstream Highway for Lustre

Jun 12, 2013

Today Intel announced a new pitch to put Lustre in front of enterprise eyeballs, pairing usability features for Lustre with a total rip-and-replace of the native Hadoop file system designed to appeal to the HPC-oriented Hadoop set. We talked with Brent Gorda, founder and former CEO of Whamcloud, which Intel acquired just a tick under a year ago, about how….
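
Intel’s connector isn’t detailed here, but the generic pattern behind ripping and replacing HDFS on HPC systems can be sketched: when every compute node mounts the same parallel filesystem, Hadoop can be pointed at the POSIX mount instead of HDFS. The snippet below shows that generic pattern with stock Hadoop APIs (the /mnt/lustre mount point is hypothetical); it is not Intel’s actual Lustre adapter.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Generic sketch, not Intel's adapter: run MapReduce against a
    // POSIX-mounted parallel filesystem by swapping out HDFS entirely.
    public class LustreMountSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Stock Hadoop property (fs.default.name on Hadoop 1.x): use the
        // local filesystem; the shared Lustre mount makes it global.
        conf.set("fs.defaultFS", "file:///");
        FileSystem fs = FileSystem.get(conf);
        Path input = new Path("/mnt/lustre/jobs/input"); // hypothetical path
        System.out.println("Filesystem: " + fs.getUri());
        System.out.println("Input exists: " + fs.exists(input));
      }
    }

The appeal for HPC shops is that data already sitting on Lustre never has to be copied into HDFS before a MapReduce job can touch it.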

TACC’s Hadoop Cluster Breaks New Ground

May 30, 2013

A 256-node Hadoop system at the University of Texas at Austin is breaking down the barriers that have traditionally kept high-performance computing in the hands of technical experts. Nearly 70 students and researchers at TACC have used the cluster to crunch big datasets and find potential answers to questions in biomedicine, linguistics, and astronomy.

Why Big Data Needs InfiniBand to Continue Evolving

Apr 1, 2013

Increasingly, it’s a Big Data world we live in. In case you’ve been living under a rock and need proof of that, a major retailer famously used an unimaginable number of data points to predict the pregnancy of a teenage girl outside Minneapolis before she got a chance to tell her family (http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/). That’s just one example, but countless others point to the same idea: mining huge data volumes can uncover gold nuggets of actionable insight (although sometimes they freak people out…).
