November 16, 2012
PALO ALTO, CA, Nov. 16 - HP today unveiled the industry's first server built to help clients operationalize Big Data, drive new business opportunities and save up to $1 million over three years.
With the advent of Big Data software and the promise that it brings, many organizations have tried to deploy these solutions on existing architectures not designed to handle the specific needs of these workloads. As a result, the outcomes from these early deployments have been suboptimal from a performance and cost perspective.
"Big Data application environments such as Hadoop, MPP data warehouses, Big Data analytics and object stores have very different workload requirements," said Dan Vesset, vice president, Business Analytics Research, IDC. "Given the large and varied amounts of fast-moving data that needs to be stored and accessed quickly and the different requirements of end users, these workloads can be highly varied, complex and inefficient to manage if run on traditional hardware infrastructure. In order to fully embrace the promise of Big Data, it is critical that the underlying infrastructure be optimized for the workload."
The new HP ProLiant SL4500 server series is the only solution purpose-built for Big Data environments, providing the performance, productivity and cost-effectiveness these workloads demand in an ultradense form factor. Built on HP Converged Infrastructure, the new server offers a highly efficient design that occupies up to 50 percent less space, consumes 61 percent less power, costs 31 percent less and uses 63 percent fewer cables.
Modular architecture optimizes results for workload specific applications
The modular design of the HP ProLiant SL4500 server series offers varied compute and storage configurations that enable clients to optimize their infrastructure for a workload-specific application, removing the need to piece together incongruent hardware for the supporting infrastructure.
With a single, cost-effective architecture, the HP ProLiant SL4500 server series also supports multiple Apache Hadoop vendors including Cloudera and Hortonworks, as well as additional software including OpenStack Cloud Software and MongoDB.
"Enterprises that leverage Cloudera's Platform for Big Data to unlock insights across all of their data benefit from deploying infrastructure components optimized for the extreme demands of Big Data workloads," said Amr Awadallah, chief technology officer, Cloudera. "By designing a server purpose built for Big Data, HP is offering the market a seamless new approach to processing large data sets efficiently and cost-effectively."
HP innovation delivers greater performance and density
The HP ProLiant SL4500 Gen8 server series, with HP Smart Array technology, delivers industry-leading performance with nearly seven times the input/output operations per second (IOPS) of existing architectures. With the smart analytics of HP SmartCache, the system optimizes storage traffic to minimize both response latency and up-front investment.
Current server offerings cannot keep pace with the rapidly growing storage and server demands of Big Data, forcing IT leaders to acquire additional expensive data center space. The new HP ProLiant SL4500 server series solves this problem by delivering industry-leading storage density of up to 240 terabytes (TB) in a single 4.3-rack-unit (U) chassis, or 2.16 petabytes (PB) with nine servers in an industry-standard 42-U rack.
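The rack-level figure follows directly from the per-chassis numbers. A quick sanity check of the arithmetic (values taken from the paragraph above; decimal TB-to-PB conversion assumed):

```python
# Sanity-check the storage-density figures quoted for the SL4500 series:
# 240 TB per 4.3U chassis, nine chassis per 42U rack.
TB_PER_CHASSIS = 240
CHASSIS_UNITS = 4.3
CHASSIS_PER_RACK = 9
RACK_UNITS = 42

rack_tb = TB_PER_CHASSIS * CHASSIS_PER_RACK       # total TB in one rack
rack_pb = rack_tb / 1000                          # decimal petabytes
space_used = CHASSIS_PER_RACK * CHASSIS_UNITS     # rack units occupied

print(f"{rack_tb} TB = {rack_pb} PB per rack")    # 2160 TB = 2.16 PB per rack
print(f"{space_used:.1f}U of {RACK_UNITS}U used") # 38.7U of 42U used
```

Note that nine 4.3U chassis consume 38.7U, leaving a few units of headroom in a standard 42U rack for switching and power distribution.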
As a result of this extreme density, clients realize significant cost savings, greater performance and increased efficiency.
Safeguard Big Data, simplify management and support with HP ProLiant Gen8
The latest member of the HP ProLiant Generation 8 (Gen8) family, the HP SL4500 server series is built with HP ProActive Insight Architecture, which embeds intelligence and automation capabilities that allow clients to safeguard Big Data while simplifying management and support.
HP enhances scale-out server portfolio
HP also announced updates to its high-performance computing (HPC) portfolio, enabling clients to maximize the performance benefits of the latest processing technology from Intel and NVIDIA.
The HP ProLiant SL270s Gen8 server offers maximized processor density, with the ability to support up to eight Intel Xeon Phi coprocessors or eight NVIDIA Kepler graphics processing units (GPUs) per server. The HP ProLiant SL270s and SL250s Gen8 servers now support the latest NVIDIA Kepler GPUs and newly announced Intel Xeon Phi coprocessors, enabling clients to select the best accelerator or coprocessor for their specific workloads.
Pricing and availability
The HP ProLiant SL4500 server series in a single node configuration is available immediately worldwide for a starting price of $7,643.
The new HP ProLiant SL270s Gen8 servers will be available next month for a starting price of $6,166. The HP ProLiant SL250s Gen8 servers will be available with NVIDIA Kepler GPUs and Intel Xeon Phi coprocessors early next year for a starting price of $5,659.
HP's premier Europe, Middle East and Africa client event, HP Discover, takes place Dec. 4-6 in Frankfurt, Germany.
HP creates new possibilities for technology to have a meaningful impact on people, businesses, governments and society. The world's largest technology company, HP brings together a portfolio that spans printing, personal computing, software, services and IT infrastructure to solve customer problems.