November 04, 2009
New Liquid Elements provides fast path for OEMs and resellers to enter the multi-billion dollar unified computing market
STAMFORD, Conn., Nov. 4 -- Liquid Computing, the leader in unified computing infrastructure for today's dynamic datacenter, today announced that it is working with Intel to launch its new Liquid Elements unified computing system on Intel Server Systems with Intel Xeon processors. Liquid Computing will work with VAR channel partners and customers to begin delivering the solution to market at the end of this quarter. The combined solution will be demonstrated at the upcoming SC09 conference, Nov. 14-20 in Portland, Ore.
Until now, Liquid Computing has driven the unified computing market with its LiquidIQ product, a fully integrated hardware and software-based system that is proven to dramatically reduce the cost of managing IT infrastructure through software control of servers, networking and storage from a common interface.
With the introduction of Liquid Elements, Liquid Computing becomes the first to deliver the power of unified computing across standards-based datacenter infrastructure from leading brands. Initially, Liquid Elements will support Intel Server System SR1680MV rack servers and NetApp storage devices. The result is a complete "datacenter in a box" that combines the cost savings and management flexibility of unified computing from Liquid Computing with the breakthrough processing speed, memory and density of servers from Intel. The system is also designed to scale cost-effectively from the smallest to the largest implementations.
"We believe Liquid Computing's solution will help accelerate the adoption of unified computing that applies an open, standards-based approach," said David Brown, general manager of Channel Server Products at Intel. "By addressing the significant issue of escalating cost and complexity of infrastructure management, Liquid Elements on Intel-based servers provides a powerful solution for the server OEMs and channel to create dynamic and highly efficient datacenters for their customers."
"With Liquid Computing's open approach to unified computing, enterprises can leverage their existing datacenter vendor relationships and investments," said Zeus Kerravala, senior vice president, Yankee Group. "Since it's based on common datacenter building blocks from industry leaders like Intel and NetApp, and supports both bare metal and virtualized environments, Liquid Elements customers gain the benefits of unified computing without incurring cost, risk and delays typically associated with migration to a new platform."
"Liquid Elements will add strength to our datacenter solutions practice," said Joe Vaught, executive vice president and COO of PCPC Direct Ltd., a Houston-based solutions provider. "We're excited by Liquid Computing's vision, as customers are looking for innovative and open solutions that can significantly reduce their operating costs. We are also impressed with the relationships Liquid Computing has developed with leaders like Intel and Microsoft to incorporate their products and platforms into complete integrated systems. We look forward to a strong and prosperous partnership."
"As recognized leaders in server and storage technologies, Intel and NetApp both share our belief that open standards drive competition, eliminate vendor lock-in, and empower customers to choose," said Vikram Desai, president and CEO of Liquid Computing. "Liquid Elements embraces this approach, and thereby takes the benefits of unified computing and comprehensive datacenter automation mainstream."
Liquid Computing will also join Intel's Enabled Server Acceleration Alliance (Intel ESAA). Intel ESAA provides resellers and direct OEMs access to a wide range of pre-validated configuration guides ("recipes") jointly developed by Intel and vendors, as well as marketing and technical resources. It also helps members build relationships, reduce engineering costs, and compete with products based on the proven reliability of Intel server and workstation products.
Liquid Elements Features
Liquid Elements for Intel Server Systems combines the benefits of Liquid Computing's management and control with the power and density of the Intel Server System (SR1680MV) to provide:
Liquid Elements for Intel Server Systems Components
The solution will be available at the end of this quarter. For more information, visit http://www.liquidcomputing.com/products/liquid-elements.php or contact Liquid Computing at firstname.lastname@example.org.
About Liquid Computing, Inc.
Liquid Computing is a leader in unified computing infrastructure for the dynamic datacenter. The company's core product, LiquidIQ, is a complete "datacenter in a chassis" that drives down the time and costs of managing IT infrastructure through unified software-based control of servers, storage and networking resources. Recently named a Visionary in Gartner's 2009 Magic Quadrant for Blade Servers, Liquid Computing has offices in Ottawa, Canada, and Stamford, Conn. The company has customers throughout North America and has established relationships with global industry leaders including Intel, NetApp, Microsoft, VMware, BMC, Oracle, Red Hat and AMD. For more information, visit http://www.liquidcomputing.com.
Source: Liquid Computing, Inc.