October 27, 2006
As the commercial world continues to incorporate high performance computing technology into the enterprise, demand is growing for much more sophisticated scalable computing solutions. Virtualization capability and RAS (Reliability, Availability and Serviceability) features are increasingly seen as essential components of all high-end computing. Until recently, these capabilities were offered only in mainframes and big Linux or UNIX boxes. Today, new companies are emerging that incorporate these features into scalable systems built from commodity parts.
Liquid Computing Corporation is one such company. Next Monday, the company plans to announce the general availability of its new scalable computing system, LiquidIQ. With its Interconnect Driven Server (IDS) architecture, LiquidIQ was developed to address the communication bottleneck that afflicts today's scale-out clusters, as well as to offer SMP virtualization. Using HyperTransport technology, the system achieves sustained high-bandwidth, low-latency communications and granular control over system resources.
We recently got the opportunity to talk with Brian Hurley, CEO and co-founder of Liquid Computing, about the significance of their new offering and how he sees LiquidIQ fitting into the high performance and enterprise computing landscapes.
HPCwire: Can you give us a little historical background for Liquid Computing and talk about how that reflects the architectural philosophy of your offering?
Hurley: We started out to focus on the problem of scalable computing; in fact, our tag line is "Scalable Computing without Compromise". When we looked at the state of the computing industry, we quickly determined that the basic problem with computing today is not the computing itself, which has been commoditized; it is all about communications: how processors talk to each other, how they talk to memory, how they talk to I/O. To solve this communications problem, we developed a fundamentally new system architecture, the Interconnect Driven Server (IDS) architecture, which delivers unprecedented performance and unique new capabilities.
We started with a "white sheet of paper" and designed an integrated system, rather than a system of systems, that allowed us to break the bounds of the "metal box" that typically defines the limits of computing systems today. We have a system that supports seamless multi-chassis scaling. Around this, we wrap "telecom DNA" related to scalable system operations, availability and life cycle management.
The LiquidIQ product delivers new capabilities including performance, flexibility, manageability, high availability and low total cost of ownership (TCO).
HPCwire: What is the most important problem that you see the Interconnect Driven Server architecture solving in your target markets in high performance computing?
Hurley: We are solving the problem of sustained mixed-workload performance at scale, and we are doing it with a low cost of ownership.
HPCwire: Do you think HPC users are ready for server virtualization, and if so, what makes you think this is the case?
Hurley: Yes. Virtualization solves long-standing issues associated with system flexibility, operations and life cycle management. We have taken a practical approach to hardware virtualization that simplifies the operation and use of scalable computing. Virtualization removes human effort from reconfigurations, and allows the system to be adapted to different applications or user requirements through software commands or automatic policy.
LiquidIQ virtualizes all system resources, including processors, memory, I/O and communications. This virtualization of system resources offers significant manageability and flexibility benefits without performance degradation.
Virtualization allows the system to adopt different "personalities" for different users or applications, which means that the physical manifestation of the product from the application or user viewpoint can be changed with a simple software command without manual reconfiguration. As an example, new processors or memory can be assigned to an application via a simple software command.
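To make that reconfiguration model concrete, here is a minimal Python sketch of what command- or policy-driven resource assignment could look like. It is purely illustrative: the class names, "commands" and figures are hypothetical and are not Liquid Computing's actual interface.

```python
# Hypothetical sketch of software-driven resource assignment, not the actual
# LiquidIQ command set. Models growing an application partition from a shared
# pool of virtualized sockets and memory with a single call.

from dataclasses import dataclass

@dataclass
class ResourcePool:
    free_sockets: int
    free_memory_gb: int

@dataclass
class Partition:
    name: str
    sockets: int = 0
    memory_gb: int = 0

def assign(pool: ResourcePool, part: Partition, sockets: int, memory_gb: int) -> None:
    """Grow a partition from the shared pool. A real system would also
    reprogram the interconnect so the new resources appear local to the
    partition; this sketch only does the bookkeeping."""
    if sockets > pool.free_sockets or memory_gb > pool.free_memory_gb:
        raise RuntimeError("requested resources exceed the free pool")
    pool.free_sockets -= sockets
    pool.free_memory_gb -= memory_gb
    part.sockets += sockets
    part.memory_gb += memory_gb

# One "software command" adds four sockets and 128 GB to an application partition.
pool = ResourcePool(free_sockets=16, free_memory_gb=512)
crash_sim = Partition("crash_sim")
assign(pool, crash_sim, sockets=4, memory_gb=128)
print(crash_sim, pool)
```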
Adaptive and high performance capabilities allow a LiquidIQ system owner to do more with less. Our live product demos inevitably leave people with their jaws on the floor.
HPCwire: What is the maximum SMP scalability of your current offering?
Hurley: LiquidIQ can support up to 960 processor sockets today. The IDS architecture scales to a virtually unlimited number of sockets. Today, the 960 processor sockets can be dynamically configured in multiple hard partitions, each up to 8-way SMP. In 2007, LiquidIQ will support up to 32-way SMP.
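Some quick arithmetic on those figures, assuming uniform partitions that consume every socket (an illustrative simplification rather than a product constraint):

```python
# Partitioning arithmetic for the socket counts quoted above; uniform,
# fully allocated partitions are assumed purely for illustration.

TOTAL_SOCKETS = 960

for smp_width in (8, 32):  # 8-way SMP today, 32-way planned for 2007
    print(f"{smp_width}-way partitions: up to {TOTAL_SOCKETS // smp_width}")

# 8-way partitions:  up to 120
# 32-way partitions: up to 30
```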
HPCwire: What were some of the basic enabling technologies that allowed you to build such a system today?
Hurley: When the telecom bubble burst, it spun off many technologies and services into the general market that had previously been the domain of the large incumbent telecom and computing vendors. This included contract manufacturing capabilities, expert staff and some interesting technology components. All that, combined with the availability of Linux, open source middleware and high performance commodity processors, allowed us to build a system that would have been impossible for a small company to build just three years ago.
HPCwire: How will the IDS architecture scale as quad- and octa-core processors emerge over the next two to three years?
Hurley: As multi-core processors roll out, we expect the I/O demand from processors interacting with other system resources to increase. Our system chassis has 3X Moore's law built into it. We have verified that our system chassis can support upwards of 100 gigabytes per second of interconnect bandwidth, ready to be exploited by next-generation chipsets and communications interfaces. Also of note is that our IDS architecture is processor-agnostic and is built to evolve with processing, memory, broadband and communications technologies.
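As a rough illustration of why that headroom matters: if a fixed 100 GB/s of chassis interconnect bandwidth is shared evenly across the sockets in a chassis, the share available to each core shrinks every time core counts double, unless the interconnect grows with them. The socket count below is an assumed figure for illustration, not a LiquidIQ specification.

```python
# Hypothetical illustration: per-core share of a fixed 100 GB/s chassis
# interconnect as cores per socket increase. The socket count is assumed,
# not a LiquidIQ specification.

CHASSIS_BANDWIDTH_GBPS = 100.0
SOCKETS_PER_CHASSIS = 40  # assumed for illustration

for cores_per_socket in (1, 2, 4, 8):
    per_core = CHASSIS_BANDWIDTH_GBPS / (SOCKETS_PER_CHASSIS * cores_per_socket)
    print(f"{cores_per_socket} core(s)/socket -> {per_core:.2f} GB/s per core")
```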
HPCwire: What remains the most challenging bottleneck in high performance systems today?
Hurley: The "metal box" level bottlenecks have traditionally been the challenge. The scope of a system's performance, manageability and cost attributes has been defined by the physical box around the processors which constrains scalability. LiquidIQ has solved those bottlenecks. The "metal box" is not a limit to us.
The last true frontier remains application development methodologies and associated tools. Programming and debugging for scalable computing applications is still extraordinarily complex and is a bottleneck to end-user productivity. In this context, LiquidIQ provides new capabilities such as resource virtualization and high-bandwidth, low-latency communications, which allow the system to be adapted to application requirements dynamically, rather than forcing the user to adapt to the system. For example, memory-intensive applications can be accommodated as easily as communications- or I/O-intensive applications. LiquidIQ also provides support for emerging GASNet-based programming languages such as UPC.
HPCwire: Would you like to comment on how the Liquid Computing approach differs from that of other companies -- Fabric7 and now PANTA Systems -- who are offering servers with a similar architecture?
Hurley: We don't believe either Fabric7 or PANTA Systems is delivering a product with a similar architecture. We built LiquidIQ as a dynamically adaptive and reconfigurable system that scales effortlessly. Our system is a single seamless system, versus a "system of systems." This means that management, control and sustained performance throughput are maintained as the system expands. Other vendors typically require professional services, manual reconfiguration and the addition of third-party switches and adjuncts to expand beyond the bounds of the metal chassis.
The underlying performance of the LiquidIQ system and associated virtualization capabilities are unique in the industry.
HPCwire: Can you talk about some of the challenges of a small company bringing a high-end system to market?
Hurley: We have a very experienced team that has "been there and done that" in telecom, software and computing. Our biggest challenge has been to balance the demands of an ever-increasing list of customers interested in our product against our available resources. Our next challenge is scaling the company to meet demand.
HPCwire: Do you have aspirations to eventually move beyond the high performance enterprise and technical computing markets?
Hurley: Our product is addressing the fundamental problems associated with scalable computing. Today, scalable computing as a market spans more than just the traditional HPC market. It includes applications in ASP, SaaS, IT outsourcing and telecommunications. These applications all require tens to hundreds of processors, storage and I/O components all heavily networked together to deliver service. Our early adopters include customers in the enterprise technical computing, ASP, IT outsourcing and telecommunications OEM markets.
Brian Hurley is a Liquid Computing co-founder and its CEO. He has over 19 years of industry experience in the communications systems business, with a background in the development and market delivery of highly reliable, highly scalable, distributed multi-processing products. At Nortel, Brian blazed the trail to new markets, delivering the first commercial releases of several new wireless, data and optical products. Brian holds an Electrical Engineering degree from Carleton University.