Connecting the Dots: Applications and Grid Infrastructure

By Yaron Haviv, CTO, Voltaire

May 28, 2007

The move to grid or grid-like architectures within the datacenter brings many benefits, such as greater capacity, lower costs, and support for a growing number and variety of applications. This trend has also brought new infrastructure requirements and associated challenges. Connecting and managing hundreds or thousands of servers and networked storage, and incorporating server and storage virtualization technologies, has created communication challenges, network complexity, and a steep learning curve for getting the most out of virtualized infrastructure.

With all of this complexity to deal with, have we figured out how much time and effort it takes to deploy applications over grids?

This article examines how grids built around a service-oriented architecture (SOA) that focuses on business tasks, business flows, and service delivery can significantly shorten the time and effort spent on application deployment and configuration, while delivering the greatest efficiencies. It proposes a datacenter grid model that includes considerations for deployment and provisioning tools for applications, server and storage infrastructure, and high-performance grid fabrics.

The Evolution of Today’s Data Center Challenges

In recent years, IT has been faced with enormous growth in capacities. Digital data is generated at a growing rate and covers every aspect of our lives, from telephones (VoIP, cellular) to media and entertainment, retail, banking and leisure. The amount of data grows exponentially and becomes more complex to process (consider data types such as media, XML and DNA), requiring faster servers and storage with greater capacities.

Because compute and storage capacities needed to grow to address the data influx while budgets remained flat, these trends forced a paradigm shift and led to increased commoditization of the compute and storage infrastructure. Commodity-based server clusters and grids replaced large and expensive mainframes, and clusters of low- to mid-range storage replaced the “Big Irons.” Furthermore, server and storage virtualization technologies were introduced to increase the utilization of the hardware, leading to hardware cost savings.

While this revolution in server and storage architecture saves significant money on hardware and capital expenses, it has also created new and increased software and operations expenses. IT’s next challenge is to increase software efficiency and to reduce the operational costs of grids.

The Key to Reducing Software and Operational Costs? Simplify the Datacenter!

The need for increased capacity at lower cost drove the industry to focus on reducing capital expenses and hardware costs. In the process, the datacenter became more complex and fragmented, driving up operational costs. Unitary systems are being replaced with server farms and clusters; network topologies have become more complicated due to the distributed nature of the new datacenter; and new technologies such as SANs (storage area networks) have brought with them entirely new processes and disciplines (not to mention additional organizational barriers). Server virtualization further fragments the resources, creating more components to manage and undermining some key assumptions about the relationship between a resource and its physical implementation. Provisioning an application environment that consists of several tiers can take weeks and tie up many resources. Even small changes to the environment may require scheduling downtime and serial processes that take days.

To address this complexity, datacenters will need to be built around an SOA, which focuses on business tasks, business flows and service delivery. This new architecture transcends hardware and software. The infrastructure will become adaptive and self-configuring, focusing less on technology silos and manual processes that consume time and resources, and instead zeroing in on how the infrastructure can satisfy service and application objectives. This shift is a prerequisite to constructing a true SOA.

Simplification Through I/O and Fabric Consolidation

A significant barrier to achieving a utility datacenter is the tight relationship that exists between applications and infrastructure. Some applications are more dependent on compute, some on storage, and some rely more heavily on low-latency, inter-processor communication. Each application may drive different hardware requirements or configurations. For example, for a standard Web server, a standard 1U server with two network ports may be good enough, but a database or file server needs more and faster I/O. Without addressing that relationship, datacenters would still need to be provisioned physically or, at best, automation and virtualization tools would be limited to homogeneous silos.

Meanwhile, CPU capacities keep growing. The world is migrating to 64-bit CPU technologies from Intel and AMD, CPUs are faster, and multi-core technology with two or four CPU cores on a single chip is now a reality. New power-saving and cooling technologies allow higher system densities, increasing the overall CPU capacity per server by a factor of 10 or more. This requires equivalent capacity growth in server I/O, storage and network interfaces.

To address these I/O challenges, servers now need 10 Gbps or faster external interfaces that can process and/or virtualize I/O in hardware. Middleware is needed that can take advantage of that hardware and bypass the OS while still providing traditional application and/or storage APIs. Moreover, the communication fabric/network should be more reliable and predictable, and not waste precious application run time on waits or retransmissions. Since network, storage and I/O resources are scarce, it is also important to enable efficient management and to dynamically partition the I/O resources among the right applications and traffic. Much like the CPU partitioning performed by server virtualization technologies, this can also eliminate the need for manual infrastructure and cable provisioning, and shorten application deployment time.
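As a rough illustration of the OS-bypass model, the C sketch below uses the OpenFabrics verbs API (libibverbs) to open an RDMA-capable adapter and register a buffer for direct hardware access. This is a minimal sketch, assuming an InfiniBand-style adapter is present; the buffer size and the choice of the first device are arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter; a real application would select by name. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    printf("opened device: %s\n", ibv_get_device_name(devs[0]));

    /* A protection domain scopes which resources may touch which memory. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register (pin) a buffer so the adapter can DMA into it directly,
       bypassing the OS on the data path. */
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered 4 KB buffer, lkey=0x%x rkey=0x%x\n",
           mr->lkey, mr->rkey);

    /* From here an application would create queue pairs and post RDMA
       reads and writes; cleanup is shown for completeness. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Once memory is registered this way, data moves between application buffers and the wire without per-message kernel involvement, which is what makes the low-latency messaging described above possible.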

In today’s datacenter, much focus is put on incorporating tools for storage and server automation, which can take care of many deployment and maintenance tasks. However, a key element in creating an operationally efficient datacenter is the use of a unified fabric (for all cluster, I/O and network traffic) that can be partitioned, along with its attached I/O, to accommodate dynamic application needs. Fabric consolidation is achieved using multi-service switches, with servers connected to the fabric switch through a multi-channel adapter (such as an InfiniBand adapter). This configuration can provide multiple virtual NICs, fast access to storage, and low-latency messaging and RDMA for application scale-out and clustering. The switches can form multiple virtual and isolated LAN, SAN and cluster networks on demand, and attach transparently to external Ethernet or Fibre Channel networks.

Using InfiniBand fabric technology, a single adapter and single link can emulate multiple adapters and network ports. InfiniBand has built-in mechanisms to ensure isolation between virtual I/O elements and between different logical networks. 10 GbE can also be used as a lower-end alternative to InfiniBand; however, today, 10 GbE pricing remains high and the technology is comparatively immature. Furthermore, InfiniBand has many unique capabilities that enable consolidation and are not found in 10 GbE.
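One of the isolation mechanisms InfiniBand provides for logical networks is the partition key (P_Key): the subnet manager assigns keys to ports, and the fabric carries traffic only between endpoints that share a key. Below is a minimal sketch, again using libibverbs, of how a host can list the partitions its port has been assigned (the use of port 1 is an assumption):

#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0)
        return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);

    /* How many P_Key table entries does port 1 expose? */
    struct ibv_port_attr attr;
    ibv_query_port(ctx, 1, &attr);

    /* Each non-zero entry is one logical network this port may join;
       the subnet manager programs the table, enforcing isolation. */
    for (int i = 0; i < attr.pkey_tbl_len; i++) {
        uint16_t pkey;
        if (ibv_query_pkey(ctx, 1, i, &pkey) == 0 && pkey != 0)
            printf("port 1, pkey[%d] = 0x%04x\n", i, pkey);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Because the partition table is programmed centrally by the subnet manager rather than by the hosts, a switch-resident manager can carve the fabric into isolated LAN, SAN and cluster networks without touching any server.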

Service-Oriented Infrastructure (SOI) Management

While virtualization of servers, storage and fabrics is a key element in achieving a flexible and more efficient datacenter, it is also critical to develop a new approach to datacenter resource management. Instead of relying on manual procedures by which administrators create and configure the infrastructure, infrastructure resources should be dynamically created and configured based on application requirements. This is achieved through the use of SOI management tools.

Fabric provisioning and SOI management tools, such as Voltaire GridVision Enterprise software, depend on the use of dynamic and unified datacenter fabrics, which have loose relationships between resources and can be programmed to create whatever topology or logical links are needed at a given time or to satisfy a given application load.

These tools are complementary to many of the virtualization and automation/provisioning tools in the market today because they focus on the infrastructure and connectivity aspects of virtual datacenter resources. They can integrate with server virtualization products (such as Xen and VMware) and typically use an open and extensible API for optional integration with server and storage provisioning tools. Orchestration and scheduling tools can use the SOI Web services API and object models to provision infrastructure as needed, collect health and performance information, and be notified of infrastructure events and changes.
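To make that interaction concrete, the sketch below shows how an orchestration tool might submit a provisioning request over such a Web services API. It is an illustration only: the host name, URL path and XML payload are invented for this example and do not represent the actual GridVision interface.

/* Hypothetical client for an SOI provisioning API: POST an XML request
 * asking for a virtual network and a virtual storage connection. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    const char *body =
        "<provisionRequest>"
        "<virtualNetwork name='web-tier' bandwidth='2Gbps'/>"
        "<virtualHBA name='db-tier-storage' target='san-a'/>"
        "</provisionRequest>";

    /* Resolve the (hypothetical) SOI manager endpoint. */
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo("soi-manager.example.com", "8080", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    /* POST the provisioning request over plain HTTP. */
    char req[1024];
    snprintf(req, sizeof(req),
             "POST /soi/v1/provision HTTP/1.0\r\n"
             "Host: soi-manager.example.com\r\n"
             "Content-Type: text/xml\r\n"
             "Content-Length: %zu\r\n\r\n%s",
             strlen(body), body);
    write(fd, req, strlen(req));

    /* Read back the acknowledgement. */
    char resp[2048];
    ssize_t n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) { resp[n] = '\0'; puts(resp); }

    close(fd);
    freeaddrinfo(res);
    return 0;
}

A real integration would use the vendor’s published object model and handle authentication, error reporting and asynchronous event notification, but the basic pattern is the same: the scheduler describes what the application needs, and the SOI layer reconfigures the fabric to match.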

With an SOI, equipment can be wired once, eliminating the need for physical intervention thereafter. Complex application deployment procedures that cross organizational boundaries can be automated and completed in a few minutes rather than days or weeks; they are less error-prone and consume fewer resources. Furthermore, infrastructure can be built to order to meet application-specific requirements with the right balance of CPU, network and storage resources. Ultimately, this makes applications on a grid more efficient and eliminates the real bottlenecks.
