Your Datacenter Makeover: Have Someone Else Do It

By Dennis Barker

August 17, 2008

Running for the office of Springfield sanitation commissioner, Homer Simpson promised that his crews wouldn’t just pick up the trash, they’d wash cars and provide any other personal domestic service citizens wanted. His successful campaign slogan: “Can’t someone else do it?”

Maybe you can imagine an IT manager reacting with that question when he’s told, yet again, that he has to turn the datacenter from a cost sink into a strategic business advantage. Rebuilding systems to meet the demands of specific business goals or service levels and driving down real-estate and energy costs sounds like a priority, but it can end up low on the to-do list, always a few notches under “keep things running.” Rethinking and overhauling infrastructure is the kind of massive task where having someone else do it makes the most sense, especially with the cost of building, powering, and cooling datacenters rising at record rates. You want to get it right.

“A lot of firms are optimizing their datacenters in the wrong way,” says Tony Bishop, co-founder and CEO of Adaptivity, a company that helps organizations bring IT strategies, architectures, and operating models in line with business goals. “They’re designing from the bottom up, then trying to manage it. They look at the facility, the floor space, the power lines, then try to guesstimate what the growth will be. Then they organize the floor based on grouping resources by type: servers here, storage here, firewalls over there. They’re organizing by what works best on the floor, not what works best for the business. Instead, you’ve got to be thinking that sales applications need market data and tools to be near each other because of the physics of data proximity. You’ve got to be business-process-driven rather than resource-driven.”

Formerly chief architect in Wachovia’s Corporate Investment Banking Technology Group, where he led a team that built a utility-computing infrastructure, Bishop has walked the walk of designing a datacenter around business objectives.
 
Adaptivity and partner DataSynapse propose to be that “someone else” who will come in and figure out how to turn your IT systems into a resource no one even notices because it handles each transaction as if it were the most important transaction in the pipe. (The two have worked together before, and Adaptivity uses DataSynapse technology in its solutions, but earlier this month they announced an official joint services agreement.) They have a shared concept of what the ideal core of IT services ought to be, and they call it the Next Generation Datacenter (NGDC). They say that with their methodologies, software technology, and combined know-how, they can transform a datacenter into one that’s flexible enough to meet changing business demands while reducing complexity, using fewer resources, and not wrecking the budget.

“The primary pain we see is that datacenters are spending a growing percentage of the IT budget, but it’s inefficient spending due to poor capacity planning,” says Joe Schwartz, chief marketing officer at DataSynapse. A McKinsey study found that average server utilization among datacenters studied was a sad 6 percent, and facility utilization was 56 percent. Naturally, this results in all sorts of unnecessary complexity, but the biggest drag it causes is on profitability.

The NGDC is intended to avoid such inefficiencies by being built from scratch to meet business demands: bring the experts in to study those demands, then build accordingly. One way to think of the NGDC concept is datacenter as a service.

“We tailor the infrastructure so that it operates as a service utility,” Bishop says. “We develop a demand-based model, where services — processing, storage, and so on — are allocated as needed and shut down when no longer needed. With our approach, you end up with a real-time, demand-based utility. That’s the most efficient way to operate. Our methodologies, the technologies we implement, DataSynapse’s application management technology, it’s all aimed at letting customers deploy a utility infrastructure.”
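Bishop’s demand-based utility can be pictured in a few lines of code. The following is a minimal sketch, assuming a hypothetical ServicePool class and made-up capacity numbers; it is not Adaptivity’s or DataSynapse’s software, just an illustration of allocating capacity when demand rises and shutting it down when it is no longer needed.

```python
# Hypothetical sketch of a demand-based service utility: capacity is
# allocated when demand rises and released when it falls.
# Names and numbers are illustrative, not any vendor's actual API.

from dataclasses import dataclass


@dataclass
class ServicePool:
    name: str                 # e.g. "processing" or "storage"
    unit_capacity: int        # transactions each allocated unit can absorb
    active_units: int = 0

    def rebalance(self, demand: int) -> int:
        """Allocate just enough units for current demand, release the rest."""
        needed = -(-demand // self.unit_capacity)   # ceiling division
        delta = needed - self.active_units
        self.active_units = needed
        return delta          # positive = units started, negative = shut down


# Usage: feed the pool a stream of observed demand levels.
pool = ServicePool(name="processing", unit_capacity=100)
for demand in [250, 900, 900, 120, 0]:
    change = pool.rebalance(demand)
    print(f"demand={demand:4d}  units={pool.active_units:2d}  change={change:+d}")
```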

What’s in the NGDC

“We’ve created methodologies and templates for building an adaptive datacenter,” Bishop says. “Think of it like a franchise. You’re going to have everything you need. Templates, methods, processes, tools to map your business models to infrastructure. The technology to distribute work and dynamically allocate resources. Processing-specific ‘ensembles’ that guarantee low latency and high throughput. Optimized networking services. The components needed to change your infrastructure so it’s more responsive and efficient are here.”

The NGDC architecture incorporates offerings from both companies. One example is DataSynapse’s Dynamic Application Service Management software platform, which delivers and optimizes scalable enterprise-class application services. Adaptivity’s Fit for Purpose Design framework “takes business-driven workloads and resource consumption behaviors and encapsulates them in service execution contracts,” Bishop says. DataSynapse’s orchestration model “takes those policies and makes sure they execute.” Workloads are sent to Processing Execution Destinations (PEDs), “self-contained logical fabrics housed in a single container footprint and interconnected with high-speed fabrics.” This is the hardware side: multicore processors, I/O (10GigE, Fibre Channel, InfiniBand), storage, memory, and networking gear in a cooled container. Legacy apps would be migrated to these PEDs. The NGDC reference architecture also includes technology from third parties, including Cisco and VMware.
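The contract-to-destination flow Bishop describes can be sketched roughly as data plus a matching rule. The class names, fields, and the simple first-fit placement below are assumptions for illustration only; they are not the NGDC reference architecture or DataSynapse’s actual interfaces.

```python
# Illustrative model only: encodes a workload's service execution contract
# and picks a Processing Execution Destination (PED) that can honor it.
# Field names and the matching rule are assumptions, not the NGDC spec.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ExecutionContract:
    workload: str
    max_latency_ms: float     # latency ceiling promised to the business
    min_throughput_tps: int   # required transactions per second


@dataclass
class PED:
    name: str
    latency_ms: float         # typical latency of this fabric
    throughput_tps: int       # capacity of the container footprint
    interconnect: str         # e.g. "10GigE", "Fibre Channel", "InfiniBand"


def place(contract: ExecutionContract, peds: List[PED]) -> Optional[PED]:
    """Return the first PED whose characteristics satisfy the contract."""
    for ped in peds:
        if (ped.latency_ms <= contract.max_latency_ms
                and ped.throughput_tps >= contract.min_throughput_tps):
            return ped
    return None               # no destination can honor the contract; escalate


peds = [
    PED("ped-lowlat", latency_ms=2, throughput_tps=5_000, interconnect="InfiniBand"),
    PED("ped-bulk", latency_ms=20, throughput_tps=50_000, interconnect="10GigE"),
]
print(place(ExecutionContract("market-data-feed", 5, 4_000), peds).name)
```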

“We’re essentially offering a datacenter in a box,” Schwartz says. “It’s geared toward your business applications, optimized for your specific types of applications and workloads. You can plug-and-play PEDs based on your workload and application mix.”

Adaptivity and DataSynapse offer a 10-part program to help organizations implement an NGDC. It’s aimed at defining services and quality levels and the roadmap for getting there. Through workshops and iterative sessions, “we’ll help design the right infrastructure,” Schwartz says. “We take a look at your applications today, classify them, study workloads, and then help you design your datacenter so you can move apps to available processing units when they need to execute. Ultimately, you can manage your datacenter as a virtual resource pool that you allocate to applications as needed.”
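The “virtual resource pool” idea Schwartz describes can be illustrated with a toy allocator. The application categories, core counts, and helper functions below are hypothetical; the point is only that capacity is drawn from one shared pool when an application needs to execute and returned when it finishes.

```python
# Sketch of a virtual resource pool: classify an application by its workload
# profile, then draw capacity from a shared pool when it needs to run.
# Categories and numbers are made up for illustration.

POOL_CORES = 256   # total cores treated as one shared, virtual pool
allocated = {}     # app name -> cores currently held

PROFILES = {       # crude classification of application workload types
    "batch-risk": 64,
    "market-data": 16,
    "reporting": 8,
}


def acquire(app: str) -> bool:
    """Grant the app its profiled share if the pool has room."""
    need = PROFILES[app]
    if sum(allocated.values()) + need <= POOL_CORES:
        allocated[app] = need
        return True
    return False


def release(app: str) -> None:
    """Return the app's cores to the pool once it finishes."""
    allocated.pop(app, None)


acquire("market-data")
acquire("batch-risk")
print(allocated, "free:", POOL_CORES - sum(allocated.values()))
release("batch-risk")
print(allocated, "free:", POOL_CORES - sum(allocated.values()))
```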

Adaptivity and DataSynapse have most of their customers in the financial world, where they’ve been delivering the benefits promised by the NGDC. Bishop tells of one Adaptivity customer “where ripping and replacing hardware translated to one-third the footprint, and they were able to do 50 times that volume in that smaller datacenter. We were able to identify four types of processing patterns, and the workloads associated with those. Once we looked top-down and understood the demand, how things were consumed, we were able to use DataSynapse technology to dynamically match workloads to resources.” In another case, by re-orchestrating workloads, they were able to reduce three racks to one small cluster. “When you get rid of three racks of big Sun SMP machines, that’s a good chunk of money you can save,” Bishop says.

If phrases like “business process alignment” and “business value” seem like things that happen in some misty, far-off time, what Adaptivity and DataSynapse are promising with their approach can be put very plainly. “Lower cost of transactions is our goal,” Bishop says.

“Volumes of content are exploding. Equipment is getting increasingly dense. People are trying to transfer labor into automation — productivity comes from automation, but that automation has to go somewhere. That somewhere is the datacenter. As a result, datacenters are growing while their costs are going to double over the next five to 10 years, and that ends up consuming the entire IT budget. How can you innovate if you can’t get this strategy under control?”

Maybe have someone else do it?
