Fetching Platform: A Tale of Big Data and Small IT

By Nicole Hemsoth

May 11, 2010

As the SaaS market grows in size, scope, and complexity, one clear emerging trend is the “small company, big data” paradigm. Enabled by the seemingly inexhaustible resources of on-demand clouds of all shapes and sizes (private, hybrid, community, public, etc.), the biggest challenge no longer lies in simply affording the resources to compete; it lies in having the creativity to build a SaaS product that is a step above the rest in delivery, technology, and good old-fashioned innovation.

What SaaS enterprises in that “big data, small IT department” bind need is a solid example of how reliability, scalability, performance, and cost-effectiveness can be achieved in the cloud. No one is saying the road is free of bumps, but if this week’s news about Platform ISF at a small company with big data issues is any sign, there are more clouds on the horizon than we might have thought.

For its relatively small size, artificial intelligence-based data extraction firm Fetch Technologies has some major mission-critical data demands that require instant scalability with maximum performance and reliability. Fetch’s clients include Fortune 500 companies, business intelligence firms, and even background-checking services, all of whom rely on Fetch’s deep-web data extraction, which can be turned around instantly for integration into analytics and business intelligence software.

And So, The Story Begins…

Once upon a time, there was a relatively small firm with massive mission-critical data demands that required flexibility, scalability, and flawless performance. This company (Fetch) scanned the vast landscape that was teeming with options to provide these elements, but to no avail. Until one day, the company’s IT leaders noticed Platform Computing as it came along, bearing its ISF offering. It might as well have been riding a gleaming white steed as far as Fetch’s Director of IT, Rich Parker, is concerned.

The Fetch–Platform ISF partnership is one of the better recent practical examples of a large-scale enterprise cloud deployment for data of this magnitude. And the good news is, it is already proving successful, setting the stage for similar marriages between big data and private clouds that can burst out automatically to meet peak data needs.

But before we get ahead of ourselves and move on to the happy ending, we should note that a happy ending for this sort of change in business model is not as simple as pushing a button. Fetch Technologies spent about two years working up to full deployment and devoted a great deal of time to investigating its options. As Parker discusses in his interview with HPC in the Cloud about the experience with Platform ISF, companies that do not fully prepare themselves at all levels for the shift from traditional software to completely cloud-based SaaS (and, for that matter, fully cloud-based business operations at all levels, which is another initiative at Fetch) are setting themselves up for a rocky transition. Fetch represents a solid use case precisely because the company took great care to train and prepare everyone for the new paradigm; that preparation is why the effort succeeded.

Small Company, Big Data

Fetch Technologies is not a large corporation with a vast IT department. Like many other enterprises of roughly similar size peddling large-scale SaaS offerings, Fetch needed a solution that was not only highly scalable (with pricing that matched the scale required on any given day) but also free of significant up-front costs. It goes without saying that the service must be completely reliable, since the nature of Fetch’s SaaS operation requires infallibility and instant deliverables for its wide range of customers.

Make no mistake about what Fetch does; it involves some serious compute mojo and data crunching. As Mike Horowitz, chief product officer at Fetch, notes, “this is absolutely computationally-intensive but we do it in a very efficient manner; you can imagine doing this at Web-scale, which is our goal — it requires a huge amount of compute power. I have been in IT for 25 years and I have never seen any other application run a quad-core quad-CPU process for 20 hours at 100 percent CPU usage.”

Before moving into the cloud with Platform ISF, Fetch relied on a manual provisioning process whenever it needed to boost capacity for its SaaS offering, a process that often took up to an hour of IT staff time per server. The difference has been dramatic: Fetch can now provision groups of servers automatically, which means much faster results at far lower labor cost.
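To make the contrast concrete, here is a minimal Python sketch of group-based provisioning. The `ProvisioningClient` class and its method are hypothetical stand-ins invented for illustration; they are not Platform ISF's actual interface.

```python
# Illustrative sketch only: "ProvisioningClient" and its method are
# hypothetical stand-ins, not Platform ISF's actual API.
from dataclasses import dataclass


@dataclass
class ServerSpec:
    cpus: int       # virtual CPUs per server
    memory_gb: int  # RAM per server
    image: str      # OS/application image to boot


class ProvisioningClient:
    """Pretend client that provisions a whole group of servers in one call."""

    def provision_group(self, name: str, spec: ServerSpec, count: int) -> list[str]:
        # In a real cloud manager this single request would be placed by a
        # scheduler across the shared resource pool.
        return [f"{name}-{i:03d}" for i in range(count)]


if __name__ == "__main__":
    client = ProvisioningClient()
    # One call replaces what used to be roughly an hour of manual work per server.
    servers = client.provision_group(
        "extraction-workers",
        ServerSpec(cpus=4, memory_gb=16, image="fetch-agent"),
        count=10,
    )
    print(f"Provisioned {len(servers)} servers, e.g. {servers[:3]}")
```

The point of the sketch is simply that the unit of work becomes a group request handled by the cloud manager rather than a per-server manual task.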

Although no percentages were given, the cost savings for Fetch run into the many thousands of dollars, says Rich Parker. Interestingly, this is not only because the company has shifted its main business to the cloud, but also because it is leveraging the cloud for many other internal operations and processes in an effort to realize what Parker calls “the goal of 100 percent virtualization.”

Full Company-Wide Virtualization

As Platform notes of its ISF offering in the Fetch case, ISF “supports heterogeneous virtual resources so users don’t need to know what hypervisor or hardware is running their server. This multi-visor support also reduces training since resource users only need to learn the easy Platform interface. Additionally, this gives the IT department the flexibility to select the best hypervisor for each particular application based on provisioning policies.” This means Fetch can take advantage of the cloud’s ease of use and extend it to the whole organization.
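As a rough illustration of what policy-based, multi-hypervisor placement can look like, the sketch below routes a request to a hypervisor chosen from a policy table. The driver names and policy entries are invented for the example and do not describe Platform's implementation.

```python
# Hypothetical sketch of policy-based hypervisor selection; the drivers and
# policy table are invented for illustration, not Platform ISF code.
HYPERVISOR_DRIVERS = {
    "vmware": lambda spec: f"vSphere VM with {spec['cpus']} vCPUs",
    "kvm": lambda spec: f"KVM guest with {spec['cpus']} vCPUs",
}

# Policy: which hypervisor backs which class of application.
PLACEMENT_POLICY = {
    "production-extraction": "vmware",
    "dev-test": "kvm",
}


def provision(app_class: str, spec: dict) -> str:
    """Pick the hypervisor from policy; the requester never needs to know which."""
    hypervisor = PLACEMENT_POLICY.get(app_class, "kvm")
    return HYPERVISOR_DRIVERS[hypervisor](spec)


print(provision("production-extraction", {"cpus": 4}))
```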

Fetch Technologies is doing something genuinely interesting and innovative: not only is it an early adopter of the Platform product (there are others like it from other vendors, each with different features that might appeal to different enterprises or organizations), it is using the product in a unique way. Fetch is aiming for complete virtualization, and that includes virtualizing every department at the small company. According to Rich Parker:

We started this virtual private cloud infrastructure two years ago with the idea to turn it into a networked pool of resources (CPU, storage, memory, etc.) that would be flexible enough to allow us to reconfigure servers whenever we needed to. The goal was 100 percent virtualization of all servers; none in the office. We’re using the full capability of VMware infrastructure, so we have this very reliable, flexible infrastructure; then we were looking for an application to put on top of it to allow us to make the best use of it.

Overall, I call this [extended virtualization] distributed IT: we push IT administration out to everyone; we’re rolling out Platform not only to QA and development but potentially to product managers and everyone in the company. Because of Platform, they don’t need to know what servers we’re running on, what physical resources, they don’t even need to know what datacenter the server’s in, because Platform abstracts all that backend IT infrastructure so end users are more efficient in getting resources when and as they need them.

Leveraging the Public Cloud

It is useful for firms to have the ability to leverage the public cloud as needed. Discussing private clouds in the model Fetch is using, Parker explained that “private cloud monitoring of resources and capacity planning are very critical. We need to know when we need to add more resources and how long it will take to add them. For example, we need to add more CPU and memory; that could take us 2 weeks to do. We monitor like crazy; we have over 200 monitors.” However, as Parker noted, having the capability to scale out to EC2, even if that never happens, is one of the attractive features of a cloud offering like the one Fetch chose from Platform.

It is not difficult to see how the ability to leverage a public cloud benefits Fetch’s business model, particularly since it is nearly impossible to determine maximum capacity when customer needs can change daily. While security and other concerns keep it a secondary part of the IT model, Fetch can enable a plugin to the EC2 public cloud. The company is not yet taking advantage of this, but it sees that as its customer base grows it will always have spare capacity to scale out to meet demand, a fact of interest to other SaaS vendors watching cloud news closely. As Parker stated, “It’s nice to know it’s there as we evolve, because we’re delivering mission-critical data for organizations, so the idea of having a private cloud is important; it becomes important to secure that channel. But as we grow we may have applications where use of the public cloud makes sense, so it’s important to have that flexibility built in.”
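The bursting pattern Parker describes can be sketched roughly as follows: watch private-cloud utilization and, only when a threshold is crossed, start extra instances in EC2. The threshold, AMI ID, and instance type below are placeholders, and boto3 is used purely as a present-day illustration; the article does not say how Fetch's EC2 plugin is implemented.

```python
# Illustrative cloud-bursting sketch; the threshold, AMI ID, and instance type
# are placeholders, and boto3 stands in for whatever the EC2 plugin actually uses.
import boto3


def private_cloud_utilization() -> float:
    """Stand-in for the ~200 monitors: return current CPU utilization (0.0-1.0)."""
    return 0.93  # pretend the private pool is nearly full


def burst_to_ec2(instances_needed: int) -> list[str]:
    """Start extra capacity in the public cloud when the private pool is exhausted."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-00000000000000000",  # placeholder worker image
        InstanceType="c5.xlarge",         # placeholder instance type
        MinCount=instances_needed,
        MaxCount=instances_needed,
    )
    return [i["InstanceId"] for i in resp["Instances"]]


if __name__ == "__main__":
    if private_cloud_utilization() > 0.90:  # capacity-planning threshold
        print("Private cloud near capacity; bursting to EC2:", burst_to_ec2(5))
```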

What the Early Adopters Are Noticing…

Martin Harris, director of product management at Platform, weighed in on how the Fetch case reflects what Platform is seeing on a larger scale. Harris said Fetch and others show how the cloud is creating a new competitive playground for software vendors, allowing them to differentiate themselves through greater flexibility and better responsiveness at lower cost. He also noted that several other customers are currently evaluating Platform ISF in pilots, including firms in semiconductors, oil and gas, and the risk management side of financial services. The Fetch case study should prove a useful starting point as these sectors evaluate the possibilities and performance of cloud.

The cloud has leveled the playing field in many senses, allowing the small to compete with the giants, at least in terms of SaaS. The questions are becoming more complex with time, and now include whether virtualization will reshape an entire organization or remain confined to a core competency.
