Fetching Platform: A Tale of Big Data and Small IT

By Nicole Hemsoth

May 11, 2010

As the SaaS market grows in size, scope, and complexity, one clear emerging trend is the “small company, big data” paradigm. Enabled by the seemingly inexhaustible resources of on-demand cloud provisioning in all shapes and sizes (private, hybrid, community, public, etc.), the biggest challenge no longer lies in simply affording the resources to compete; it lies in having the creativity to build a SaaS product that is one step above the rest in terms of delivery, technology, and good old-fashioned innovation.

What SaaS enterprises in that “big data, small IT department” bind need is a valid example that demonstrates how reliability, scalability, performance, and cost-effectiveness are achieved in the cloud. While no one is saying the road is free of bumps, if this week’s news about Platform ISF at a small company with big data issues is any sign, there are more clouds on the horizon than we might have thought.

For its relatively small size, artificial intelligence-based data extraction firm Fetch Technologies has some major mission-critical data demands that require instant scalability with maximum performance and reliability. Fetch’s clients include Fortune 500 companies, business intelligence firms, and even background-checking services, all of whom want deep web data extracted and instantly turned around for integration into analytics and business intelligence software.

And So, The Story Begins…

Once upon a time, there was a relatively small firm with massive mission-critical data demands that required flexibility, scalability, and flawless performance. This company (Fetch) scanned the vast landscape that was teeming with options to provide these elements, but to no avail. Until one day, the company’s IT leaders noticed Platform Computing as it came along, bearing its ISF offering. It might as well have been riding a gleaming white steed as far as Fetch’s Director of IT, Rich Parker, is concerned.

The Fetch–Platform ISF partnership is one of the better practical examples in recent weeks of a large-scale enterprise cloud deployment for data of this magnitude. And the good news is, it is already proving successful, setting the stage for similar new marriages between big data and private clouds that can burst out automatically to meet peak data needs.

But before we get ahead of ourselves and skip to the happy ending, we should note that this sort of business-model transformation is not as simple as pushing a button. Fetch Technologies spent about two years working up to full deployment and devoted a great deal of time to investigating its options. As Parker discusses in his interview with HPC in the Cloud about the Platform ISF experience, companies that do not fully prepare for the shift from traditional software to completely cloud-based SaaS (and, for that matter, to fully cloud-based business operations, another initiative at Fetch) are setting themselves up for a rocky transition. Fetch represents a solid use case precisely because the company took great care to train and prepare everyone for the new paradigm; it is that preparation that made the effort succeed.

Small Company, Big Data

Fetch Technologies is not a large corporation with a vast IT department; like many other enterprises of roughly similar size peddling large-scale SaaS offerings, Fetch needed a solution that was not only infinitely scalable (with pricing matched to the scale required on any given day) but also free of significant up-front costs. It goes without saying that the service must be completely reliable, since the nature of its SaaS operation demands infallibility and instant deliverables for a wide range of customers.

Make no mistake about what Fetch does; it involves some serious compute mojo and data crunching. As Mike Horowitz, chief product officer at Fetch, notes, “this is absolutely computationally-intensive but we do it in a very efficient manner; you can imagine doing this at Web-scale, which is our goal — it requires a huge amount of compute power. I have been in IT for 25 years and I have never seen any other application run a quad-core quad-CPU process for 20 hours at 100 percent CPU usage.”

Before moving into the cloud with Platform ISF, Fetch relied on a manual provisioning process whenever it needed to boost capacity for its SaaS offering, a process that often consumed up to an hour of IT staff time per server. The difference has been dramatic: Fetch can now provision groups of servers automatically, which means much faster results at far lower manpower cost.
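To make the contrast concrete, here is a minimal sketch of what group provisioning behind a self-service interface might look like. It is an illustration only: the `CloudAPI` client, its method names, and the template parameters are hypothetical stand-ins, not Platform ISF's actual interface.

```python
# Hypothetical sketch of automated group provisioning.
# CloudAPI and its methods are illustrative stand-ins, not
# Platform ISF's real interface.

class CloudAPI:
    """Toy stand-in for a self-service provisioning endpoint."""

    def provision(self, template, count):
        # In a real system this call would ask the resource manager
        # to carve VMs out of the shared pool and boot them.
        return [f"{template}-{i}" for i in range(count)]

def scale_out(api, template="extraction-worker", count=10):
    # One call replaces roughly an hour of manual setup per server:
    # the scheduler picks hosts, clones images, and wires networking.
    servers = api.provision(template, count)
    print(f"Provisioned {len(servers)} servers from template '{template}'")
    return servers

if __name__ == "__main__":
    scale_out(CloudAPI())
```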

Although no percentages were presented, the cost savings for Fetch run into the many thousands of dollars, says Rich Parker. Interestingly, this is not only because the company has shifted its main business to the cloud, but also because it is leveraging the cloud for many other internal operations and processes in an effort to realize what Parker referred to as “the goal of 100 percent virtualization.”

Full Company-Wide Virtualization

As Platform notes of their ISF offering in the Fetch case, ISF “supports heterogeneous virtual resources so users don’t need to know what hypervisor or hardware is running their server. This multi-visor support also reduces training since resource users only need to learn the easy Platform interface. Additionally, this gives the IT department the flexibility to select the best hypervisor for each particular application based on provisioning policies.” This means Fetch can take advantage of the cloud’s ease of use and extend it across the whole organization.
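The multi-hypervisor abstraction Platform describes can be pictured as a policy lookup sitting in front of hypervisor-specific drivers. The sketch below only illustrates that idea; the policy table, driver functions, and application classes are invented for clarity and do not reflect ISF's internals.

```python
# Illustrative sketch of policy-based hypervisor selection. The
# policy table and drivers are hypothetical; they demonstrate the
# "users never see the hypervisor" abstraction, nothing more.

def start_vmware_vm(app):  # stand-in for a VMware-specific driver
    return f"{app} running on VMware"

def start_xen_vm(app):     # stand-in for a Xen-specific driver
    return f"{app} running on Xen"

# Provisioning policy: map application classes to the hypervisor
# judged best for them. End users never consult this table.
POLICY = {
    "data-extraction": start_vmware_vm,
    "qa-sandbox": start_xen_vm,
}

def provision(app, app_class):
    # Users request capacity by application class; the policy layer
    # picks the hypervisor and hardware behind the scenes.
    driver = POLICY.get(app_class, start_vmware_vm)
    return driver(app)

print(provision("agent-builder", "data-extraction"))
```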

Fetch Technologies is doing something truly interesting and innovative: they are not only early adopters of the Platform product (and there are others like it from other vendors, all boasting different features that might appeal to different enterprises or organizations), they are also using it in a unique way. They aim for complete virtualization, and that includes every department at the small company. According to Rich Parker:

We started this virtual private cloud infrastructure two years ago with the idea to turn it into a networked pool of resources, CPU, storage, memory, etc., that would be flexible enough to allow us to reconfigure servers whenever we needed to. The goal was 100 percent virtualization of all servers; none in the office. We’re using the full capability of VMware infrastructure, so we have this very reliable, flexible infrastructure, and then we were looking for an application to put on top of it to allow us to make the best use of it.

Overall, I call this [extended virtualization] distributed IT: we push IT administration out to everyone; we’re rolling out Platform not only to QA and development but potentially to product managers and everyone in the company. Because of Platform, they don’t need to know what servers we’re running on or what physical resources; they don’t even need to know what datacenter the server’s in, because Platform abstracts all that backend IT infrastructure so end users are more efficient in getting resources when and as they need them.

Leveraging the Public Cloud

It is useful for firms to have the ability to leverage the public cloud as needed. Discussing private clouds in the model Fetch is utilizing, Parker noted, “private cloud monitoring of resources and capacity planning are very critical. We need to know when we need to add more resources and how long it will take to add them. For example, we need to add more CPU and memory; that could take us 2 weeks to do. We monitor like crazy; we have over 200 monitors.” As Parker added, having the capability to scale out to EC2, even if that never happens, is one of the attractive features of a cloud offering like the one they chose from Platform.

It is not difficult to see how being able to leverage a public cloud benefits Fetch’s business model, particularly since it is nearly impossible for the company to determine maximum capacity when customer needs can change on a daily basis. While security and other issues keep it a secondary part of the IT model, Fetch can enable a plugin to the EC2 public cloud. The company is not yet taking advantage of this, but it sees that as its customer base grows, it will always have spare capacity to scale out to meet demand, a fact that should interest other SaaS vendors following news about the cloud. As Parker stated, “It’s nice to know it’s there as we evolve, because we’re delivering mission-critical data for organizations, so the idea of having a private cloud is important; it becomes important to secure that channel. But as we grow we may have applications where use of the public cloud makes sense, so it’s important to have that flexibility built in.”
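The burst-out decision Parker describes can be sketched as a simple capacity check in front of a public-cloud launch. Everything below is assumed for illustration: the utilization feed, the threshold, and the placeholder image ID are invented, and the launch call uses the modern boto3 SDK purely for concreteness rather than the plugin Fetch actually used.

```python
# Minimal sketch of a burst-to-EC2 decision. The private-pool query
# and threshold are hypothetical stand-ins for Fetch's "200 monitors";
# the EC2 call uses today's boto3 SDK for concreteness only.
import boto3

BURST_THRESHOLD = 0.85  # assumed utilization level that triggers a burst

def private_pool_utilization():
    # Hypothetical stand-in for monitoring data on the private cloud.
    return 0.90

def maybe_burst(extra_servers=5):
    if private_pool_utilization() < BURST_THRESHOLD:
        return []  # private cloud still has headroom; stay inside it
    ec2 = boto3.client("ec2", region_name="us-east-1")
    # Launch overflow capacity in the public cloud.
    resp = ec2.run_instances(
        ImageId="ami-00000000",  # placeholder image ID
        InstanceType="m5.large",
        MinCount=extra_servers,
        MaxCount=extra_servers,
    )
    return [i["InstanceId"] for i in resp["Instances"]]
```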

What the Early Adopters Are Noticing…

Martin Harris, director of product management at Platform, weighed in on how the Fetch case reflects what Platform is seeing on a larger scale. Harris stated that Fetch and others exemplify how the cloud is creating a new competitive playground for software vendors, allowing them to differentiate themselves on the back of greater flexibility and better responsiveness at lower cost. Harris also noted that several other companies are currently examining Platform ISF in pilots, including firms in semiconductors, oil and gas, and the risk management side of financial services. The Fetch case study should prove a useful starting point for these sectors as they evaluate the possibilities and performance of cloud.

The cloud has leveled the playing field in many senses, allowing the small to compete with the giants, at least in terms of SaaS. The questions are growing more complex as time passes, and now include the issue of to what extent virtualization will reshape an entire organization rather than just a core competency.
