For decades, the trajectory of data analytics has moved in one simple direction – toward bigger and faster. But the journey to get there has been anything but simple.
At heart, this is the story told by the short book, Spark for Dummies, by Robert D. Schneider.
One of the main characters in the story of data analytics is of course the data itself – Big Data. It's growing exponentially and it comes from almost everywhere – phone calls, e-mail, social media, and online shopping, to name only a few. Even driving a car generates data: a new Ford Fusion plug-in hybrid produces 25GB of data every hour. In fact, according to Forbes, by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet.
But a key concept to keep in mind is that the growth of data is not the real issue. If all we want to do is store data, there's plenty of very inexpensive tape storage available: back in April 2015, IBM scientists demonstrated densities of 123 billion bits of uncompressed data per square inch on particulate magnetic tape – the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand.
Instead, the real driver of data analytics evolution is the simple desire for competitive advantage. Consider the benefits that Big Data analytics brings to all kinds of industries:
- Financial services:
  - Gain deeper knowledge about customers
  - Discover fraudulent activities
  - Offer new, innovative products and services
  - Make better – and faster – trading decisions
- Telecommunications:
  - Deliver higher quality service
  - Quickly identify and correct network anomalies
  - Make informed decisions about capital investments
  - Offer highly tailored packages to retain more customers
- Retail:
  - Offer smarter up-sell and cross-sell recommendations
  - Get a better picture of overall purchasing trends
  - Set optimal pricing and discounts
  - Monitor social media to spot satisfied – or disgruntled – customers
The list is essentially endless, but you get the picture. The question of how to derive the most value possible from Big Data has occupied the attention of some rather impressive organizations. In 2004, Google decided to harness the power of parallel, distributed computing to help digest the enormous amounts of data produced during daily operations. The result was a group of technologies and architectural design philosophies that came to be known as MapReduce, an approach to Big Data analytics built on the proven concept of divide and conquer by using distributed computing and parallel processing. It’s much faster to break a massive task into smaller chunks, allocate them to multiple servers, and process them in parallel.
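The divide-and-conquer idea behind MapReduce can be sketched in a few lines of plain Python (a hypothetical single-machine illustration, not the actual Google or Hadoop implementation): a map step turns each chunk of input into key-value pairs, a shuffle groups the pairs by key, and a reduce step aggregates each group – and because chunks and groups are independent, the map and reduce calls are exactly the work that gets spread across many servers.

```python
from collections import defaultdict

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk of text.
    return [(word.lower(), 1) for word in chunk.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group independently -- in a real cluster,
    # these per-key aggregations run in parallel on different servers.
    return {key: sum(values) for key, values in groups.items()}

# "Distribute" the input by splitting it into chunks, map each chunk
# (in parallel, conceptually), then shuffle and reduce the results.
chunks = ["big data is big", "data is everywhere"]
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

The word-count job shown here is the canonical MapReduce example: the same three-phase pattern scales from two in-memory strings to petabytes spread over thousands of machines.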
MapReduce was a great start, but it requires a significant amount of developer and technology resources to make it work. This wasn’t feasible for most enterprises, and the relative complexity led to the advent of Hadoop, a popular, standards‐based, open‐source software framework built on the foundation of MapReduce. Hadoop leverages the power of massive parallel processing to take advantage of Big Data, generally by using a lot of inexpensive commodity servers.
In typical fashion, the more we gain, the more we want. The MapReduce/Hadoop paradigm is based on batch processing – amassing large volumes of data, then processing it all at once to get results. Though this approach is powerful for many use cases, what if we want results right now, not tomorrow or next week after the batch job runs? Enter Apache Spark, the Big Data solution for real-time analytics.
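The batch versus real-time distinction can be illustrated with a toy running total in plain Python (hypothetical names, deliberately simplified – Spark's actual streaming APIs are far richer): a batch job waits until the data has been amassed and computes once, while a streaming approach keeps an up-to-the-moment answer as each record arrives.

```python
events = [5, 3, 7, 2]  # e.g. purchase amounts arriving over time

# Batch style: collect everything first, then run one job over it all.
batch_total = sum(events)

# Streaming style: maintain a running result that is current after
# every event -- the answer is available "right now", not after a
# scheduled batch job finishes.
running_total = 0
snapshots = []
for amount in events:
    running_total += amount
    snapshots.append(running_total)

print(batch_total)  # 17
print(snapshots)    # [5, 8, 15, 17]
```

Both styles arrive at the same final number; the difference is that the streaming version had a usable answer after every single event, which is the essence of Spark's appeal for real-time analytics.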
Apache Spark represents a revolutionary new approach to designing, developing, and distributing solutions capable of processing Big Data for real-time results. Spark offers several advantages for developing Big Data solutions, including higher performance, greater simplicity, easier administration, and faster application development.
Because most Big Data analytics solutions such as Spark are composed of numerous open‐source components, assembling a stable, scalable, manageable environment isn’t straightforward. An integrated solution from a vendor provides a single point of contact to help get your Big Data infrastructure up and running – and to keep it running if you have problems. IBM has made enormous contributions and investments in open‐source Spark. To complement these efforts, IBM also created IBM Spectrum Conductor, an all‐inclusive, turnkey commercial distribution that delivers all of Spark’s advantages, while making it easier for enterprises to build and operate Spark-based solutions.
IBM Spectrum Conductor, a member of the IBM Spectrum Computing family of software-defined solutions, enables organizations to accelerate business insights from all their data by leveraging the most current scale-out applications, open source frameworks, in-memory analytics, NoSQL databases, cloud-native application architectures, and container environments. IBM Spectrum Conductor offers significant advantages over Hadoop. It provides a more powerful resource scheduler that’s been proven in some of the world’s most demanding customer environments, as well as monitoring, reporting, diagnostics, and workload management tools. And don’t underestimate the value of IBM services and support, all managed from a single user interface.
Spark for Dummies provides many pages of explanations about why Spark-driven real-time analytics solutions are revolutionary for business and how all types of enterprises are successfully implementing Spark-based solutions leveraging the advantages of IBM Spectrum Conductor. You don’t need to wait for business insights; IBM Spectrum Conductor can help you gain competitive advantage today.
Notes:
- IBM Press Release: IBM Research Sets New Record for Tape Storage, April 2015, https://www-03.ibm.com/press/us/en/pressrelease/46554.wss
- IBM Spectrum Conductor was formerly named IBM Spectrum Conductor for Spark.