October 12, 2011
Platform Computing will soon be under new management. IBM announced on Tuesday that it intends to buy the Toronto-based company and fold it into its Systems and Technology Group. If all goes according to plan, the deal will close in Q4, ending Platform's 19-year reign as an independent, privately held company. Financial terms of the deal were not disclosed.
As a premier vendor of cluster, grid, and cloud management software for the technical computing market, Platform fills an interesting niche in IBM's offerings, which mostly lack this type of system software in the HPC stack. Except for IBM's xCAT, which offers management and provisioning functionality similar to Platform Cluster Manager, the rest of the portfolio -- including Platform Symphony, Platform LSF, and Platform HPC, plus newer products like Platform ISF and Platform MapReduce -- should fill some empty slots at IBM.
The acquisition also fits in with IBM's "Smarter Planet" strategy, although to be honest, what computer technology would actually fall outside of that mom-and-apple-pie vision of IT? Nevertheless, Platform's ability to address clusters, grids, and clouds does play into Big Blue's focus on "big data" types of applications, which tend to rely on distributed infrastructure to handle their computational workloads.
From a financial perspective, IBM is expecting to grow an already profitable business. Platform took in $71.6 million in revenue in 2010, up from about $60 million the previous year. That's not huge by IBM standards, but the growth rate is certainly attractive. Still, according to a 451 Group report, Platform's business is limited by its confinement to what the authors referred to as "the HPC ghetto," a weakness IBM thinks it can rectify.
During a press briefing on Tuesday, Brian Connors, VP of IBM's HPC Business Line, emphasized his company's plans to expand Platform's business. Citing IDC's projected 8 percent increase for total technical computing revenue over the next few years, Connors said he thinks IBM's go-to-market strategy for the Platform portfolio will "extend the reach of high performance computing into the high growth segment of technical computing."
In this context, he has pigeonholed HPC to mean mostly high-end government and academic supercomputing for scientists, whereas technical computing here covers a wider range of simulation, modeling, and analytics codes running on clusters and other scaled-out infrastructure. Specifically, IBM is looking to broaden Platform's footprint in the commercial space -- areas like product development, financial services, manufacturing, digital media, life sciences, and so on.
Platform is fairly well positioned in many of those sectors already, but IBM, with its considerable marketing and sales heft, plus its deep customer base, should be able to leverage those advantages for Platform's business. IBM operates in about 170 countries today, compared to Platform's 20-country footprint. And even though Platform has built up a nice collection of value-added resellers (VARs), OEM and software partners, not to mention about 2,000 clients, those are dwarfed by IBM's vast network of partners and customers.
Although not much was said about leveraging IBM's server portfolio, the Platform offerings are a nice fit for the hardware platforms the company sells into the technical computing space, namely the System x line, the BladeCenter servers, the Power-based systems, and System Storage hardware. Bundling technical management software with its hardware dovetails nicely with IBM's strategy of selling higher-margin, integrated systems. This is yet another way for Big Blue to distance itself from white box vendors at the bottom of the server food chain.
While all of this has the makings of a happy marriage, keep in mind that not all of Platform's software is running on IBM servers today. Current strategic partners (besides IBM) include HP, Dell, and Cray, which bundle Platform's management tools and libraries with many of their HPC system deployments. It's reasonable to wonder what will become of these relationships.
According to IBM's Connors, the plan is to keep those partnerships intact. "It's our intent to preserve as many of those relationships, if not all, going forward," he said, adding "coopetition is just the nature of business now." Platform CEO Songnian Zhou reiterated that line of thinking, saying, "There's a clear recognition that the world is now very open and we need to make absolutely sure we escalate our efforts in supporting those platforms, including competitors' platforms to IBM, so that we continue to do the best job in serving enterprise customers."
Whether Platform's partner OEMs will be comforted by that sentiment is questionable. Intersect360 Research CEO Addison Snell notes that HPC system vendors have a lot more choice in cluster and grid management vendors than even just a few years ago. "Today there are a number of other companies, such as Adaptive Computing, Bright Computing, and Univa, whose products compete with Platform's in specific markets," he explained, adding that the IBM-Platform deal could spur acquisitions of those companies as well.
Certainly we've seen similar domino effects in other areas, most recently in the storage arena. With a landmark acquisition like this in cluster, grid, and cloud management, it certainly wouldn't be out of the realm of possibility to see other OEMs start choosing sides.