Handicapping IBM/OpenPOWER’s Odds for Success

By John Russell

January 19, 2016

2016 promises to be pivotal in the IBM/OpenPOWER effort to claim a non-trivial chunk of the Intel-dominated high-end server landscape. Big Blue’s stated goal of 20-to-30 percent market share is huge. Intel currently enjoys 90-plus percent share and has seemed virtually unassailable. In an ironic twist on the old mantra ‘no one ever got fired for buying IBM,’ it could be that careers at Big Blue now rise or fall based on progress.

It has been just two years since December 2013, when IBM (NYSE: IBM), NVIDIA (NASDAQ: NVDA), Mellanox (NASDAQ: MLNX), Tyan, and Google (NASDAQ: GOOG) co-founded the OpenPOWER Foundation to build an ecosystem around the IBM Power processor and challenge Intel. At virtually the same time, IBM announced plans to jettison the remainder of its x86 business (servers) by selling it to Lenovo, which had already acquired IBM’s PC business (2005). The $2.1 billion deal closed late in the year. At the time, IBM’s share of the HPC server market was roughly 23 percent. Today, it’s closer to five percent.[i]

IBM is making a staggering bet. Setting risk aside, much has been accomplished. OpenPOWER has grown to more than 170 members in more than 22 countries. A licensable reference architecture processor has been created. Acceleration enabling technologies have been aggressively incorporated. On the order of 25 OpenPOWER solutions are in various stages of being brought to market.

“The timing is right,” says Addison Snell, CEO of Intersect360 Research. “After roughly 20 years of clusters based on the ‘Beowulf’ model, in which standardization and portability were primary goals, the HPC industry is migrating back toward an era of specialization. Even within the envelope of Intel x86 innovation, end users are looking at three primary options, Xeon, Xeon Phi as a co-processor, and Xeon Phi as a standalone microprocessor. And that’s before considering whether FPGAs acquired from Altera or even Intel Atom processors (competing with ARM) are part of the equation. End users are already evaluating a multitude of processing alternatives, which gives OpenPOWER an opportunity.”

For so long, Intel’s x86 architecture has basically owned the market; it dwarfs everyone else. The entry of IBM and OpenPOWER sets up a potentially grand struggle between two contrasting views of technological progress and business opportunity. Both camps agree the age of accelerated/manycore computing is here, but they differ fundamentally on the path forward.

IBM argues Intel’s one-size-fits-all approach – consolidating devices and functions into a ‘single’ piece of silicon – actually stifles innovation compared to an ecosystem in which diverse technology partners collaborate while beavering away on their own unique ideas for delivering the best technology solutions (acceleration, networking, storage, programming, et al.).

Intel’s position is that Moore’s law is hardly dead. In fact, the company says Moore’s law and HPC form a virtuous circle, each powering the other forward (See HPCwire article, Moore’s Law – Not Dead – and Intel’s Use of HPC to Keep it Alive). Moreover, Intel contends the coalescing of functions on silicon is not merely more elegant, but ultimately higher performing and cheaper.

Brad McCredie, vice president of IBM Power Systems Development and until recently president of the OpenPOWER Foundation, says “The appetite for compute and acceleration is going to far outstrip [silicon scaling] before we’re going to say the accelerator is going to go by way of the Southbridge and Northbridge switch chip which all got sucked into the CPU die.” He further suggests that Intel’s manufacturing business model actually requires this on-silicon consolidation and a “closed system” approach to grow profits and rebuff competition.

No doubt the constant anti-Intel drumming emanating from IBM is intended to reinforce the idea that another choice in the market would be good, that Intel’s overwhelming dominance is bad, and that IBM and its partners have sufficient strength and technology acumen to mount such a challenge. Skeptics respond that IBM has no other realistic route given Intel’s head start in the high-end server market and dominance in processors. Maybe it doesn’t matter. This is capitalism after all.

IBM’s Ken King

Much more interesting and important is how the struggle eventually plays out. Ken King, a 30-year-plus IBM veteran and general manager, OpenPOWER Alliances, and McCredie recently laid out the IBM strategy in a meeting with HPCwire editors. Discussion covered IBM’s embrace of the accelerated computing paradigm, its view of how high-end server market dynamics – particularly technology consumption patterns – are changing, and Big Blue’s strategy for reinventing itself and challenging Intel’s dominance.

Getting Moore’s Law Off Life Support?
“People say Moore’s law is dead. The facts are it’s declining,” says King. “You are no longer seeing the 2x gains every 18 months so you’re not going to get the value from just the silicon. From our perspective the biggest factor that is going to address that [challenge] is accelerators. We see accelerated computing as the new normal – the ability to effectively integrate CPUs with GPUs and FPGAs to accelerate processing throughout the entire system (networking, storage, etc) and with an emphasis on processing data where it resides versus having to move the data to the compute.”
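King’s ‘process the data where it resides’ point is, at bottom, an arithmetic one: moving large datasets to the compute often costs more than computing on them in place. A rough back-of-the-envelope sketch in Python (the bandwidth figures are our illustrative assumptions, not IBM’s numbers) makes the asymmetry concrete:

```python
# Rough illustration (assumed bandwidth figures, not IBM's) of why moving
# data to the compute can dominate: shipping 1 TB over a ~12 GB/s PCIe 3.0
# x16 link takes far longer than scanning it at local memory bandwidth.
data_gb = 1000.0        # 1 TB dataset
pcie_gb_per_s = 12.0    # assumed effective PCIe 3.0 x16 bandwidth
mem_gb_per_s = 100.0    # assumed local memory-scan bandwidth

transfer_s = data_gb / pcie_gb_per_s   # move the data to the compute
scan_s = data_gb / mem_gb_per_s        # compute where the data resides

print(f"move then compute: {transfer_s:.0f} s spent just on the transfer")
print(f"compute in place:  {scan_s:.0f} s to scan the data locally")
```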

This diverse and widespread implementation of acceleration technology is what’s critical to improving performance and putting Moore’s law back on that trajectory in a way that’s not just pure silicon, says King, adding “that’s the critical infrastructure for tomorrow’s economy.”

Cognitive computing will be the driver. “We moved from the internet era to the early stages of the cloud era – there’s still a lot to go – but the next era, just starting to formulate, is the cognitive era where industries will be transformed by being able to leverage cognitive computing to change their business model,” he says.

Data – lots of it – is the fuel, says King. Science, industry, government, and virtually every other segment of society are generating a treasure trove of data that cognitive computing can transform into insight and action. Acceleration is the engine, says King, citing two examples in medical applications that use different acceleration technologies:

  • IBM Watson Medical Health. Recently outfitted with GPUs, the IBM Watson cognitive platform has sped up ‘rank and tree retrieval’ capabilities nearly 2X versus non-accelerated systems. Expectations are high for Watson Medical Health, already used extensively in sifting and interpreting clinical records and in genomics research.
  • Edico Genome. DNA sequencing is notoriously tough on general purpose CPUs. Edico’s FPGA-accelerated DRAGEN processor board has been put into use at The Genome Analysis Centre (TGAC), where it mapped the ash tree genome 177 times faster per processing core than TGAC’s local HPC systems, requiring only seven minutes instead of three hours on one of the larger datasets; the quick arithmetic check below reconciles those two figures (see HPCwire article, TGAC Unleashes DRAGEN to Accelerate Genomics Workflows).
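As an aside, the two Edico figures describe different comparisons and are not expected to match; a quick back-of-the-envelope check (ours, not TGAC’s or Edico’s):

```python
# Quick arithmetic check (ours, not from TGAC or Edico): the 177x figure
# is normalized per processing core, while seven minutes vs. three hours
# is end-to-end wall clock across however many cores each system used.
baseline_minutes = 3 * 60   # ~3 hours on TGAC's local HPC systems
dragen_minutes = 7          # reported DRAGEN runtime

wall_clock_speedup = baseline_minutes / dragen_minutes
print(f"wall-clock speedup: {wall_clock_speedup:.1f}x")  # ~25.7x end to end
print("per-core speedup (as reported): 177x")
```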

“I can go industry by industry showing how cognitive computing assisted by accelerated computing infrastructure will be transformative. Silicon is not going to do it by itself anymore,” says King.

Importantly, says McCredie, the right approach to acceleration will vary. “Genomics is looking good with FPGAs but it is going to be hard to argue that GPUs aren’t the way to go for deep learning. If you look at machine learning, that [also] has some pretty good power performance opportunities for FPGAs.”

If accelerated computing does end up requiring flexible approaches to satisfy varying cost/performance needs, OpenPOWER has taken steps to assemble the necessary technologies. GPU pioneer NVIDIA, of course, is an OpenPOWER founding member, as is high performance interconnect specialist Mellanox. Last November, FPGA supplier Xilinx (NASDAQ: XLNX) joined OpenPOWER and signed a multi-year deal with IBM. In December, FPGA board specialist BittWare joined OpenPOWER.

IBM’s Brad McCredie

McCredie snipes, “You could argue Intel has figured this out too and endorsed it by their $16.7B acquisition of Altera, but it’s a different model. They are integrating Altera in a way where it is going to be a one size fits all approach.” That won’t work well moving forward, he argues, “Now, we are going to have to build systems with this or that kind of accelerator best suited (cost/performance) to the use…[but] I will take everything I just said back if there is disruptive technology.”

Snell says, particularly in the traditional HPC market, “The biggest advantage of OpenPOWER is its lead in accelerated computing, thanks to NVIDIA Tesla and CUDA. Another recent Intersect360 Research study showed that 34 of the top 50 HPC applications currently offer some level of GPU support or acceleration.

“The biggest open question is how this will evolve. Can end users continue to leverage their work on NVIDIA GPUs on future generations of Intel-based servers? How would technologies like CAPI and NVLINK get incorporated? If Intel does not incorporate these technologies in some optimized fashion, it could push end users onto OpenPOWER to protect their GPU investments.”
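The ‘GPU investments’ Snell refers to are largely software: kernels written against NVIDIA’s CUDA stack target the GPU rather than the host CPU, which is why, in principle, they can follow users from x86 hosts to POWER hosts. Below is a minimal sketch of such a kernel, written with the numba package purely for illustration (numba and an attached NVIDIA GPU are our assumptions, not anything named in the article):

```python
# Minimal sketch of a CUDA-style kernel investment (illustrative only).
# Assumes the numba package and an attached NVIDIA GPU; the kernel body
# runs on the GPU, so the host ISA (x86 or POWER) is largely incidental.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)            # global thread index on the GPU
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # GPU launch
```

Because the kernel executes on the GPU, what the host contributes is mainly the link that feeds it data – which is exactly where interconnects like CAPI and NVLink become the deciding factor Snell describes.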

HPC Market Undergoes Redefinition
Leaving the sudden emergence of disruptive technology aside and assuming moderate technical comparability between the two camps’ products, IBM’s and OpenPOWER’s remaining hurdle is executing a successful go-to-market strategy: Who is going to build to the OpenPOWER spec – besides IBM – and source IBM Power8 processors? Who is going to buy the systems? To what extent will homegrown components and systems from China become a competitive wildcard?

IBM has certainly tried to think things through here, and it articulates a crystallizing view of a market that is more nuanced and dynamic. There will be increasing overlap among traditional buyers and sellers, says King, as technology consumption models shift. (In particular, think hyperscale datacenters, ODMs, and even big vertical players such as those in financial services.)

Today, Big Blue breaks the high-end server market into three distinct pieces – traditional HPC, hyperscale datacenter providers, and large enterprise verticals (financial services, for example). A major differentiator among them, emphasizes McCredie, is their varying technology ‘consumption’ models, which in turn influence sales channel preferences and the product configurations sought.

“The consumption model is so heavily tied to the particular set of skills you’ve invested in and developed over time,” says McCredie. “If you look at the skills the ‘hyperscales’ have invested in and developed, they are able to consume and like to consume differently than the classic enterprise whose skills evolved differently and HPC as well; one is programming-rich capable, one is admin-rich capable, and one is actually pretty technology capable. They all consume differently.”

Looking back, says McCredie, “Nobody ever came to us and said you guys don’t have good technology. We hear a lot of things; we don’t ever hear that. But our technology, until we did OpenPOWER, was completely unconsumable by important segments of the market.”

IBM has been aggressively adapting to make Power-based products easier to consume. “It wasn’t like I had to go back and redesign chips in the hyperscale market. We did have to go back and make a new open firmware stack, they weren’t going to take a half a billion lines of firmware, 99 percent of which they didn’t give a hoot about. So we did make a new firmware stack and we did create some new technology but mostly we just shifted how it was consumed,” says McCredie.

King adds quickly, “Google and Rackspace (NYSE: RAX) are eating that up.”

By its nature the OpenPOWER ecosystem should provide the flexibility needed to satisfy varying consumption models. Core technology providers – IBM, NVIDIA, Mellanox, Xilinx, etc. – collaborate closely to push device performance and interoperability. Systems suppliers – OEMs, ODMs, and even a few big users – can build systems according to the needs dictated by their target markets or internal requirements.


“We want 20-30 percent market share. That’s a significant statement,” says King. “You’ve got the hyperscalers and we have to get a significant portion of those.”

No doubt, agrees Snell, “The hyperscale is a major wildcard here. Initiatives like Open Compute Project and Scorpio (“Beiji”) have been very inclusive of OpenPOWER and GPU computing, and some individual companies such as Google, Facebook (NASDAQ: FB), Microsoft (NASDAQ: MSFT), and Baidu (NASDAQ: BIDU) purchase enough infrastructure to set a market by themselves. (To get a sense of the market forces at play, note that both OCP and Scorpio have separately, and distinctly, redefined the rack height specified in a “U.”) If the hyperscale market demands a particular configuration, it will get it.”

IBM is having direct interactions with hyperscalers, says King: “Some are happy to buy IBM’s LC line, maybe with some tweaks or maybe not. Others, we’re going to design a model with them based on industry benchmarking and workload benchmarking and go to an ODM. Some will go even further and design everything and just tell the ODM what to manufacture.”

The point, says King, is the model is flexible to enable that level of customization where required. “To deploy in volume is what’s critical. We’ve got to get penetration to a point where any counterattacks by our competitors don’t negatively impact our ability to be able to get to that level of market share that we are looking for,” he says.

That’s a tall order. One could argue the big hyperscalers have a bit more freedom to do as they please. Big OEMs and ODMs are more deeply entrenched in the x86 ecosystem and risk alienating Intel. Most have offered only the most tepid of public comments regarding OpenPOWER, which can be neatly distilled down to: “Well, we’re always evaluating technology options; however, we have a great relationship with Intel.”

Intel is the big dog and worthy of fear. It has been mostly silent on the IBM and OpenPOWER challenge – there’s really no upside to public bashing. That said, Intel has a reputation for never being afraid of a little customer arm-twisting with regard to supply, pricing, and early access to emerging Intel technology.

Waiting for the BIG Deals
To date, IBM has achieved its initial goals with OpenPOWER. It has gained substantial market awareness, built out a robust stable of consortium members, and landed a pair of high-profile wins with CORAL, says Snell. The next step is actually winning market share. “Intersect360 Research is presently conducting a deep-dive assessment of end user evaluation and impressions of the full panoply of processing alternatives, including POWER, GPU, Xeon, Xeon Phi, and others, and we will additionally gauge market penetration in our 2016 HPC Site Census survey. 20 percent to 30 percent is a lofty goal, and it will take time to see how long it will take to approach it, if IBM can at all,” Snell says.

The wait to see critical customer wins won’t be long, says King. IBM is actively engaged with 10-15 hyperscalers, he says. “It takes a while for a hyperscaler, who’s got 98 to 100 percent of their datacenter being x86, to make a strategic change to add another platform in volume in their datacenters. A year ago I would have said we are trying to get the hyperscalers interested; now they are all engaged, not just interested – engaged and actually working with us to figure out what are the right workloads to put Power on and when do they start that deployment and what’s their model for deployment or consumption. I can tell you who has an ODM ready, who doesn’t, who’s going to buy directly – so, definitely significant progress.”

In the enterprise, King says very big companies are also looking at different consumption models. “Not exactly what the hyperscalers are doing, but some that are part of the open compute community are starting to look at whether there is something similar they would do to what the hyperscale community does. That could be an interesting OpenPOWER market, besides just buying servers directly from IBM or our partners.”

King and McCredie say there are at least five to seven large enterprises looking at consuming OpenPOWER; several have Power systems inside now, but they are all also starting to stand up their own clouds. “What’s amazing is they are realizing, which is not a big secret in the industry, they are all competing against the big Internet datacenters and hyperscale guys in one way or another,” says King.

In the traditional HPC-consuming world, IBM’s strategy sounds like that of most of its brethren, which can be boiled down to: the Top500 and Linpack shouldn’t drive product development and are a poor overall metric; that said, establishing one’s place in the Top500 is important because it’s still closely watched by important buyers in government, academia, etc.

“We look at the success we had on CORAL and it’s because we did a lot of great work on real workloads not just a Linpack bid. On the other hand the world is right now starting to get competitive and the U.S. lock on the Top500 just isn’t there. You’ve got to go fix that and I think we have to help people fix that.”

One point Snell makes shouldn’t be forgotten: even if IBM succeeds in achieving its 20-30 percent market share goal by the end of the decade – an immense achievement for sure – “Intel would still have a dominant market share, while having successfully moved up the value chain with the incorporation of more technologies into its Scalable System Framework approach, and Intel could rebuild share from that position of strength.

“In the near term (2016, 2017), OpenPOWER should focus on its assets, particularly its leadership in GPU acceleration and data-centric computing. This battle will be played out in software more than in hardware, and OpenPOWER needs to build as much momentum as it can. IBM will need to see volume market penetration beginning in 2016, coupled with a few more high-profile wins, in order to be on track.”

UPDATED, Jan. 20: IBM released its full-year and latest quarterly results after this article was posted. Big Blue beat analysts’ consensus forecasts for earnings, but revenue slipped. Here’s an excerpt from IBM’s press release:

“We continue to make significant progress in our transformation to higher value. In 2015, our strategic imperatives of cloud, analytics, mobile, social and security grew 26 percent to $29 billion and now represent 35 percent of our total revenue,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “We strengthened our existing portfolio while investing aggressively in new opportunities like Watson Health, Watson Internet of Things and hybrid cloud.  As we transform to a cognitive solutions and cloud platform company, we are well positioned to continue delivering greater value to our clients and returning capital to our shareholders.”

Fourth-quarter net income from continuing operations was $4.5 billion compared with $5.5 billion in the fourth quarter of 2014, down 19 percent.  Operating (non-GAAP) net income was $4.7 billion compared with $5.8 billion in the fourth quarter of 2014, down 19 percent.  The prior-year gain from the divestiture of the System x business impacted operating net income by 19 points.

Total revenues from continuing operations for the fourth quarter of 2015 of $22.1 billion were down 9 percent (down 2 percent adjusting for currency) from the fourth quarter of 2014. For the full results see: http://www.hpcwire.com/off-the-wire/24279/

Initial reaction in the media was mixed as indicated here:

Forbes.com: IBM Finally Beats Earnings Consensus Again In Q4, But Has It Turned A Corner?
Wall Street Journal.com: IBM Revenue Slides, but Cloud Business Grows
New York Times.com: IBM Reports Declines in Fourth-Quarter Profit and Revenue Despite Gains in New Fields

[i] IDC HPC Update presented at SC15
