Handicapping IBM/OpenPOWER’s Odds for Success

By John Russell

January 19, 2016

2016 promises to be pivotal in the IBM/OpenPOWER effort to claim a non-trivial chunk of the Intel-dominated high-end server landscape. Big Blue’s stated goal of 20-to-30 percent market share is huge. Intel currently enjoys 90-plus percent share and has seemed virtually unassailable. In an ironic twist on the old mantra ‘no one ever got fired for buying IBM,’ it could be that careers at Big Blue rise or fall based on progress.

It’s just two years (Dec. 2013) since IBM (NYSE: IBM), NVIDIA (NASDAQ: NVDA), Mellanox (NASDAQ: MLNX), Tyan, and Google (NASDAQ: GOOG) co-founded the OpenPOWER Foundation to build an ecosystem around the IBM Power processor and challenge Intel. At virtually the same time, IBM announced plans to jettison the remainder of its x86 business (servers) by selling it to Lenovo, which had already acquired IBM’s PC business (2005). The $2.1 billion deal closed late in the year. At the time, IBM’s share of the HPC server market was roughly 23 percent. Today, it’s closer to five percent.[i]

IBM is making a staggering bet. Setting risk aside, much has been accomplished. OpenPOWER has grown to more than 170 members in more than 22 countries. A licensable reference architecture processor has been created. Acceleration-enabling technologies have been aggressively incorporated. On the order of 25 OpenPOWER solutions are in various stages of being brought to market.

“The timing is right,” says Addison Snell, CEO of Intersect360 Research. “After roughly 20 years of clusters based on the ‘Beowulf’ model, in which standardization and portability were primary goals, the HPC industry is migrating back toward an era of specialization. Even within the envelope of Intel x86 innovation, end users are looking at three primary options: Xeon, Xeon Phi as a co-processor, and Xeon Phi as a standalone microprocessor. And that’s before considering whether FPGAs acquired from Altera or even Intel Atom processors (competing with ARM) are part of the equation. End users are already evaluating a multitude of processing alternatives, which gives OpenPOWER an opportunity.”

For a long time, Intel’s x86 architecture has basically owned the market. It dwarfs everyone else. The entry of IBM and OpenPOWER sets up a potentially grand struggle between two contrasting views of technology progress and business opportunity. Both camps agree the age of accelerated/manycore computing is here, but they differ fundamentally on the path forward.

IBM argues Intel’s one-size-fits-all approach – consolidating devices and functions into a ‘single’ piece of silicon – actually stifles innovation compared to an ecosystem in which diverse technology partners collaborate, all beavering away on their own unique ideas for delivering the best technology solutions (acceleration, networking, storage, programming, et al.).

Intel’s position is that Moore’s law is hardly dead. In fact, the company says Moore’s law and HPC form a virtuous circle, each powering the other forward (See HPCwire article, Moore’s Law – Not Dead – and Intel’s Use of HPC to Keep it Alive). Moreover, Intel contends the coalescing of functions on silicon is not merely more elegant, but ultimately higher performing and cheaper.

Brad McCredie, vice president of IBM Power Systems Development and until recently president of the OpenPOWER Foundation, says “The appetite for compute and acceleration is going to far outstrip [silicon scaling] before we’re going to say the accelerator is going to go by way of the Southbridge and Northbridge switch chip which all got sucked into the CPU die.” He further suggests that Intel’s manufacturing business model actually requires this on-silicon consolidation and a “closed system” approach to grow profits and rebuff competition.

No doubt the constant anti-Intel drumming emanating from IBM is intended to reinforce the idea that another choice in the market would be good, that Intel’s overwhelming dominance is bad, and that IBM and its partners have sufficient strength and technology acumen to mount such a challenge. Skeptics respond that IBM has no other realistic route given Intel’s head start in the high-end server market and dominance in processors. Maybe it doesn’t matter. This is capitalism after all.

IBM’s Ken King

Much more interesting and important is how the struggle eventually plays out. Ken King, a 30-year-plus IBM veteran and general manager of OpenPOWER Alliances, and McCredie recently laid out the IBM strategy in a meeting with HPCwire editors. Discussion covered IBM’s embrace of the accelerated computing paradigm, its view of how high-end server market dynamics, particularly technology consumption patterns, are changing, and Big Blue’s strategy for reinventing itself and challenging Intel’s dominance.

Getting Moore’s Law Off Life Support?
“People say Moore’s law is dead. The facts are it’s declining,” says King. “You are no longer seeing the 2x gains every 18 months so you’re not going to get the value from just the silicon. From our perspective the biggest factor that is going to address that [challenge] is accelerators. We see accelerated computing as the new normal – the ability to effectively integrate CPUs with GPUs and FPGAs to accelerate processing throughout the entire system (networking, storage, etc) and with an emphasis on processing data where it resides versus having to move the data to the compute.”

This diverse and widespread implementation of acceleration technology is what’s critical to improving performance and putting Moore’s law back on that trajectory in a way that’s not just pure silicon, says King, adding “that’s the critical infrastructure for tomorrow’s economy.”

Cognitive computing will be the driver. “We moved from the internet era to the early stages of the cloud era – there’s still a lot to go – but the next era, just starting to formulate, is the cognitive era where industries will be transformed by being able to leverage cognitive computing to change their business model,” he says.

Data – lots of it – is the fuel, says King. Science, industry, government, and virtually every other segment of society are generating a treasure trove of data that cognitive computing can transform into insight and action. Acceleration is the engine, says King, citing two examples in medical applications that use different acceleration technologies:

  • IBM Watson Medical Health. Recently accelerated with GPUs, the IBM Watson cognitive platform has sped up its ‘rank and tree retrieval’ capabilities nearly 2X versus non-accelerated systems. Expectations are high for Watson Medical Health, which is already used extensively in sifting and interpreting clinical records and in genomics research.
  • Edico Genome. DNA sequencing is notoriously tough on general-purpose CPUs. Edico’s FPGA-accelerated DRAGEN processor board has been put into use at The Genome Analysis Center (TGAC), where it mapped the ash tree genome 177 times faster per processing core than TGAC’s local HPC systems, requiring only seven minutes instead of three hours on one of the larger datasets (see HPCwire article, TGAC Unleashes DRAGEN to Accelerate Genomics Workflows; a quick arithmetic sketch follows this list).
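
The wall-clock and per-core figures in the Edico bullet measure different things. A minimal back-of-the-envelope sketch in Python, using only the numbers quoted above, shows how they relate; the baseline core count is not stated in the article, so only the wall-clock ratio can be computed directly:

```python
# Back-of-the-envelope check of the TGAC/Edico figures quoted above.
baseline_minutes = 3 * 60   # roughly three hours on TGAC's local HPC systems
fpga_minutes = 7            # roughly seven minutes on the FPGA-accelerated board

wall_clock_speedup = baseline_minutes / fpga_minutes
print(f"Wall-clock speedup: ~{wall_clock_speedup:.1f}x")  # ~25.7x

# The 177x figure is a per-processing-core comparison: it additionally
# normalizes for the number of CPU cores the baseline run used (a detail
# the article does not state), which is why it is far larger than the
# raw wall-clock ratio computed here.
```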

“I can go industry by industry showing how cognitive computing assisted by accelerated computing infrastructure will be transformative. Silicon is not going to do it by itself anymore,” says King.

Importantly, says McCredie, the right approach to acceleration will vary. “Genomics is looking good with FPGAs but it is going to be hard to argue that GPUs aren’t the way to go for deep learning. If you look at machine learning, that [also] has some pretty good power performance opportunities for FPGAs.”

If accelerated computing does end up requiring flexible approaches to satisfy varying cost/performance needs, OpenPOWER has taken steps to assemble the needed technologies. GPU pioneer NVIDIA, of course, is an OpenPOWER founding member, as is high-performance interconnect specialist Mellanox. Last November, FPGA supplier Xilinx (NASDAQ: XLNX) joined OpenPOWER and signed a multi-year deal with IBM. In December, FPGA board specialist BittWare joined OpenPOWER.

IBM’s Brad McCredie

McCredie snipes, “You could argue Intel has figured this out too and endorsed it by their $16.7B acquisition of Altera, but it’s a different model. They are integrating Altera in a way where it is going to be a one size fits all approach.” That won’t work well moving forward, he argues, “Now, we are going to have to build systems with this or that kind of accelerator best suited (cost/performance) to the use…[but] I will take everything I just said back if there is disruptive technology.”

Snell says, particularly in the traditional HPC market, “The biggest advantage of OpenPOWER is its lead in accelerated computing, thanks to NVIDIA Tesla and CUDA. Another recent Intersect360 Research study showed that 34 of the top 50 HPC applications currently offer some level of GPU support or acceleration.

“The biggest open question is how this will evolve. Can end users continue to leverage their work on NVIDIA GPUs on future generations of Intel-based servers? How would technologies like CAPI and NVLink get incorporated? If Intel does not incorporate these technologies in some optimized fashion, it could push end users onto OpenPOWER to protect their GPU investments.”

HPC Market Undergoes Redefinition
Leaving the sudden emergence of disruptive technology aside and assuming moderate technical comparability between the two camps’ products, IBM’s and OpenPOWER’s remaining hurdle is executing a successful go-to-market strategy: Who is going to build to the OpenPOWER spec – besides IBM – and source IBM Power8 processors? Who is going to buy the systems? To what extent will homegrown components and systems from China become a competitive wildcard?

IBM has certainly tried to think things through here and has articulated a crystallizing view of a market that is more nuanced and dynamic. There will be increasing overlap among traditional buyers and sellers, says King, as technology consumption models shift. (In particular, think hyperscale datacenters, ODMs, and even big vertical players such as those in financial services.)

Today, Big Blue breaks the high-end server market into three distinct pieces – traditional HPC, hyperscale datacenter providers, and large enterprise verticals (financial services, for example). A major differentiator among them, emphasizes McCredie, is their varying technology ‘consumption’ models, which in turn influence the sales channel preferences and product configurations sought.

“The consumption model is so heavily tied to the particular set of skills you’ve invested in and developed over time,” says McCredie. “If you look at the skills the ‘hyperscales’ have invested in and developed, they are able to consume and like to consume differently than the classic enterprise whose skills evolved differently and HPC as well; one is programming-rich capable, one is admin-rich capable, and one is actually pretty technology capable. They all consume differently.”

Looking back, says McCredie, “Nobody ever came to us and said you guys don’t have good technology. We hear a lot of things; we don’t ever hear that. But our technology, until we did OpenPOWER, was completely unconsumable by important segments of the market.”

IBM has been aggressively adapting to make Power-based products easier to consume. “It wasn’t like I had to go back and redesign chips in the hyperscale market. We did have to go back and make a new open firmware stack, they weren’t going to take a half a billion lines of firmware, 99 percent of which they didn’t give a hoot about. So we did make a new firmware stack and we did create some new technology but mostly we just shifted how it was consumed,” says McCredie.

King adds quickly, “Google and Rackspace (NYSE: RAX) are eating that up.”

By its nature the OpenPOWER ecosystem should provide the flexibility needed to satisfy varying consumption models. Core technology providers – IBM, NVIDIA, Mellanox, Xilinx, etc. – collaborate closely to push device performance and interoperability. Systems suppliers – OEMs, ODMs, and even a few big users – can build systems according to the needs of their target markets or their own internal requirements.


“We want 20-30 percent market share. That’s a significant statement,” says King. “You’ve got the hyperscalers and we have to get a significant portion of those.”

No doubt, agrees Snell, “The hyperscale is a major wildcard here. Initiatives like Open Compute Project and Scorpio (“Beiji”) have been very inclusive of OpenPOWER and GPU computing, and some individual companies such as Google, Facebook (NASDAQ: FB), Microsoft (NASDAQ: MSFT), and Baidu (NASDAQ: BIDU) purchase enough infrastructure to set a market by themselves. (To get a sense of the market forces at play, note that both OCP and Scorpio have separately, and distinctly, redefined the rack height specified in a “U.”) If the hyperscale market demands a particular configuration, it will get it.”

IBM is having direct interactions with hyperscalers, says King. “Some are happy to buy IBM’s LC line, maybe with some tweaks or maybe not. Others, we’re going to design a model with them based on industry benchmarking and workload benchmarking and go to an ODM. Some will go even further and design everything and just tell the ODM what to manufacture.”

The point, says King, is that the model is flexible enough to enable that level of customization where required. “To deploy in volume is what’s critical. We’ve got to get penetration to a point where any counterattacks by our competitors don’t negatively impact our ability to be able to get to that level of market share that we are looking for,” he says.

That’s a tall order. One could argue the big hyperscalers have a bit more freedom to do as they will. Big OEMs and ODMs are more deeply entrenched in the x86 ecosystem and risk alienating Intel. Most have made only the most tepid of public comments regarding OpenPOWER, which can be neatly distilled as: “Well, we’re always evaluating technology options; however, we have a great relationship with Intel.”

Intel is the big dog and worthy of fear. It has been mostly silent on the IBM and OpenPOWER challenge – there’s really no upside to public bashing. At the same time, Intel has a reputation for never being afraid of a little customer arm-twisting with regard to supply, pricing, and early access to emerging Intel technology.

Waiting for the BIG Deals
To date, IBM has achieved its initial goals with OpenPOWER. It has gained substantial market awareness, built out a robust stable of consortium members, and landed a pair of high-profile wins with CORAL, says Snell. The next step is actually winning market share. “Intersect360 Research is presently conducting a deep-dive assessment of end user evaluation and impressions of the full panoply of processing alternatives, including POWER, GPU, Xeon, Xeon Phi, and others, and we will additionally gauge market penetration in our 2016 HPC Site Census survey. 20 percent to 30 percent is a lofty goal, and it will take time to see how long it will take IBM to approach it, if it can at all,” Snell says.

The wait to see critical customer wins won’t be long, says King. IBM is actively engaged with 10-15 hyperscalers, he says. “It takes a while for a hyperscaler, who’s got 98 to 100 percent of their datacenter being x86, to make a strategic change to add another platform in volume in their datacenters. A year ago I would have said we are trying to get the hyperscalers interested; now they are all engaged, not just interested, engaged and actually working with us to figure out what are the right workloads to put Power on and when do they start that deployment and what’s their model for deployment or consumption. I can tell you who has an ODM ready, who doesn’t, who’s going to buy directly, so definitely significant progress.”

In the enterprise, King says, very big companies are also looking at different consumption models. “Not exactly what the hyperscalers are doing, but some that are part of the open compute community are starting to look at whether there is something similar they would do to the hyperscale community. That could be an interesting OpenPOWER market, besides just buying servers directly from IBM or our partners.”

King and McCredie say there are at least five to seven large enterprises looking at consuming OpenPOWER; several have Power systems inside now, but they are all also starting to stand up their own clouds. “What’s amazing is they are realizing, which is not a big secret in the industry, they are all competing against the big Internet datacenters and hyperscale guys in one way or another,” says King.

In the traditional HPC-consuming world, IBM’s strategy sounds like most of its brethren’s, which can be boiled down to: the Top500 and Linpack shouldn’t drive product development and are poor overall metrics; that said, establishing one’s place in the Top500 is important because it’s still closely watched by important buyers in government, academia, etc.

“We look at the success we had on CORAL and it’s because we did a lot of great work on real workloads not just a Linpack bid. On the other hand the world is right now starting to get competitive and the U.S. lock on the Top500 just isn’t there. You’ve got to go fix that and I think we have to help people fix that.”

One point Snell makes shouldn’t be forgotten: even if IBM succeeds in achieving its 20-30 percent market share goal by the end of the decade – an immense achievement for sure – “Intel would still have a dominant market share, while having successfully moved up the value chain with the incorporation of more technologies into its Scalable System Framework approach, and Intel could rebuild share from that position of strength.

“In the near term (2016, 2017), OpenPOWER should focus on its assets, particularly its leadership in GPU acceleration and data-centric computing. This battle will be played out in software more than in hardware, and OpenPOWER needs to build as much momentum as it can. IBM will need to see volume market penetration beginning in 2016, coupled with a few more high-profile wins, in order to be on track.”

UPDATED, Jan 20: IBM released its full-year and latest quarterly results after this article was posted. Big Blue beat consensus analyst forecasts for earnings, but revenue slipped. Here’s an excerpt from IBM’s press release:

“We continue to make significant progress in our transformation to higher value. In 2015, our strategic imperatives of cloud, analytics, mobile, social and security grew 26 percent to $29 billion and now represent 35 percent of our total revenue,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “We strengthened our existing portfolio while investing aggressively in new opportunities like Watson Health, Watson Internet of Things and hybrid cloud.  As we transform to a cognitive solutions and cloud platform company, we are well positioned to continue delivering greater value to our clients and returning capital to our shareholders.”

Fourth-quarter net income from continuing operations was $4.5 billion compared with $5.5 billion in the fourth quarter of 2014, down 19 percent.  Operating (non-GAAP) net income was $4.7 billion compared with $5.8 billion in the fourth quarter of 2014, down 19 percent.  The prior-year gain from the divestiture of the System x business impacted operating net income by 19 points.

Total revenues from continuing operations for the fourth quarter of 2015 of $22.1 billion were down 9 percent (down 2 percent adjusting for currency) from the fourth quarter of 2014. For the full results see: http://www.hpcwire.com/off-the-wire/24279/

Initial reaction in the media was mixed as indicated here:

Forbes.com: IBM Finally Beats Earnings Consensus Again In Q4, But Has It Turned A Corner?
Wall Street Journal.com: IBM Revenue Slides, but Cloud Business Grows
New York Times.com: IBM Reports Declines in Fourth-Quarter Profit and Revenue Despite Gains in New Fields

[i] IDC HPC Update presented at SC15
