Handicapping IBM/OpenPOWER’s Odds for Success

By John Russell

January 19, 2016

2016 promises to be pivotal in the IBM/OpenPOWER effort to claim a non-trivial chunk of the Intel-dominated high-end server landscape. Big Blue’s stated goal of 20-to-30 percent market share is huge. Intel currently enjoys 90-plus percent share and has seemed virtually unassailable. In an ironic twist on the old mantra ‘no one ever got fired for buying IBM,’ careers at Big Blue may rise or fall based on the effort’s progress.

It’s just two years since (Dec. 2013) IBM (NYSE: IBM), NVIDIA (NASDAQ: NVDA), Mellanox (NASDAQ: MLNX), Tyan, and Google (NASDAQ: GOOG) co-founded the OpenPOWER Foundation to build an ecosystem around the IBM Power processor and challenge Intel. At virtually the same time, IBM announced plans to jettison the remainder of its x86 business (servers) by selling it to Lenovo, which had already acquired IBM’s PC business (2005). The $2.1 billion deal closed late in the year. At the time, IBM’s share of the HPC server market was roughly 23 percent. Today, it’s closer to five percent.[i]

IBM is making a staggering bet. Setting risk aside, much has been accomplished. OpenPOWER has grown to more than 170 members in more than 22 countries. A licensable reference architecture processor has been created. Acceleration-enabling technologies have been aggressively incorporated. On the order of 25 OpenPOWER solutions are in various stages of being brought to market.

“The timing is right,” says Addison Snell, CEO of Intersect360 Research. “After roughly 20 years of clusters based on the ‘Beowulf’ model, in which standardization and portability were primary goals, the HPC industry is migrating back toward an era of specialization. Even within the envelope of Intel x86 innovation, end users are looking at three primary options: Xeon, Xeon Phi as a co-processor, and Xeon Phi as a standalone microprocessor. And that’s before considering whether FPGAs acquired from Altera or even Intel Atom processors (competing with ARM) are part of the equation. End users are already evaluating a multitude of processing alternatives, which gives OpenPOWER an opportunity.”

For years Intel’s x86 architecture has essentially owned the market, dwarfing everyone else. The entry of IBM and OpenPOWER sets up a potentially grand struggle between two contrasting views of technology progress and business opportunity. Both camps agree the age of accelerated/manycore computing is here, but they differ fundamentally on the path forward.

IBM argues Intel’s one-size-fits-all approach – consolidating devices and functions onto a ‘single’ piece of silicon – actually stifles innovation compared with an ecosystem in which diverse technology partners collaborate while beavering away on their own ideas for delivering the best technology solutions (acceleration, networking, storage, programming, et al.).

Intel’s position is that Moore’s law is hardly dead. In fact, the company says Moore’s law and HPC form a virtuous circle, each powering the other forward (See HPCwire article, Moore’s Law – Not Dead – and Intel’s Use of HPC to Keep it Alive). Moreover, Intel contends the coalescing of functions on silicon is not merely more elegant, but ultimately higher performing and cheaper.

Brad McCredie, vice president of IBM Power Systems Development and until recently president of the OpenPOWER Foundation, says “The appetite for compute and acceleration is going to far outstrip [silicon scaling] before we’re going to say the accelerator is going to go the way of the Southbridge and Northbridge switch chips, which all got sucked into the CPU die.” He further suggests that Intel’s manufacturing business model actually requires this on-silicon consolidation and a “closed system” approach to grow profits and rebuff competition.

No doubt the constant anti-Intel drumming emanating from IBM is intended to reinforce the idea that another choice in the market would be good, that Intel’s overwhelming dominance is bad, and that IBM and its partners have sufficient strength and technology acumen to mount such a challenge. Skeptics respond that IBM has no other realistic route given Intel’s head start in the high-end server market and dominance in processors. Maybe it doesn’t matter. This is capitalism after all.

IBM’s Ken King

Much more interesting and important is how the struggle eventually plays out. Ken King, a 30-year-plus IBM veteran and general manager, OpenPOWER Alliances, and McCredie recently laid out the IBM strategy in a meeting with HPCwire editors. Discussion covered IBM’s embrace of the accelerated computing paradigm, its view of how high-end server market dynamics, particularly technology consumption patterns, are changing, and Big Blue’s strategy for reinventing itself and challenging Intel’s dominance.

Getting Moore’s Law Off Life Support?
“People say Moore’s law is dead. The facts are it’s declining,” says King. “You are no longer seeing the 2x gains every 18 months so you’re not going to get the value from just the silicon. From our perspective the biggest factor that is going to address that [challenge] is accelerators. We see accelerated computing as the new normal – the ability to effectively integrate CPUs with GPUs and FPGAs to accelerate processing throughout the entire system (networking, storage, etc.) and with an emphasis on processing data where it resides versus having to move the data to the compute.”

This diverse and widespread implementation of acceleration technology is what’s critical to improving performance and putting Moore’s law back on that trajectory in a way that’s not just pure silicon, says King, adding “that’s the critical infrastructure for tomorrow’s economy.”
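
To put King’s ‘2x every 18 months’ reference in concrete terms, here is a minimal, illustrative Python sketch of how much cumulative gain is at stake when the doubling cadence slips; the three-year cadence below is an assumption for illustration, not a figure from the article:

    # Illustrative only: cumulative improvement if performance doubles
    # every T years, compared over a ten-year span.

    def cumulative_gain(years, doubling_period_years):
        """Improvement factor after `years` if performance doubles
        every `doubling_period_years` years."""
        return 2 ** (years / doubling_period_years)

    print(f"{cumulative_gain(10, 1.5):.0f}x")  # ~102x at the classic 18-month cadence
    print(f"{cumulative_gain(10, 3.0):.0f}x")  # ~10x if doubling slows to every three years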

Cognitive computing will be the driver. “We moved from the internet era to the early stages of the cloud era – there’s still a lot to go – but the next era, just starting to formulate, is the cognitive era where industries will be transformed by being able to leverage cognitive computing to change their business model,” he says.

Data – lots of it – is the fuel, says King. Science, industry, government, and virtually every other segment of society are generating a treasure trove of data that cognitive computing can transform into insight and action. Acceleration is the engine, says King, citing two examples in medical applications that use different acceleration technologies:

  • IBM Watson Medical Health. Recently accelerated with GPUs, the IBM Watson cognitive platform now performs ‘rank and tree retrieval’ nearly 2X faster than non-accelerated systems. Expectations are high for Watson Medical Health, which is already used extensively in sifting and interpreting clinical records and in genomics research.
  • Edico Genome. DNA sequencing is notoriously tough on general-purpose CPUs. Edico’s FPGA-accelerated DRAGEN processor board has been put into use at The Genome Analysis Centre (TGAC), where it mapped the ash tree genome 177 times faster per processing core than TGAC’s local HPC systems, taking only seven minutes instead of three hours on one of the larger datasets (see HPCwire article, TGAC Unleashes DRAGEN to Accelerate Genomics Workflows); a back-of-envelope reading of those figures follows this list.

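Read together, the two Edico figures measure different things – a wall-clock speedup (three hours down to seven minutes) and a per-core speedup (177x). Here is a minimal Python sketch of how the two relate, with the core counts as purely hypothetical assumptions since the article gives only the times and the 177x figure:

    # Back-of-envelope check of the TGAC/DRAGEN figures quoted above.
    # Assumption: "177x per processing core" normalizes the wall-clock
    # speedup by the number of processing cores used on each side.

    hpc_minutes = 3 * 60      # ~3 hours on TGAC's local HPC systems
    dragen_minutes = 7        # ~7 minutes on the DRAGEN board

    wall_clock_speedup = hpc_minutes / dragen_minutes
    print(f"Wall-clock speedup: {wall_clock_speedup:.1f}x")   # ~25.7x

    # Hypothetical core counts (not from the article), chosen only to
    # illustrate the normalization:
    hpc_cores = 69            # cores used by the conventional HPC run
    dragen_cores = 10         # processing units attributed to the board

    per_core_speedup = wall_clock_speedup * hpc_cores / dragen_cores
    print(f"Per-core speedup: {per_core_speedup:.0f}x")       # ~177x
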
“I can go industry by industry showing how cognitive computing assisted by accelerated computing infrastructure will be transformative. Silicon is not going to do it by itself anymore,” says King.

Importantly, says McCredie, the right approach to acceleration will vary. “Genomics is looking good with FPGAs but it is going to be hard to argue that GPUs aren’t the way to go for deep learning. If you look at machine learning, that [also] has some pretty good power performance opportunities for FPGAs.”

If accelerated computing does end up requiring flexible approaches to satisfy varying cost/performance issues, OpenPOWER has taken steps to assemble the needed technologies. GPU pioneer NVIDIA, of course, is an OpenPOWER founding member, as is high-performance interconnect specialist Mellanox. Last November, FPGA supplier Xilinx (NASDAQ: XLNX) joined OpenPOWER and signed a multi-year deal with IBM. In December, FPGA board specialist BittWare joined OpenPOWER.

IBM’s Brad McCredie

McCredie snipes, “You could argue Intel has figured this out too and endorsed it by their $16.7B acquisition of Altera, but it’s a different model. They are integrating Altera in a way where it is going to be a one size fits all approach.” That won’t work well moving forward, he argues, “Now, we are going to have to build systems with this or that kind of accelerator best suited (cost/performance) to the use…[but] I will take everything I just said back if there is disruptive technology.”

Snell says, particularly in the traditional HPC market, “The biggest advantage of OpenPOWER is its lead in accelerated computing, thanks to NVIDIA Tesla and CUDA. Another recent Intersect360 Research study showed that 34 of the top 50 HPC applications currently offer some level of GPU support or acceleration.

“The biggest open question is how this will evolve. Can end users continue to leverage their work on NVIDIA GPUs on future generations of Intel-based servers? How would technologies like CAPI and NVLink get incorporated? If Intel does not incorporate these technologies in some optimized fashion, it could push end users onto OpenPOWER to protect their GPU investments.”

HPC Market Undergoes Redefinition
Leaving the sudden emergence of disruptive technology aside and assuming moderate technical comparability between the two camps’ products, IBM’s and OpenPOWER’s remaining hurdle is executing a successful go-to-market strategy: Who is going to build to the OpenPOWER spec – besides IBM – and source IBM Power8 processors? Who is going to buy the systems? To what extent will homegrown components and systems from China become a competitive wildcard?

IBM has certainly tried to think things through here, and has articulated a crystallizing view of a market that is more nuanced and dynamic. There will be increasing overlap among traditional buyers and sellers, says King, as technology consumption models shift. (In particular, think hyperscale datacenters, ODMs, and even big vertical players such as those in financial services.)

Today, Big Blue breaks the high-end server market into three distinct pieces – traditional HPC, hyperscale datacenter providers, and large enterprise verticals (financial services, for example). A major differentiator among them, emphasizes McCredie, is their varying technology ‘consumption’ models, which in turn influence the sales channel preferences and product configurations sought.

“The consumption model is so heavily tied to the particular set of skills you’ve invested in and developed over time,” says McCredie. “If you look at the skills the ‘hyperscales’ have invested in and developed, they are able to consume and like to consume differently than the classic enterprise whose skills evolved differently and HPC as well; one is programming-rich capable, one is admin-rich capable, and one is actually pretty technology capable. They all consume differently.”

Looking back, says McCredie, “Nobody ever came to us and said you guys don’t have good technology. We hear a lot of things; we don’t ever hear that. But our technology, until we did OpenPOWER, was completely unconsumable by important segments of the market.”

IBM has been aggressively adapting to make Power-based products easier to consume. “It wasn’t like I had to go back and redesign chips in the hyperscale market. We did have to go back and make a new open firmware stack; they weren’t going to take a half a billion lines of firmware, 99 percent of which they didn’t give a hoot about. So we did make a new firmware stack and we did create some new technology but mostly we just shifted how it was consumed,” says McCredie.

King adds quickly, “Google and Rackspace (NYSE: RAX) are eating that up.”

By its nature the OpenPOWER ecosystem should provide the flexibility needed to satisfy varying consumption models. Core technology providers – IBM, NVIDIA, Mellanox, Xilinx, etc. – collaborate closely to push device performance and interoperability. Systems suppliers – OEMs, ODMs, and even a few big users – can build systems according to the needs dictated by their target markets or internal requirements.


“We want 20-30 percent market share. That’s a significant statement,” says King. “You’ve got the hyperscalers and we have to get a significant portion of those.”

No doubt, agrees Snell, “The hyperscale is a major wildcard here. Initiatives like Open Compute Project and Scorpio (“Beiji”) have been very inclusive of OpenPOWER and GPU computing, and some individual companies such as Google, Facebook (NASDAQ: FB), Microsoft (NASDAQ: MSFT), and Baidu (NASDAQ: BIDU) purchase enough infrastructure to set a market by themselves. (To get a sense of the market forces at play, note that both OCP and Scorpio have separately, and distinctly, redefined the rack height specified in a “U.”) If the hyperscale market demands a particular configuration, it will get it.”

IBM is having direct interactions with hyperscalers, says King. “Some are happy to buy IBM’s LC line, maybe with some tweaks or maybe not. Others we’re going to design a model with them based on industry benchmarking and workload benchmarking and go to an ODM. Some will go even further and design everything and just tell the ODM what to manufacture.”

The point, says King, is that the model is flexible enough to enable that level of customization where required. “To deploy in volume is what’s critical. We’ve got to get penetration to a point where any counterattacks by our competitors don’t negatively impact our ability to be able to get to that level of market share that we are looking for,” he says.

That’s a tall order. One could argue the big hyperscalers have a bit more freedom to do as they will. Big OEMs and ODMs are more deeply entrenched in the x86 ecosystem and risk alienating Intel. Most have made the most tepid of public comments regarding OpenPOWER, which can be neatly distilled as: “Well, we’re always evaluating technology options; however, we have a great relationship with Intel.”

Intel is the big dog and worthy of fear. It has been mostly silent on the IBM and OpenPOWER challenge – there’s really no upside to public bashing. At the same time, Intel has a reputation for never being afraid of a little customer arm-twisting with regard to supply, pricing, and early access to emerging Intel technology.

Waiting for the BIG Deals
To date, IBM has achieved its initial goals with OpenPOWER. It has gained substantial market awareness, built out a robust stable of consortium members, and landed a pair of high-profile wins with CORAL, says Snell. The next step is actually winning market share. “Intersect360 Research is presently conducting a deep-dive assessment of end user evaluation and impressions of the full panoply of processing alternatives, including POWER, GPU, Xeon, Xeon Phi, and others, and we will additionally gauge market penetration in our 2016 HPC Site Census survey. 20 percent to 30 percent is a lofty goal, and it will take time to see how long it will take to approach it, if IBM can at all,” Snell says.

The wait to see critical customer wins won’t be long, says King. IBM is actively engaged with 10-15 hyperscalers, he says. “It takes a while for a hyperscaler, who’s got 98 to 100 percent of their datacenter being x86, to make a strategic change to add another platform in volume in their datacenters. A year ago I would have said we are trying to get the hyperscalers interested; now they are all engaged, not just interested, engaged and actually working with us to figure out what are the right workloads to put Power on and when do they start that deployment and what’s their model for deployment or consumption. I can tell you who has an ODM ready, who doesn’t, who’s going to buy directly, so definitely significant progress.”

In the enterprise, King says very big companies are also looking at different consumption models. “Not exactly what the hyperscalers are doing, but some that are part of the open compute community are starting to look at whether there is something similar they would do to what the hyperscale community is doing. That could be an interesting OpenPOWER market, besides just buying servers directly from IBM or our partners.”

King and McCredie say there are at least five to seven large enterprises looking at consuming OpenPOWER; several have Power systems inside now, but they are all also starting to stand up their own clouds. “What’s amazing is they are realizing, which is not a big secret in the industry, they are all competing against the big Internet datacenters and hyperscale guys in one way or another,” says King.

In the traditional HPC-consuming world, IBM’s strategy sounds like that of most of its brethren, which can be boiled down to: the Top500 and Linpack shouldn’t drive product development and are a poor overall metric; that said, establishing one’s place in the Top500 is important because the list is still closely watched by important buyers in government, academia, and elsewhere.

“We look at the success we had on CORAL and it’s because we did a lot of great work on real workloads not just a Linpack bid. On the other hand the world is right now starting to get competitive and the U.S. lock on the Top500 just isn’t there. You’ve got to go fix that and I think we have to help people fix that.”

One point Snell makes shouldn’t be forgotten: even if IBM is successful in achieving its 20-30 percent market share goal by the end of the decade – an immense achievement for sure – “Intel would still have a dominant market share, while having successfully moved up the value chain with the incorporation of more technologies into its Scalable System Framework approach, and Intel could rebuild share from that position of strength.

“In the near term (2016, 2017), OpenPOWER should focus on its assets, particularly its leadership in GPU acceleration and data-centric computing. This battle will be played out in software more than in hardware, and OpenPOWER needs to build as much momentum as it can. IBM will need to see volume market penetration beginning in 2016, coupled with a few more high-profile wins, in order to be on track.”

UPDATED, Jan 20: IBM released its full-year and latest quarterly results after this article was posted. Big Blue beat analysts’ consensus forecasts for earnings, but revenue slipped. Here’s an excerpt from IBM’s press release:

“We continue to make significant progress in our transformation to higher value. In 2015, our strategic imperatives of cloud, analytics, mobile, social and security grew 26 percent to $29 billion and now represent 35 percent of our total revenue,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “We strengthened our existing portfolio while investing aggressively in new opportunities like Watson Health, Watson Internet of Things and hybrid cloud.  As we transform to a cognitive solutions and cloud platform company, we are well positioned to continue delivering greater value to our clients and returning capital to our shareholders.”

Fourth-quarter net income from continuing operations was $4.5 billion compared with $5.5 billion in the fourth quarter of 2014, down 19 percent.  Operating (non-GAAP) net income was $4.7 billion compared with $5.8 billion in the fourth quarter of 2014, down 19 percent.  The prior-year gain from the divestiture of the System x business impacted operating net income by 19 points.

Total revenues from continuing operations for the fourth quarter of 2015 of $22.1 billion were down 9 percent (down 2 percent adjusting for currency) from the fourth quarter of 2014. For the full results see: http://www.hpcwire.com/off-the-wire/24279/

Initial reaction in the media was mixed as indicated here:

Forbes.com: IBM Finally Beats Earnings Consensus Again In Q4, But Has It Turned A Corner?
Wall Street Journal.com: IBM Revenue Slides, but Cloud Business Grows
New York Times.com: IBM Reports Declines in Fourth-Quarter Profit and Revenue Despite Gains in New Fields

[i] IDC HPC Update presented at SC15
