AMD’s EPYC Road to Redemption in Six Slides

By John Russell

June 21, 2018

A year ago AMD returned to the server market with its EPYC processor line. The earth didn’t tremble but folks took notice. People remember the Opteron fondly but later versions of the Bulldozer line not so much. Fast forward to last Thursday when AMD executive Forrest Norrod, SVP and GM, Datacenter and Embedded Solutions, livestreamed what amounted to a clear, full-year victory lap with aspirations for much more.

Mostly, these no-news events are unwelcome. This time perhaps not. There are many pieces to AMD’s nascent revival in the server market, not least competitive pressure from Arm and IBM/Power, a bevy of young AI-chip wannabes, and, of course, Intel’s own apparent manufacturing problems at 10nm. (I still think Intel has something percolating in the lab that will show up in Aurora, but that remains some distance off.)

AMD has had a remarkable year with remarkably few missteps. From 0.2 percent market share to something over 1 percent today in the server CPU market, with what Norrod calls a “clear line of sight” to mid-single digits by the end of 2018. (That’s still tiny compared to Intel.) Major OEMs/ODMs are onboard with around 50 systems in the market. Adoption by hyperscalers (Azure, Baidu, others) has been solid. A working – at least in the lab – 7nm chip will sample in the second half of 2018 and launch in 2019. I would mention a robust technology roadmap, but those have a way of being banged about so we’ll see what Rome and Milan bring. And something called a one-socket market, with the potential to make waves in many places.

Consider this excerpt (6/17/18) from a Motley Fool[i] post:

Nomura Securities recently published a research note in which it says Intel (NASDAQ:INTC) CEO Brian Krzanich “was very matter-of-fact in saying that Intel would lose server share to AMD (NASDAQ:AMD) in the second half of 2018.”

“This wasn’t new news,” the analysts said, “but we thought it was interesting that Krzanich did not draw a firm line in the sand as it relates to AMD’s potential gains in servers.”

Apparently, according to the analysts, Krzanich merely “indicated that it was Intel’s job not to let AMD capture 15-20% market share.”

Just wow (Krzanich resigned today over different matters).

Just six slides from Norrod’s presentation tell the core of AMD’s success story this year and provide insight into its plans not to falter next year (slides taken from the AMD deck). Let’s start with netting out what all of the technology investment and mea culpas over past mistakes have wrought with the slide shown below.

AMD is ecstatic with the names on this slide. It’s a first-year report card for AMD’s plunge back into the market. Without question cost-performance has been the major driver (perhaps along with a desire for leverage against an Intel perceived as vulnerable). A ~$4,200 top SKU versus a ~$13,000 top SKU makes a difference, as does a broad enough portfolio with similar TCO advantages aimed at more mainstream SKUs and workloads. Leaving aside for a moment the details of the EPYC technology/TCO arguments (slides aplenty), this is the payoff slide.
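To put rough numbers on the cost-performance argument, here is a back-of-envelope performance-per-dollar calculation using the SPECrate 2017 integer scores and list prices from the article’s own footnotes (NAP-98). It counts CPU cost only – no memory, chassis, or power – so treat it as a sketch, not a TCO model:

```python
# Rough perf-per-dollar using the SPECrate 2017 integer scores and
# list prices cited in footnote NAP-98. Two-socket systems, so
# CPU cost = 2 x per-unit price.
epyc_score, epyc_price = 310, 2 * 4200    # 2 x EPYC 7601 at $4,200 (1ku)
xeon_score, xeon_price = 309, 2 * 13011   # 2 x Xeon Platinum 8180M at $13,011

epyc_ppd = epyc_score / epyc_price
xeon_ppd = xeon_score / xeon_price
print(f"EPYC: {epyc_ppd:.4f} SPECrate/$, Xeon: {xeon_ppd:.4f} SPECrate/$")
print(f"EPYC advantage: {epyc_ppd / xeon_ppd:.1f}x")  # roughly 3.1x on CPU cost alone
```

The ~3.1x result lines up with the integer price-performance figure AMD claims in its slides, though only at the top of both SKU stacks.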

While the EPYC wins cut across market segments – including a nice Cray win in its CS500 cluster supercomputer series – the bigger volume buys have come from what Norrod calls the mega-datacenters (cloud).

“The mega-datacenter customers tend to buy in larger quantities when they buy. And they tend to move a little bit faster because they’re validating and selecting for a smaller number of workloads,” said Norrod. “The more traditional enterprise end customer has a rich set of applications that built up in their datacenters over a long period of time. They are running everything from their Oracle DB to the web servers. They take a little longer and are reached through the OEMs. Thinking about this year, the datacenter customers moved a little bit faster than end customers.

“[The] mega datacenters probably [account for] 40-45 percent of the overall server CPU market. Frankly of those, the top three are probably ten percent each – Microsoft, Google, and Amazon. About 35 percent of the market is the traditional server OEM. The balance is probably HPC and the next wave of cloud companies, and I would lump them together because they have got pretty similar characteristics for making decisions. It’s always a very technical sales [process]. It’s always very tight performance criteria or TCO criteria. A lot of those are going by the moniker of the Next Wave. [For EPYC] we see strong applicability into all three segments. Over time I would love to have a pretty balanced share across those three segments.”

Norrod was also asked about the ramp up process with hyperscalers.

“First off, it’s a long process,” he said. “We began engagement with the mega-datacenters early last year and maybe late ‘16 when we really had EPYC processors that were representative of the full performance that we were going to be bringing to market. Part of that quite frankly was we had credibility to gain. Some customers were saying, ‘show me that you really do have what you say you have.’ We would ship a small number of reference designs. Generally, they would all kick the tires.

“Once we got them interested, they would typically go to their favorite ODM or contract manufacturer, specify a system, and they would build hundreds of units as a pilot build and do initial trials. Again, this is in their configuration for their workloads. [They were looking to see] is this performance really being delivered. Then they would do pilot runs. So thousands of systems, and then to make sure nothing happens when they put those systems – clusters – into the data center: do their operations get disrupted, is the performance available at scale, do their sys admins have any learning curve to learn the new EPYC-based systems. So they would go through that and finally would start going into production, tens or multiple tens of thousands of units. Most of the cloud guys now are in that initial production or maybe the pilot. That whole process is nine months to a year.”

Clearly all of AMD’s market partners expect success from their EPYC investments (via cloud and systems sales), or next year’s progress report, come June 2019, will sound different.

What attracted AMD’s partners in the first place was a strong technology offering and seemingly very competitive cost advantages. EPYC, designed from the ground up on AMD’s new Zen architecture, has several compelling attributes that result in high memory bandwidth and IO capacity along with versatile accelerator connectivity. And, of course, there’s drop-in compatibility with the rest of the massive x86 world. There is no appreciable software lift required, although one occasionally hears grumbles over AMD’s tools; a favorite workaround, many users say, is simply to use Intel tools.

The first generation EPYC is fabbed using a 14nm process at GlobalFoundries. Norrod boasts, “Our 32-core design is the largest capacity in the industry. Our [128] PCIe Gen3 lanes make it easy to attach Radeon (AMD’s GPU) and other accelerators or Flash storage which is increasingly connected through PCIe.” He emphasized growing bandwidth and IO demands across most applications. It’s probably worth noting that Arm also trumpets memory bandwidth – eight channels per CPU versus Intel’s six – as a competitive advantage.
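The eight-versus-six channel comparison translates into a sizable theoretical peak-bandwidth edge. A quick sketch, assuming DDR4-2666 on both sides (channel counts as cited in the article; the arithmetic is mine):

```python
# Back-of-envelope theoretical peak memory bandwidth per socket.
# DDR4-2666 moves 2666 MT/s at 8 bytes per transfer per channel.
def peak_bw_gbs(channels, mts=2666, bytes_per_transfer=8):
    return channels * mts * bytes_per_transfer / 1000  # GB/s (decimal)

epyc_bw = peak_bw_gbs(8)   # EPYC: eight DDR4 channels
xeon_bw = peak_bw_gbs(6)   # Xeon Scalable: six DDR4 channels
print(f"EPYC ~{epyc_bw:.0f} GB/s vs Xeon ~{xeon_bw:.0f} GB/s "
      f"({epyc_bw / xeon_bw:.2f}x)")
```

Real sustained bandwidth is lower on both parts, but the one-third ratio between the theoretical peaks is what the channel-count argument rests on.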

Interestingly, EPYC is a multi-die design with four dies in each package. AMD says the approach adds flexibility, boosts die yields, and allows it to add features to the package. AMD likewise trumpets its Infinity Fabric interconnect, which it argues is architected to “efficiently extend beyond the SoC” and permits use of one protocol for on-die, die-to-die, and socket-to-socket communication.
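The yield claim is easy to illustrate with a standard Poisson defect model, Y = exp(−D·A): bigger dies catch more fatal defects. The defect density and die areas below are illustrative assumptions, not AMD figures:

```python
import math

# Poisson yield model: the probability a die of area A (cm^2) has zero
# fatal defects at defect density D (defects/cm^2) is exp(-D * A).
def yield_rate(defect_density, area_cm2):
    return math.exp(-defect_density * area_cm2)

D = 0.4                        # assumed defects per cm^2 (illustrative)
mono_area = 7.8                # hypothetical monolithic 32-core die, cm^2
chiplet_area = mono_area / 4   # one of four smaller dies

print(f"monolithic die yield: {yield_rate(D, mono_area):.1%}")    # ~4.4%
print(f"per-chiplet yield:    {yield_rate(D, chiplet_area):.1%}") # ~45.8%

# Good-silicon cost scales roughly with area / yield; four good small
# dies cost far less wasted silicon than one good monolithic die.
mono_cost = mono_area / yield_rate(D, mono_area)
chiplet_cost = 4 * chiplet_area / yield_rate(D, chiplet_area)
print(f"relative silicon cost: {mono_cost / chiplet_cost:.1f}x")  # ~10.4x here
```

The absolute numbers depend entirely on the assumed defect density, but the direction of the effect holds for any D > 0, which is the economic logic behind the four-die package.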

As you can see from the slide above, AMD is claiming advantages on SPEC floating point (~3.5x) and SPEC integer (~3.1x) tests versus the Intel Xeon Platinum 8180M. At least as important are price-performance comparisons lower in the SKU line (see slide), and AMD is even louder in promoting these differences. “Very few people actually buy $10-, $12-, or $13,000 processors. We believe we have delivered two-socket systems that compete with the best the competition has to offer,” said Norrod.

Benchmark claims are often difficult to validate, and for that reason we’ve included AMD footnotes[ii] on testing environments at the end of the article. The widespread early adoption suggests that, on balance, EPYC performs as promised.

One of the more intriguing aspects to AMD’s return to the server market is its aspiration to create a single-socket market where, for most practical purposes, none existed.

“Without a significant two-socket business to worry about cannibalizing, we were free to act. We are offering capabilities that the vast majority of customers currently buying two-socket server processors need, at a substantially better TCO,” said Norrod. “HPE, Dell and Supermicro all took leadership positions by putting enterprise-class single-socket systems into the market and positioning them against the competition’s dual-socket [systems].” So they did. Baidu too, in the form of single-socket instances.

He emphasized the primary CPU is only a fraction of the total cost: “With EPYC we can drive lower power, better cost of software in many cases, and really everything else in the network where the cost is often determined by the number of sockets.” He claims a TCO advantage of 20-30 percent over the life of the system.

“EPYC addresses [roughly] 60-to-80 percent of today’s workloads well, with a significant competitive advantage in probably 40 percent of those,” said Norrod. “Based on our multi-year roadmap I believe the next generation roadmap will have a growing advantage across a larger number of workloads. This should disrupt and redefine large portions of the market. [For example, a] big part of the market is the cloud, and for cloud deployments it’s all about the cost per VM. We deliver the best cost per VM in the industry.”

The single socket gambit is fascinating. It does seem as if many workloads could be sufficiently served by AMD’s one-socket solution. We’ll see. (AMD footnotes on Single Socket comparison[iii])
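The one-socket economics can be sketched from the article’s footnote NAP-62, which pits one EPYC 7551P against a dual-socket Xeon Gold 5118 box. The per-socket software licensing fee below is an illustrative assumption of mine, not a figure from the article, included because per-socket licensing is one of the TCO levers Norrod alludes to:

```python
# Single-socket vs dual-socket sketch using footnote NAP-62's
# SPECrate 2017 int base scores and CPU list prices.
epyc_score, epyc_cpu_cost = 93, 2100        # 1 x EPYC 7551P ($2,100 1ku)
xeon_score, xeon_cpu_cost = 86.2, 2 * 1273  # 2 x Xeon 5118 ($1,273 each)

# Hypothetical per-socket software license fee (assumption, not AMD data).
license_per_socket = 1500
epyc_total = epyc_cpu_cost + 1 * license_per_socket
xeon_total = xeon_cpu_cost + 2 * license_per_socket

print(f"EPYC 1S: {epyc_score / epyc_total:.4f} SPECrate/$")
print(f"Xeon 2S: {xeon_score / xeon_total:.4f} SPECrate/$")
```

Even before the assumed license fee, the single EPYC part beats the two Xeons on raw score; per-socket costs only widen the gap, which is the core of the one-socket pitch.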

So what’s next? The industry has of course buzzed with discussion of Intel’s 10nm process plans versus commercially available 7nm processes. Norrod took a gentle shot at Intel.

“There’s sort of lies, damn lies, and statistics…and process node games. So what our competitor refers to as their 10nm process and the industry standard 7nm process that we have access to from multiple foundries are in many ways roughly equivalent, with maybe a slight nod to the 7nm in terms of SRAM density and a few other parameters. But I think for the first time, in a very long time, maybe ever, we have access to [manufacturing] technology that is at parity with our principal competitor, and that’s a new paradigm in manufacturing process technology,” he said.

“Sort of a law of nature for the last 40 years has been Intel has a process lead. The 7nm process gives us a lot of options – increased density and much better power efficiency – that translate into more capability and more performance, more features, and we are adding extra innovation on top of that. We’ve got a pretty aggressive roadmap,” said Norrod.

AMD has outlined a roadmap for next-gen EPYC processors (Rome and Milan) and Norrod promises, “Beyond the first generation the roadmap will press on with more cores, more bandwidth, more capability, and more differentiated features.” Currently the 14nm EPYC part is built by GlobalFoundries. AMD’s other major manufacturing partner is TSMC. “We are going to use both for 7nm. We haven’t commented beyond that.”

At Computex earlier this month, AMD CEO Lisa Su held up an early 7nm device. As noted earlier, AMD plans to sample 7nm EPYC chips in the second half of 2018, followed by launch in 2019.

In April, AMD reported quarterly revenue totaling $1.65 billion, up 40 percent from the same quarter last year. The big gains prompted observers to suggest AMD has grabbed market share from Intel, which also reported strong quarterly results as both chip makers benefit from heightened big data processing demand in datacenters (see HPCwire article, AMD Continues to Nip at Intel’s Heels). That’s not all EPYC. AMD’s Radeon GPU line is also faring well and the chip market is strong overall. Intel reported $16.1 billion for the quarter citing heavy datacenter demand for its Xeon Scalable processors.

It’s important to restate that AMD’s market share is tiny compared to Intel’s. That said, given the size of the market, a small market share at this time is probably one of AMD’s greatest strengths – it has enormous headroom for a while. If AMD delivers as promised, Intel will be hard pressed to prevent AMD from taking significant market share. How much remains to be seen.

“We see the data center as a $21 billion-plus opportunity with a combination of CPUs and GPUs. Talking only the CPU side, and really overall x86 is the dominant architecture, we are only one of two vendors that have access to the x86 technology that’s most relevant to the server market today. We estimate just the silicon portion of that market to be well over $15 billion. Today we have roughly one percent market share, a little more, but a vast opportunity ahead of us,” said Norrod.

Whether this is a one-time victory lap or a sign of persistent renewed strength for AMD in the server CPU business remains to be seen. Stay tuned…

Link to slide deck: http://ir.amd.com/static-files/dc38c6eb-627c-4ed0-9618-49c34eb8c14f

Blog Link: https://community.amd.com/community/amd-business/blog/2018/06/20/an-unforgettable-year-365-days-of-amd-epyc

[i]Intel’s CEO Just Validated the AMD Data Center Processor Threat

[ii]Slide 9 – SPEC:

NAP-98

Based on SPECrate®2017_int_peak results published on www.spec.org as of April 2018. AMD-based system scored 310 on a Supermicro A+ Server 2123BT-HNC0R configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 309 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

NAP-99

Based on SPECrate®2017_fp_peak results published on www.spec.org as of April 2018. AMD-based system scored 279 on a Supermicro A+ Server 4023S-TRT configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 250 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

Slide 10 – TCO:

Pricing ranges based on Intel recommended customer pricing per ark.intel.com Oct 2017; AMD 1Ku pricing June 2017. Results may vary.

NAP-87

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 196 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7601 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 169.8 in tests conducted in AMD labs configured with 2 x Xeon 8160 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-88

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 149 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7401 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 118.1 in tests conducted in AMD labs configured with 2 x Xeon 6130 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-89

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 123 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-90

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 113 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 78.7 in tests conducted in AMD labs configured with 2 x Xeon 4116 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-91

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 106 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.

[iii]Slide 12 – Single Socket TCO:

NAP-62

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 93 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7551P SOC ($2100 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s (2 x $1273 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-63

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 77 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7401P SOC ($1075 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s (2 x $694 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.

NAP-64

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 62 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7351P SOC ($750 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 47.7 in tests conducted in AMD labs configured with 2 x Xeon 4108 CPU’s (2 x $417 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.

NAP-65

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 54 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7281 SOC ($650 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 32.8 in tests conducted in AMD labs configured with 2 x Xeon 3106 CPU’s (2 x $306 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2133MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.
