AMD’s EPYC Road to Redemption in Six Slides

By John Russell

June 21, 2018

A year ago AMD returned to the server market with its EPYC processor line. The earth didn’t tremble, but folks took notice. People remember Opteron fondly; the later versions of the Bulldozer line, not so much. Fast forward to last Thursday, when AMD executive Forrest Norrod, SVP and GM, Datacenter and Embedded Solutions, livestreamed what amounted to a clear, full-year victory lap with aspirations for much more.

Mostly, these no-news events are unwelcome. This time perhaps not. There are many pieces to AMD’s nascent revival in the server market, not least competitive pressure from Arm and IBM/Power, a bevy of young AI-chip wannabes, and, of course, Intel’s own apparent manufacturing problems at 10nm. (I still think Intel has something percolating in the lab that will show up in Aurora, but that remains some distance off.)

AMD has had a remarkable year with remarkably few missteps. From 0.2 percent market share to something over 1 percent today in the server CPU market, with what Norrod calls a “clear line of sight” to mid-single digits by the end of 2018. (That’s still tiny compared to Intel.) Major OEMs/ODMs are onboard with around 50 systems in the market. Adoption by hyperscalers (Azure, Baidu, others) has been solid. A working – at least in the lab – 7nm chip will sample in the second half of 2018 and launch in 2019. I would mention a robust technology roadmap, but those have a way of being banged about, so we’ll see what Rome and Milan bring. And there’s something called a one-socket market – with the potential to make waves in many places.

Consider this excerpt (6/17/18) from a Motley Fool[i] post:

Nomura Securities recently published a research note in which it says Intel (NASDAQ:INTC) CEO Brian Krzanich “was very matter-of-fact in saying that Intel would lose server share to AMD (NASDAQ:AMD) in the second half of 2018.”

“This wasn’t new news,” the analysts said, “but we thought it was interesting that Krzanich did not draw a firm line in the sand as it relates to AMD’s potential gains in servers.”

Apparently, according to the analysts, Krzanich merely “indicated that it was Intel’s job not to let AMD capture 15-20% market share.”

Just wow (Krzanich resigned today over different matters).

Just six slides from Norrod’s presentation tell the core of AMD’s success story this year and provide insight into its plans not to falter next year (click to enlarge slides, taken from the AMD deck). Let’s start by netting out what all of the technology investment and mea culpas over past mistakes have wrought, with the slide shown below.

AMD is ecstatic with the names on this slide. It’s a first-year report card for AMD’s plunge back into the market. Without question cost-performance has been the major driver (perhaps along with desire for leverage against an Intel perceived as vulnerable). A ~$4,200 top SKU versus a ~$13,000 top SKU makes a difference, as does a broad enough portfolio with similar TCO advantages aimed at more mainstream SKUs and workloads. Leaving aside for a moment the details of the EPYC technology/TCO arguments (slides aplenty), this is the payoff slide.

While the EPYC wins cut across market segments – including a nice Cray win in its CS500 cluster supercomputer series – the bigger volume buys have come from what Norrod calls the mega-datacenters (cloud).

“The mega-datacenter customers tend to buy in larger quantities when they buy. And they tend to move a little bit faster because they’re validating and selecting for a smaller number of workloads,” said Norrod. “The more traditional enterprise end customer has a rich set of applications that built up in their datacenters over a long period of time. They are running everything from their Oracle DB to the web servers. They take a little longer and are reached through the OEMs. Thinking about this year, the datacenter customers moved a little bit faster than end customers.

“[The] mega datacenters probably [account for] 40-45 percent of the overall server CPU market. Frankly of those, the top three are probably ten percent each – Microsoft, Google, and Amazon. About 35 percent of the market is the traditional server OEM. The balance is probably HPC and the next wave of cloud companies, and I would lump them together because they have got pretty similar characteristics for making decisions. It’s always a very technical sales [process]. It’s always very tight performance criteria or TCO criteria. A lot of those are going by the moniker of the Next Wave. [For EPYC] we see strong applicability into all three segments. Over time I would love to have a pretty balanced share across those three segments.”

Norrod was also asked about the ramp up process with hyperscalers.

“First off, it’s a long process,” he said. “We began engagement with the mega-datacenters early last year and maybe late ‘16 when we really had EPYC processors that were representative of the full performance that we were going to be bringing to market. Part of that quite frankly was we had credibility to gain. Some customers were saying, ‘show me that you really do have what you say you have.’ We would ship a small number of reference designs. Generally, they would all kick the tires.

“Once we got them interested, they would typically go to their favorite ODM or contract manufacturer, specify a system, and build hundreds of units as a pilot build and do initial trials. Again, this is in their configuration for their workloads. [They were looking to see] is this performance really being delivered. Then they would do pilot runs – thousands of systems – to make sure nothing happens when they put those systems – clusters – into the datacenter: do their operations get disrupted, is the performance available at scale, do their sys admins have any learning curve with the new EPYC-based systems. So they would go through that and finally would start going into production, tens or multiple tens of thousands of units. Most of the cloud guys now are in that initial production or maybe the pilot. That whole process is nine months to a year.”

Clearly all of AMD’s market partners expect success from their EPYC investments (via cloud and systems sales) – or next June’s progress report will sound different.

What attracted AMD’s partners in the first place was a strong technology offering and seemingly very competitive cost advantages. EPYC, designed from the ground up on AMD’s new Zen architecture, has several compelling attributes that result in high memory bandwidth and IO capacity along with versatile accelerator connectivity. Of course, there’s also drop-in compatibility with the rest of the massive x86 world. There is no appreciable software lift required, although one occasionally hears grumbles over AMD’s tools; a favorite workaround, many users say, is simply to use Intel tools.

The first generation EPYC is fabbed using a 14nm process at GlobalFoundries. Norrod boasts, “Our 32-core design is the largest capacity in the industry. Our [128] PCIe Gen3 lanes make it easy to attach Radeon (AMD’s GPU) and other accelerators or Flash storage which is increasingly connected through PCIe.” He emphasized growing bandwidth and IO demands across most applications. It’s probably worth noting that Arm also trumpets memory bandwidth – eight channels per CPU versus Intel’s six – as a competitive advantage.

Interestingly, EPYC is a multi-die design with four dies in each package. AMD says the approach adds flexibility, boosts die yields, and allows it to add features to the package. AMD likewise trumpets its Infinity Fabric interconnect, which it argues is architected to “efficiently extend beyond the SoC” and permits use of one protocol for on-die, die-to-die, and socket-to-socket communication.

As you can see from the slide above, AMD is claiming price-performance advantages on SPEC floating point (~3.5x) and SPEC integer (~3.1x) tests versus the Intel Xeon Platinum 8180M. At least as important are price-performance comparisons lower in the SKU line (see slide at side), and AMD is even louder in promoting these differences. “Very few people actually buy $10-, $12-, or $13,000 processors. We believe we have delivered two-socket systems that compete with the best the competition has to offer,” said Norrod.

Benchmark claims are often difficult to validate, and for that reason we’ve included AMD footnotes[ii] on testing environments at the end of the article. The widespread early adoption suggests that, on balance, EPYC performs as promised.
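As a rough sanity check, the ~3.1x and ~3.5x figures can be reproduced from the scores and CPU list prices in AMD’s NAP-98/NAP-99 footnotes. This sketch counts CPU list price only – not memory, chassis, power, or the rest of the TCO picture:

```python
# Reproduce AMD's price-performance claims from the NAP-98/NAP-99
# footnotes: SPECrate scores divided by the cost of the two CPUs.

def perf_per_dollar(score, cpu_price, sockets=2):
    """SPECrate points per dollar of CPU spend (list price only)."""
    return score / (sockets * cpu_price)

EPYC_7601_PRICE = 4200     # AMD 1ku pricing, per footnote
XEON_8180M_PRICE = 13011   # per ark.intel.com, per footnote

# SPECrate2017_int_peak: EPYC scored 310, Xeon Platinum 8180M scored 309
int_ratio = (perf_per_dollar(310, EPYC_7601_PRICE)
             / perf_per_dollar(309, XEON_8180M_PRICE))

# SPECrate2017_fp_peak: EPYC scored 279, Xeon Platinum 8180M scored 250
fp_ratio = (perf_per_dollar(279, EPYC_7601_PRICE)
            / perf_per_dollar(250, XEON_8180M_PRICE))

print(f"integer price-performance advantage: {int_ratio:.1f}x")
print(f"floating point price-performance advantage: {fp_ratio:.1f}x")
```

On these numbers the raw integer scores are a near tie (310 vs. 309); the multiples come almost entirely from the roughly 3x difference in CPU price.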

One of the more intriguing aspects to AMD’s return to the server market is its aspiration to create a single-socket market where, for most practical purposes, none existed.

“Without a significant two-socket business to worry about cannibalizing we were free to act. We are offering capabilities that the vast majority of customers that are currently buying two-socket server processors need at a substantially better TCO,” said Norrod. “HPE, Dell and Supermicro all took leadership positions by putting enterprise-class single-socket systems into the market and positioning them against the competition’s dual-socket offerings.” So they did. Baidu too, in the form of single-socket instances.

He emphasized that the primary CPU is only a fraction of total system cost: “With EPYC we can drive lower power, better cost of software in many cases, and really everything else in the network where the cost is often determined by the number of sockets.” He claims a 20-30 percent TCO advantage over the life of the system.

“EPYC addresses [roughly] 60-to-80 percent of today’s workloads well, and with a significant competitive [advantage] in probably 40 percent of those,” said Norrod. “Based on our multi-year roadmap I believe the next generation will have a growing advantage across a larger number of workloads. This should disrupt and redefine large portions of the market. [For example, a] big part of the market is the cloud, and for cloud deployments it’s all about the cost per VM. We deliver the best cost per VM in the industry.”

The single-socket gambit is fascinating. It does seem as if many workloads could be sufficiently served by AMD’s one-socket solution. We’ll see. (AMD footnotes on single-socket comparison[iii])
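The arithmetic behind the one-socket pitch can be checked against AMD’s NAP-62 footnote, which pits one EPYC 7551P against two Xeon Gold 5118s. Again a sketch on CPU prices alone, not full-system TCO:

```python
# One-socket EPYC vs. two-socket Xeon, per the NAP-62 footnote:
#   1 x EPYC 7551P   ($2100 at 1ku),  SPECrate2017_int_base 93
#   2 x Xeon 5118    ($1273 each),    SPECrate2017_int_base 86.2

epyc_score, epyc_cpu_cost = 93, 1 * 2100
xeon_score, xeon_cpu_cost = 86.2, 2 * 1273

perf_gain = epyc_score / xeon_score - 1                # absolute score
value_gain = ((epyc_score / epyc_cpu_cost)
              / (xeon_score / xeon_cpu_cost)) - 1      # score per CPU dollar

print(f"performance: +{perf_gain:.0%}")
print(f"perf per CPU dollar: +{value_gain:.0%}")
```

The single socket delivers slightly more absolute performance for less silicon spend – roughly in line with the 20-30 percent TCO advantage Norrod claims once power, software licensing, and networking are folded in.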

So what’s next? The industry has of course buzzed with discussion of Intel’s 10nm process plans versus commercially available 7nm processes. Norrod took a gentle shot at Intel.

“There’s sort of lies, damn lies, and statistics…and process node games. So what our competitor refers to as their 10nm process and the industry standard 7nm process that we have access to from multiple foundries are in many ways roughly equivalent, with maybe a slight nod to the 7nm in terms of SRAM density and a few other parameters. But I think for the first time, in a very long time, maybe ever, we have access to [manufacturing] technology that is at parity with our principal competitor, and that’s a new paradigm in manufacturing process technology,” he said.

“Sort of a law of nature for the last 40 years has been Intel has a process lead. The 7nm process gives us a lot of options, increased density and much better power efficiency, that translates into more capability and more performance, more features, and we are adding extra innovation on top of that. We’ve got a pretty aggressive roadmap,” said Norrod.

AMD has outlined a roadmap for next-gen EPYC processors (Rome and Milan) and Norrod promises, “Beyond the first generation the roadmap will press on with more cores, more bandwidth, more capability, and more differentiated features.” Currently the 14nm EPYC part is being built by GlobalFoundries. AMD’s other major manufacturing partner is TSMC. “We are going to use both for 7nm. We haven’t commented beyond that.”

At Computex earlier this month, AMD CEO Lisa Su held up an early 7nm device. As noted earlier, AMD plans to sample 7nm EPYC chips in the second half of 2018, followed by launch in 2019.

In April, AMD reported quarterly revenue totaling $1.65 billion, up 40 percent from the same quarter last year. The big gains prompted observers to suggest AMD has grabbed market share from Intel, which also reported strong quarterly results as both chip makers benefit from heightened big data processing demand in datacenters (see HPCwire article, AMD Continues to Nip at Intel’s Heels). That’s not all EPYC. AMD’s Radeon GPU line is also faring well and the chip market is strong overall. Intel reported $16.1 billion for the quarter citing heavy datacenter demand for its Xeon Scalable processors.

It’s important to restate that AMD’s market share is tiny compared to Intel. That said, given the size of the market, a small market share at this time is probably one of AMD’s greatest strengths – it has enormous headroom for a while. If AMD delivers as promised, Intel will be hard pressed to prevent AMD from taking significant market share. How much remains to be seen.

“We see the data center as a $21 billion-plus opportunity with a combination of CPUs and GPUs. Talking only the CPU side, and really overall x86 is the dominant architecture, we are only one of two vendors that have access to the x86 technology that’s most relevant to the server market today. We estimate just the silicon portion of that market to be well over $15 billion. Today we have roughly one percent market share, a little more, but a vast opportunity ahead of us,” said Norrod.

Whether this is a one-time victory lap or a sign of persistent renewed strength for AMD in the server CPU business remains to be seen. Stay tuned…

Link to slide deck: http://ir.amd.com/static-files/dc38c6eb-627c-4ed0-9618-49c34eb8c14f

Blog Link: https://community.amd.com/community/amd-business/blog/2018/06/20/an-unforgettable-year-365-days-of-amd-epyc

[i] Intel’s CEO Just Validated the AMD Data Center Processor Threat

[ii] Slide 9 – SPEC:

NAP-98

Based on SPECrate®2017_int_peak results published on www.spec.org as of April 2018. AMD-based system scored 310 on a Supermicro A+ Server 2123BT-HNC0R configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 309 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

NAP-99

Based on SPECrate®2017_fp_peak results published on www.spec.org as of April 2018. AMD-based system scored 279 on a Supermicro A+ Server 4023S-TRT configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 250 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

Slide 10 – TCO:

Pricing ranges based on Intel recommended customer pricing per ark.intel.com Oct 2017; AMD 1Ku pricing June 2017. Results may vary.

NAP-87

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 196 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7601 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 169.8 in tests conducted in AMD labs configured with 2 x Xeon 8160 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-88

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 149 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7401 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 118.1 in tests conducted in AMD labs configured with 2 x Xeon 6130 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-89

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 123 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-90

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 113 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 78.7 in tests conducted in AMD labs configured with 2 x Xeon 4116 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-91

Estimates based on SPECint®_rate_base2017 using the GCC-02 v7.2 compiler. AMD-based system scored 106 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.

[iii] Slide 12 – Single Socket TCO:

NAP-62

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 93 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7551P SOC ($2100 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s (2 x $1273 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-63

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 77 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7401P SOC ($1075 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s (2 x $694 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-64

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 62 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7351P SOC ($750 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 47.7 in tests conducted in AMD labs configured with 2 x Xeon 4108 CPU’s (2 x $417 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-65

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 54 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7281 SOC ($650 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 32.8 in tests conducted in AMD labs configured with 2 x Xeon 3106 CPU’s (2 x $306 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2133MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.
