AMD’s EPYC Road to Redemption in Six Slides

By John Russell

June 21, 2018

A year ago AMD returned to the server market with its EPYC processor line. The earth didn’t tremble but folks took notice. People remember the Opteron fondly but later versions of the Bulldozer line not so much. Fast forward to last Thursday when AMD executive Forrest Norrod, SVP and GM, Datacenter and Embedded Solutions, livestreamed what amounted to a clear, full-year victory lap with aspirations for much more.

Mostly, these no-news events are unwelcome. This time perhaps not. There are many pieces to AMD’s nascent revival in the server market, not least competitive pressure from Arm and IBM/Power, a bevy of young AI-chip wannabes, and, of course, Intel’s own apparent manufacturing problems at 10nm. (I still think Intel has something percolating in the lab that will show up in Aurora, but that remains some distance off.)

AMD has had a remarkable year with remarkably few missteps. From 0.2 percent market share to something over 1 percent today in the server CPU market, with what Norrod calls a “clear line of sight” to mid-single digits by the end of 2018. (That’s still tiny compared to Intel.) Major OEMs/ODMs are onboard with around 50 systems in the market. Adoption by hyperscalers (Azure, Baidu, others) has been solid. A working – at least in the lab – 7nm chip will sample in the second half of 2018 and launch in 2019. I would mention a robust technology roadmap, but those have a way of being banged about so we’ll see what Rome and Milan bring. And something called a one-socket market – with the potential to make waves in many places.

Consider this excerpt (6/17/18) from a Motley Fool[i] post:

Nomura Securities recently published a research note in which it says Intel (NASDAQ:INTC) CEO Brian Krzanich “was very matter-of-fact in saying that Intel would lose server share to AMD (NASDAQ:AMD) in the second half of 2018.”

“This wasn’t new news,” the analysts said, “but we thought it was interesting that Krzanich did not draw a firm line in the sand as it relates to AMD’s potential gains in servers.”

Apparently, according to the analysts, Krzanich merely “indicated that it was Intel’s job not to let AMD capture 15-20% market share.”

Just wow (Krzanich resigned today over different matters).

Just six slides from Norrod’s presentation tell the core of AMD’s success story this year and provide insight into its plans not to falter next year (click to enlarge slides, taken from the AMD deck). Let’s start with netting out what all of the technology investment and mea culpas over past mistakes have wrought with the slide shown below.

AMD is ecstatic with the names on this slide. It’s a first-year report card for AMD’s plunge back into the market. Without question cost-performance has been the major driver (perhaps along with desire for leverage against an Intel perceived as vulnerable). A ~$4,200 top SKU versus a ~$13,000 top SKU makes a difference, as does a broad enough portfolio with similar TCO advantages aimed at more mainstream SKUs and workloads. Leaving aside details of the EPYC technology/TCO arguments for a moment (slides aplenty), this is the payoff slide.

While the EPYC wins cut across market segments – including a nice Cray win in its CS500 cluster supercomputer series – the bigger volume buys have come from what Norrod calls the mega-datacenters (cloud).

“The mega-datacenter customers tend to buy in larger quantities when they buy. And they tend to move a little bit faster because they’re validating and selecting for a smaller number of workloads,” said Norrod. “The more traditional enterprise end customer has a rich set of applications that built up in their datacenters over a long period of time. They are running everything from their Oracle DB to the web servers. They take a little longer and are reached through the OEMs. Thinking about this year, the datacenter customers moved a little bit faster than end customers.

“[The] mega datacenters probably [account for] 40-45 percent of the overall server CPU market. Frankly of those, the top three are probably ten percent each – Microsoft, Google, and Amazon. About 35 percent of the market is the traditional server OEM. The balance is probably HPC and the next wave of cloud companies, and I would lump them together because they have got pretty similar characteristics for making decisions. It’s always a very technical sales [process]. It’s always very tight performance criteria or TCO criteria. A lot of those are going by the moniker of the Next Wave. [For EPYC] we see strong applicability into all three segments. Over time I would love to have a pretty balanced share across those three segments.”

Norrod was also asked about the ramp up process with hyperscalers.

“First off, it’s a long process,” he said. “We began engagement with the mega-datacenters early last year and maybe late ‘16 when we really had EPYC processors that were representative of the full performance that we were going to be bringing to market. Part of that quite frankly was we had credibility to gain. Some customers were saying, ‘show me that you really do have what you say you have.’ We would ship a small number of reference designs. Generally, they would all kick the tires.

“Once we got them interested, they would typically go to their favorite ODM or contract manufacturer, specify a system, and they would build hundreds of units as a pilot build and do initial trials. Again, this is in their configuration for their workloads. [They were looking to see] is this performance really being delivered. Then they would do pilot runs. So thousands of systems and then to make sure nothing happens when they put those systems – clusters – into the data center, do their operations get disrupted, is the performance available at scale, do their sys admins have any learning curve to learn the new EPYC-based systems. So they would go through that and finally would start going into production, tens or multiple tens of thousands of units. Most of the cloud guys now are in that initial production or maybe the pilot. That whole process is nine months to a year.”

Clearly all of AMD’s market partners expect success from their EPYC investments (via cloud and systems sales) or next year’s June 2019 progress report will sound different.

What attracted AMD’s partners in the first place was a strong technology offering and seemingly very competitive cost advantages. EPYC, designed from the ground up on AMD’s new Zen architecture, has several compelling attributes that result in high memory bandwidth and I/O capacity along with versatile accelerator connectivity. There is also, of course, its drop-in compatibility with the rest of the massive x86 world. No appreciable software lift is required, although one occasionally hears grumbles over AMD’s tools; many users say a favorite workaround is simply to use Intel tools.

The first generation EPYC is fabbed using a 14nm process at GlobalFoundries. Norrod boasts, “Our 32-core design is the largest capacity in the industry. Our [128] PCIe Gen3 lanes make it easy to attach Radeon (AMD’s GPU) and other accelerators or Flash storage, which is increasingly connected through PCIe.” He emphasized growing bandwidth and I/O demands across most applications. It’s probably worth noting that Arm also trumpets memory bandwidth – eight channels per CPU versus Intel’s six – as a competitive advantage.

Interestingly, EPYC is a multi-die design with four die in each package. AMD says the approach adds flexibility, boosts die yields, and allows it to add features to the package. AMD likewise trumpets its Infinity Fabric interconnect, which it argues is architected to “efficiently extend beyond the SoC” and permits use of one protocol for on-die, die-to-die, and socket-to-socket communication.

As you can see from the slide above, AMD is claiming price-performance advantages on SPEC floating point (~3.5x) and SPEC integer (~3.1x) tests versus the Intel Xeon Platinum 8180M. At least as important are price-performance comparisons lower in the SKU line (see slide), and AMD is even louder in promoting these differences. “Very few people actually buy $10-, $12-, or $13,000 processors. We believe we have delivered two-socket systems that compete with the best the competition has to offer,” said Norrod.
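Those ratios can be roughly reproduced from the prices and SPEC scores AMD publishes in the footnotes[ii] at the end of this article; since the raw scores of the two systems are nearly equal, the multiples reflect performance per dollar. A quick back-of-envelope sketch (CPU list prices only, two sockets per system):

```python
# Performance-per-dollar from AMD's published footnote figures:
# SPECrate2017 peak scores and per-CPU list prices, 2-socket systems.
systems = {
    "2x EPYC 7601":  {"int": 310, "fp": 279, "cpu_price": 4200},
    "2x Xeon 8180M": {"int": 309, "fp": 250, "cpu_price": 13011},
}

def perf_per_dollar(name, metric):
    cfg = systems[name]
    return cfg[metric] / (2 * cfg["cpu_price"])  # two sockets

for metric in ("int", "fp"):
    ratio = (perf_per_dollar("2x EPYC 7601", metric)
             / perf_per_dollar("2x Xeon 8180M", metric))
    print(f"SPEC {metric} perf/$ advantage: {ratio:.1f}x")
    # int works out to ~3.1x, fp to ~3.5x, matching the slide's claims
```

This only counts processor cost, not memory, chassis, or power, so it is a sanity check on the slide's ratios rather than a full system comparison.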

Benchmark claims are often difficult to validate and for that reason we’ve included AMD footnotes[ii] on testing environments at the end of the article. The widespread early adoption suggests that, on balance, EPYC performs as promised.

One of the more intriguing aspects to AMD’s return to the server market is its aspiration to create a single-socket market where, for most practical purposes, none existed.

“Without a significant two-socket business to worry about cannibalizing, we were free to act. We are offering capabilities that the vast majority of customers currently buying two-socket server processors need, at a substantially better TCO,” said Norrod. “HPE, Dell and Supermicro all took leadership positions by putting enterprise-class single-socket systems into the market and positioning them against the competition’s dual-socket [offerings].” So they did. Baidu too, in the form of single-socket instances.

He emphasized the primary CPU is only a fraction of total system cost: “With EPYC we can drive lower power, better cost of software in many cases, and really everything else in the network where the cost is often determined by the number of sockets.” He claims a 20-30 percent TCO advantage over the life of the system.

“EPYC addresses [roughly] 60-to-80 percent of today’s workloads well, and with a significant competitive [advantage] in probably 40 percent of those,” said Norrod. “Based on our multi-year roadmap I believe the next generation roadmap will have a growing advantage across a larger number of workloads. This should disrupt and redefine large portions of the market. [For example, a] big part of the market is the cloud, and for cloud deployments it’s all about the cost per VM. We deliver [the] best cost per VM in the industry.”

The single-socket gambit is fascinating. It does seem as if many workloads could be sufficiently served by AMD’s one-socket solution. We’ll see. (AMD footnotes on single-socket comparison[iii])
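The one-socket pitch can be sanity-checked against AMD's own NAP-62 footnote, which pits one EPYC 7551P ($2,100) scoring 93 against two Xeon 5118s ($1,273 each) scoring 86.2 on SPECrate2017_int_base estimates. A minimal sketch, counting only CPU silicon cost (not the power, networking, and software components Norrod folds into his 20-30 percent TCO claim):

```python
# Single-socket vs dual-socket CPU-cost comparison from the NAP-62 footnote.
epyc_score, epyc_cost = 93.0, 1 * 2100   # 1x EPYC 7551P, AMD 1Ku price
xeon_score, xeon_cost = 86.2, 2 * 1273   # 2x Xeon 5118, ark.intel.com price

epyc_ppd = epyc_score / epyc_cost
xeon_ppd = xeon_score / xeon_cost
print(f"EPYC perf/$: {epyc_ppd:.4f}")
print(f"Xeon perf/$: {xeon_ppd:.4f}")
print(f"Advantage:   {epyc_ppd / xeon_ppd:.2f}x")  # ~1.31x on CPU cost alone
```

By this narrow measure the single EPYC socket delivers roughly 8 percent more integer throughput at about 18 percent lower CPU cost, which is the kernel of the single-socket argument before the broader per-socket cost savings are layered on.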

So what’s next? The industry has of course buzzed with discussion of Intel’s 10nm process plans versus commercially available 7nm processes. Norrod took a gentle shot at Intel.

“There’s sort of lies, damn lies, and statistics…and process node games. So what our competitor refers to as their 10nm process and the industry standard 7nm process that we have access to from multiple foundries are in many ways roughly equivalent, with maybe a slight nod to the 7nm in terms of SRAM density and a few other parameters. But I think for the first time, in a very long time, maybe ever, we have access to [manufacturing] technology that is at parity with our principal competitor, and that’s a new paradigm in manufacturing process technology,” he said.

“Sort of a law of nature for the last 40 years has been Intel has a process lead. The 7nm process gives us a lot of options, increased density and much better power efficiency, that translates into more capability and more performance, more features, and we are adding extra innovation on top of that. We’ve got a pretty aggressive roadmap,” said Norrod.

AMD has outlined a roadmap for next-gen EPYC processors (Rome and Milan) and Norrod promises, “Beyond the first generation the roadmap will press on with more cores, more bandwidth, more capability, and more differentiated features.” Currently the 14nm EPYC part is built by GlobalFoundries. AMD’s other major manufacturing partner is TSMC. “We are going to use both for 7nm. We haven’t commented beyond that.”

At Computex earlier this month, AMD CEO Lisa Su held up an early 7nm device. As noted earlier, AMD plans to sample 7nm EPYC chips in the second half of 2018, followed by launch in 2019.

In April, AMD reported quarterly revenue totaling $1.65 billion, up 40 percent from the same quarter last year. The big gains prompted observers to suggest AMD has grabbed market share from Intel, which also reported strong quarterly results as both chip makers benefit from heightened big data processing demand in datacenters (see HPCwire article, AMD Continues to Nip at Intel’s Heels). That’s not all EPYC. AMD’s Radeon GPU line is also faring well and the chip market is strong overall. Intel reported $16.1 billion for the quarter citing heavy datacenter demand for its Xeon Scalable processors.

It’s important to restate that AMD’s market share is tiny compared to Intel’s. That said, given the size of the market, a small market share at this time is probably one of AMD’s greatest strengths – it has enormous headroom for a while. If AMD delivers as promised, Intel will be hard pressed to prevent AMD from taking significant market share. How much remains to be seen.

“We see the data center as a $21 billion-plus opportunity with a combination of CPUs and GPUs. Talking only the CPU side, and really overall x86 is the dominant architecture, we are only one of two vendors that have access to the x86 technology that’s most relevant to the server market today. We estimate just the silicon portion of that market to be well over $15 billion. Today we have roughly one percent market share, a little more, but a vast opportunity ahead of us,” said Norrod.

Whether this is a one-time victory lap or a sign of persistent renewed strength for AMD in the server CPU business remains to be seen. Stay tuned…

Link to slide deck: http://ir.amd.com/static-files/dc38c6eb-627c-4ed0-9618-49c34eb8c14f

Blog Link: https://community.amd.com/community/amd-business/blog/2018/06/20/an-unforgettable-year-365-days-of-amd-epyc

[i]Intel’s CEO Just Validated the AMD Data Center Processor Threat

[ii]Slide 9 – SPEC:

NAP-98

Based on SPECrate®2017_int_peak results published on www.spec.org as of April 2018. AMD-based system scored 310 on a Supermicro A+ Server 2123BT-HNC0R configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 309 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

NAP-99

Based on SPECrate®2017_fp_peak results published on www.spec.org as of April 2018. AMD-based system scored 279 on a Supermicro A+ Server 4023S-TRT configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1ku pricing), 1TB memory (16 x 64GB DDR4 2666MHz), SUSE 12 SP3, Supermicro BIOS 1.0b, using the AOCC 1.0 compiler. Intel-based system scored 250 on a Cisco UCS C220 M5 server configured with 2 x 8180M CPU’s ($13,011 each per ark.intel.com), 384GB memory (24*16GB 2R DDR4 2666MHz), SLES 12 SP2, BIOS v3.2.1d, using the ICC 18.0.0.128 compiler.

Slide 10 – TCO:

Pricing ranges based on Intel recommended customer pricing per ark.intel.com Oct 2017; AMD 1Ku pricing June 2017. Results may vary.

NAP-87

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 196 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7601 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 169.8 in tests conducted in AMD labs configured with 2 x Xeon 8160 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-88

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 149 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7401 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 118.1 in tests conducted in AMD labs configured with 2 x Xeon 6130 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-89

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 123 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-90

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 113 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 78.7 in tests conducted in AMD labs configured with 2 x Xeon 4116 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting.

NAP-91

Estimates based on SPECrate®2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 106 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7351 SOC’s, 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s, 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings.

[iii]Slide 12 – Single Socket TCO:

NAP-62

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 93 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7551P SOC ($2100 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 86.2 in tests conducted in AMD labs configured with 2 x Xeon 5118 CPU’s (2 x $1273 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-63

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 77 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7401P SOC ($1075 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 67.6 in tests conducted in AMD labs configured with 2 x Xeon 4114 CPU’s (2 x $694 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-64

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 62 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7351P SOC ($750 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 47.7 in tests conducted in AMD labs configured with 2 x Xeon 4108 CPU’s (2 x $417 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2400), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.

NAP-65

Estimates based on SPECrate® 2017_int_base using the GCC-02 v7.2 compiler. AMD-based system scored 54 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 1 x AMD EPYC 7281 SOC ($650 each at AMD 1ku pricing), 256GB memory (8 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Intel R2224WFTZS server scored 32.8 in tests conducted in AMD labs configured with 2 x Xeon 3106 CPU’s (2 x $306 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz running at 2133MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to default settings. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org.
