HPC Reflections and (Mostly Hopeful) Predictions

By John Russell

December 19, 2018

So much ‘spaghetti’ gets tossed on walls by the technology community (vendors and researchers) to see what sticks that it is often difficult to peer through the splatter; in 2018 it felt like everyone was strong-arming AI offerings. The danger is not so much disappointing products; it’s that the AI idea becomes so stretched as to lose meaning. So it was something of a surprise, at least for me, that amid the endless spray of AI announcements last year, something fascinating happened – artificial intelligence (not the HAL kind) started irreversibly transforming HPC for scientists almost as quickly as the marketing hype predicted. I like the succinct quote below posted in October on the ORNL web site.

“Emergence of AI is a very rare type of event,” said Sergei Kalinin, director of ORNL’s Institute for Functional Imaging of Materials. “Once in a generation there is a paradigm shift in science, and this is ours.” Kalinin and his colleagues use machine learning to better analyze data streams from the laboratory’s powerful electron and scanning probe microscopes. https://www.ornl.gov/blog/ornl-review/ai-experimentalist-s-experience

How we do science is changing. AI (writ large) is the change. Don’t get the wrong idea. First-principle modeling and simulation is hardly defunct! And 2018 was momentous on many fronts – Big Machines (think Summit/Sierra for starters); Flaring of a Real Processor War (we’ll get to it later); Quantum Something (at least in spending); Rehabilitation of U.S. DoE Secretary Rick Perry (knew you’d come around Rick); Nvidia’s Continued Magic (DGX-2 and T4 introductions), Mergers & Acquisitions (IBM/Red Hat, Microsoft/GitHub) and Inevitable Potholes (e.g. Meltdown/Spectre and other nasty things). It was a very full year.

But the rise of AI in earnest is the key feature of 2018. Let’s not quibble about exactly what constitutes AI – broadly, it encompasses deep learning (neural networks), machine learning, and a variety of data analytics. One has the sense it will self-identify in the hands of users like Dr. Kalinin. Whatever it is, it’s on the verge of transforming not only HPC but all of computing. With regrets to the many subjects omitted (neuromorphic computing and the Exascale Computing Project’s (ECP) admirable output are just two) and apologies for the rapid-fire treatment (less technical) of topics tackled, here are a few reflections on 2018’s frenetic rush and thoughts on what 2019 may bring. Included at the end of each section are a few links to articles on the topic appearing throughout 2018.

  1. Congratulations IBM and Brace for the March of New BIG Machines

Let's start with the excellent job done by IBM, its collaborators Nvidia and Mellanox (and others), and the folks at the Oak Ridge Leadership Computing Facility (OLCF) and Lawrence Livermore Computing Center (LC) in standing up the Summit (an IBM AC922 system) and Sierra supercomputers. Summit and Sierra are, for now, the top performers on the Top500 list. Just as important is the science they are already doing (they produced the 2018 Gordon Bell winner and a number of GB finalists). Both systems also reinforce the idea that heterogeneous architectures will likely dominate near-term supercomputers.

IBM has taken lumps in this yearly wrap-up/look-ahead column. This year Big Blue (and partners) deserves a victory lap for these fantastic machines. Designing and standing up leadership machines isn’t for the faint-hearted – ask Intel about Aurora. IBM has plenty of challenges in the broader HPC server market which we’ll get to later.

Supercomputing is a notoriously boom-bust game. Right now, it’s booming driven largely by the global race to exascale computing and also by efforts to create large-scale compute infrastructures able to deliver AI capabilities. Japan’s just-completed AI Bridging Cloud Infrastructure (ABCI) is a good example of the latter. Proficiency at AI is likely to be a requirement for big machines going forward.

Barring unforeseen economic dips (at one point while writing this piece the Dow was down 500 points) the supercomputing boom will continue with a fair number of pre-exascale and exascale class machines under development worldwide and other leadership class or near-leadership class systems also in the pipeline or expected soon. Hyperion Research is forecasting supercomputer spending to basically double from $4.8B in 2017 to $9.5B in 2022. Let the good times roll while they may!

How well the U.S. is doing overall in the supercomputer fray is a matter of debate.

HPCwire noted in its coverage of the Top500 list at SC18 that, “China now claims 229 systems (45.8 percent of the total), while U.S. share has dropped to its lowest ever: 108 systems (21.6 percent). That wide delta in system count is offset by the U.S. having the top two systems and generally operating more powerful systems (and more real HPC systems, as opposed to Web/cloud systems), allowing the U.S. to enjoy a 38 percent performance share, compared to China’s 31 percent. Related to the rise in these non-HPC systems, Gigabit Ethernet ropes together 254 systems. 275 systems on the list are tagged as industry.”

There will always be debate over the value of the Top500 as a metric. Indeed there’s a good deal more to say about supercomputing generally. New architectures are coming: Cray recently unveiled its new Shasta line, the HPC community continues speculating over what Aurora’s architecture will look like, and there’s even a paper out of China with ideas for reaching zettascale.

Instead of hammering away at further big machine dynamics, stop and enjoy the standing up of Summit and Sierra for a moment.

SUPERCOMPUTING LOOK-BACK 2018

US Leads Supercomputing with #1, #2 Systems & Petascale Arm

GW4, the MET Office, and Cray Power Up the Largest ARM-based Supercomputer in Europe

Summit Supercomputer is Already Making its Mark on Science

Berkeley Lab, Oak Ridge National Lab Share 2018 ACM Gordon Bell Prize

Zettascale by 2035? China Thinks So

HPC Under the Covers: Linpack, Exascale & the Top500

Australia Pumps Another $70 Million Into Tier-1 Supercomputing

GPUs Power Five of World’s Top Seven Supercomputers

Lenovo to Debut ‘Neptune’ Cooling Technologies at ISC

TACC’s ‘Frontera’ Supercomputer Expands Horizon for Extreme-Scale Science

 

  2. Handicapping the Arm(s) Race as Processor Wars Flare Up

One is tempted to write, “The King is Dead. Long Live the King” following Intel’s discontinuance of its Knights Landing line. That’s unfair. A knight is not a king and KNL was never king, nor perhaps, ever intended to be. Although Intel has travelled a more rugged landscape than usual this year, it is still the dominant force in processors. However, KNL’s misfortune does reflect the increasingly competitive fast-moving processor market. For the first time in years a real war among diverse processor alternatives is breaking out.

AMD’s Epyc line, launched in June 2017, now has around two percent of the x86 server market, according to IDC. Last June, Intel’s then-CEO Brian Krzanich worried to analysts about how to keep AMD from capturing 15-20 percent of the market. Don’t snicker. He was right to worry. AMD is on a roll.

AMD has scored wins among major system and cloud providers. It’s in the game, and often enjoys a price advantage. In October, Cray unveiled its next-gen supercomputing architecture, Shasta, which was selected to be the next flagship system at NERSC. Named “Perlmutter” (after Nobel Prize-winning astrophysicist Saul Perlmutter), the system will feature AMD Epyc processors and Nvidia GPUs offering a combined peak performance of ~100 petaflops and a sustained application performance equivalent to about 3X that of the Cray Cori (NERSC-8) supercomputer.

Moving on. Arm, so dominant in the mobile market, has struggled to achieve traction in HPC and the broader server market. Lately, the arrival of 64-bit chips is changing attitudes, helped by the filling out of the tool/support ecosystem.

Dell EMC HPC chief Thierry Pellegrino told HPCwire at SC18, “[J]ust like other OEMs out there – we had a SKU available that was 32-bit and didn’t really sell. But I think we are not one of those OEMs that will go out there and just design it and hope people will go and buy it. We depend upon our customers. I can tell you historically customers have asked questions about Arm but have not been very committal. Those discussions are now intensifying…The TX2 (Marvell/Cavium ThunderX2, available last May) looks good and the ThunderX3 roadmap looks great but they aren’t the only ones supplying Arm. Fujitsu has an offering. We also see Ampere with an offering coming up.”

Arm is winning a few key systems deals and turning up in some big machines, such as the Astra system from HPE for Sandia National Lab and in CEA’s selection of an Arm-based system from Atos. The Isambard System at the University of Bristol is another Arm-based large system (Cray XC50) and, of course, Japan’s post K supercomputer is based on an Arm chip from Fujitsu. Arm is slowly insinuating itself into the server (big and small) market. Cray, for example, has been promoting an on-demand webinar entitled, “Embrace Arm for HPC Confidently with the Cray Programming Environment.”

Then there’s IBM’s Power9. IBM is riding high on the success of Summit and Sierra. Its challenge is winning traction for Power in the broader server market. Here’s Pellegrino again: “I think right now we are very busy and focused on Intel, x86, and Arm. It’s not impossible that Power could become more relevant. We are always looking at technologies. The Power-Nvidia integration was a pretty smart move and we’ve seen some clusters won by Power. But it’s not an avalanche. I think it works great for purposeful applications. For general purpose, I think it’s still looked at as [less attractive] than AMD, Intel, and Arm.”

The overall picture is growing clearer. AMD and Arm will take some market share from Intel. It’s no doubt important that AMD’s Rome line (now sampling) impress potential buyers. So far AMD’s return to the datacenter has been largely free of missteps. To some extent, IBM still has to prove itself (price competitive and easy to use) but is making progress and selling systems. Intel, of course, remains king, but fate can move quickly.

Bottom line: Processor alternatives are available and for the first time in a long time, the market seems interested.

PROCESSOR LOOK-BACK 2018

Requiem for a Phi: Knights Landing Discontinued

Intel Confirms 48-Core Cascade Lake-AP for 2019

AMD’s EPYC Road to Redemption in Six Slides

Sandia to Take Delivery of World’s Largest Arm System

IBM at Hot Chips: What’s Next for Power

Cray Unveils Shasta, Lands NERSC-9 Contract

AMD Sets Up for Epyc Epoch

AWS Launches First Arm Cloud Instances

Nvidia T4 and HGX-2 Deployments Keep Piling Up

Dell EMC’s HPC Chief on Strategy and Emerging Processor Diversity

 

  3. After the CERN AI ‘Breakthrough’, Scientific Computing Won’t be the Same

No startling predictions here. Even so, AI is not only next year’s poster child but likely the poster child for the next decade as we work toward understanding its potential and developing technologies to deliver it. That said, because AI is being adopted or at least tested in so many different venues and applications, charting its many-veined course forward is challenging. Accelerator-driven, heterogeneous architectures with advanced mixed-precision processing capabilities are just the start, and mostly in top-of-the-line scientific computing systems. Embedded systems are likely to show greater AI variety.

A watershed moment of sorts occurred over the summer when work by CERN scientists was awarded a best poster prize at ISC18 for demonstrating that AI-based models have the potential to act as orders-of-magnitude-faster replacements for computationally expensive tasks in simulation. Their work is part of a CERN openlab project in collaboration with Intel. That project is just one of many projects demonstrating AI effectiveness in scientific computing. The CANcer Distributed Learning Environment (CANDLE) project is another. AI tools developed by CANDLE will find use across a broad range of DoE missions.

Events are moving fast. You may not know, for example, there’s a bona fide effort underway to develop a Deep500 benchmark, led by Torsten Hoefler and Tal Ben-Nun of ETH in close collaboration with other distinguished researchers such as Satoshi Matsuoka, director of Japan’s RIKEN Center for Computational Science.

“We are organizing a monthly meeting with leading researchers and interested parties from the industry. The meetings are open and posted on the Deep500 website (https://www.deep500.org/). Following that, the next step is to establish a steering committee for the benchmark. It is imperative that we fix the ranking and metrics of the benchmark, as the community is undecided right now on several aspects of this benchmark (see below). We intend to make considerable progress this year, reconvene at SC19,” Ben-Nun told HPCwire.

More than just bragging rights, such a benchmark may have eminently practical uses. Matsuoka described the difficult effort he and colleagues had developing procurement metrics for the ABCI system. HPCwire will have coverage of the emerging Deep500 Benchmark effort and its BOF session at SC18 early in the new year.

The user community does seem hungry for comparison metrics – an HPCwire article on the broader MLPerf standard’s introduction in May, led in part by Google, Baidu, Intel, AMD, Harvard, and Stanford, was one of the highest read articles of the year. Just last week Nvidia loudly trumpeted its performance on the first round of results released by the seven-month-old standard. (Yes, Nvidia fared well.)

AI’s challenges are mostly familiar. Model training is notoriously difficult. Required datasets are often massive and sometimes remote from the compute resource. Pairing CPUs with GPUs is the most common approach. No surprise, Nvidia has jumped onto AI as a best use of its GPUs (scale up and scale out), DGX-2 computer, and assorted software tools, including containerized apps, verticalized stacks, and code compatibility across its products. Intel is likewise driving a stake deep into AI territory with chips and SOCs. Intel is also working intensely on neuromorphic technology (the Loihi chip) which may eventually deliver greater deep learning efficiency and lower power consumption.

All of the systems houses – Dell EMC, HPE, IBM, Lenovo, Supermicro, etc. – have ‘AI’ solutions of one flavor or another. Cloud providers and social networks, of course, have been deep into AI writ large for years, busily developing deep learning, machine learning, and data analytics expertise and often sharing their learnings in open source. It’s a virtuous cycle since they are all also heavy consumers of ‘AI’.

It’s really not clear yet how all of this will shake out. Heck, quantum computer pioneer D-Wave launched a machine learning business unit this year. Don’t ask me exactly what it does. What does seem clear is that AI technologies will take on many new tasks and, at least in HPC, increasingly work in concert with traditional modeling and simulation.

Prediction: Next year’s Gordon Bell prize finalists will likely (again) include some AI-driven surprises.

AI LOOK-BACK 2018

CERN Project Sees Orders-of-Magnitude Speedup with AI Approach

TACC Releases Special Report on Artificial Intelligence

ANL’s Rick Stevens on CANDLE, ARM, Quantum, and More

Nvidia Leads Alpha MLPerf Benchmarking Round

MLPerf – Will New Machine Learning Benchmark Help Propel AI Forward?

Nvidia’s Jensen Huang Delivers Vision for the New HPC

Rise of the Machines – Clarion Call on AI by U.S. House Subcommittee

At HPC on Wall Street: AI-as-a-Service Accelerates AI Journeys

Intel Announces Cooper Lake, Advances AI Strategy

AI-Focused ‘Genius’ Supercomputer Installed at KU Leuven

Deep Neural Network from University of Illinois Accelerates aLIGO Research

Neural Networking Shows Promise in Earthquake Monitoring

IBM Adds V100 GPUs to IBM Cloud; Targets AI, HPC

Intel Pledges First Commercial Nervana Product in 2019

 

  4. Quantum’s Haze…Are We There Yet? No!

Where to start?

The $1.2 billion U.S. National Quantum Initiative, first passed by the House of Representatives in September, was finally passed by the Senate on Dec. 14. It’s expected to reach the president’s desk by year end and to be signed. It’s a ten-year program covering many aspects of fostering a quantum computing ecosystem. And yes, it is driven in part by geopolitical worries of falling behind in a global quantum computing race. Indeed there are several other like-minded efforts around the globe.

Intel’s director of quantum hardware, Jim Clarke, holds the new 17-qubit superconducting test chip. (Credit: Intel Corporation)

Jim Clarke, director of quantum hardware, Intel Labs, issued a statement in support back when the House acted on the bill: “This legislation will allocate funding for public research in the emerging area of Quantum Computing, which has the potential to help solve some of our nation’s greatest challenges through exponential increases in compute speed. [We] look forward to working with leaders in the Senate to help keep the U.S. at the cutting edge of quantum information science and maintain the economic advantages of this technological leadership.”

HPCwire reported then, “As spelled out in the bill, 1) National Institute of Standards and Technology (NIST) Activities and Workshops would receive $400 million (2019-2023 at $80 million per year); 2) National Science Foundation (NSF) Multidisciplinary Centers for Quantum Research and Education would receive $250 million (2019-2023, at $50 million per year); and 3) Department of Energy Research and National Quantum Information Science Research Centers would receive $625 million (2019-2023 at $125 million per year).”
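The three line items in the bill’s first five years account for the figure cited for the initiative. A quick sanity check on the arithmetic (the per-year amounts are from the quoted report above):

```python
# Authorized funding lines from the National Quantum Initiative Act,
# FY2019-2023, in millions of dollars (figures quoted above).
nist = 80 * 5    # NIST activities and workshops: $80M/year -> $400M
nsf = 50 * 5     # NSF multidisciplinary centers: $50M/year -> $250M
doe = 125 * 5    # DOE research and QIS research centers: $125M/year -> $625M

total_millions = nist + nsf + doe
print(total_millions)  # 1275 -> the ~$1.2B ($1.275B) total for the initiative
```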

It’s a big, whole-of-government program and as in all such things the devil will be in the details.

Meanwhile, a report released on December 5th by the National Academies of Sciences, Engineering, and Medicine (Quantum Computing: Progress and Prospects) declares robust, error-corrected quantum computers won’t be practical for at least a decade! Until then, according to the report, noisy intermediate-scale quantum computers (NISQ) will have to carry the load and, adds the report, no one is quite sure what NISQs will actually be able to do.

OK then.

A little like Schrödinger’s cat, quantum computing is alive or not, depending upon which report you look at (bit of a stretch, I know). By now most of the HPC community is familiar in broad terms with quantum computing’s potential. Many are dabbling already in QC. I attended an excellent workshop at SC18 led by Scott Pakin (Los Alamos National Laboratory) and Eleanor Rieffel (NASA Ames Research Center) and one of our exercises, using an online toolbox, was to build a two-bit adder using gate-based quantum computing code. The show of hands denoting success at the task was not overwhelming.
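For readers curious what such an exercise involves, here is a rough sketch – not the workshop’s actual materials – of the building block behind a quantum adder: a half adder composed from the reversible CNOT and Toffoli gates of gate-model circuits, simulated classically on basis states (no superposition, so plain Python suffices):

```python
# Hypothetical sketch: simulate reversible quantum gates on classical
# basis states. On |0>/|1> inputs, CNOT and Toffoli act as reversible
# classical logic, which is all an adder circuit needs.
def cnot(state, control, target):
    """Flip the target bit if the control bit is 1 (CNOT on a basis state)."""
    s = list(state)
    if s[control]:
        s[target] ^= 1
    return tuple(s)

def toffoli(state, c1, c2, target):
    """Flip the target bit if both control bits are 1 (CCNOT/Toffoli gate)."""
    s = list(state)
    if s[c1] and s[c2]:
        s[target] ^= 1
    return tuple(s)

def half_adder(a, b):
    """Add bits a and b reversibly; register layout: (a, b, sum, carry)."""
    state = (a, b, 0, 0)
    state = toffoli(state, 0, 1, 3)  # carry = a AND b
    state = cnot(state, 0, 2)        # sum ^= a
    state = cnot(state, 1, 2)        # sum ^= b  ->  sum = a XOR b
    return state[2], state[3]        # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```

A full two-bit adder chains such stages with carry propagation; on real quantum hardware the same gate sequence also acts correctly on superpositions of inputs, which is where the classical simulation above stops being adequate.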

Quantum computing is different and tantalizing and needs pursuing.

There is so much ongoing activity in quantum computing that it’s very possible the sober near-term outlook presented in the NASEM report is too pessimistic. At least three vendors – IBM, D-Wave Systems, and Rigetti Computing – have launched web-based platforms providing access to tools, instruction, and quantum processors. The (good) idea here is to jump-start a community of developers working on quantum applications and algorithms. It seems likely other notable quantum pioneers such as Microsoft, Google, and Intel will follow suit with their own web-based quantum computing sandboxes.

Also the chase for quantum supremacy has been joined by quantum advantage. I rather like the NASEM report’s thoughts here:

“Demonstration of ‘quantum supremacy’—that is, completing a task that is intractable on a classical computer, whether or not the task has practical utility—is one [milestone]. While several teams have been focused on this goal, it has not yet been demonstrated (as of mid-2018). Another major milestone is creating a commercially useful quantum computer, which would require a QC that carries out at least one practical task more efficiently than any classical computer. While this milestone is in theory harder than achieving quantum supremacy—since the application in question must be better and more useful than available classical approaches—proving quantum supremacy could be difficult, especially for analog QC. Thus, it is possible that a useful application could arise before quantum supremacy is demonstrated.”

Rigetti is offering a $1 million prize to the first group or individual to demonstrate quantum advantage using its web platform.

Overall, there are many smart, experienced researchers working on quantum computing writ large, and that includes many application areas (computing, sensors, communications, etc.). I like how John Martinis, who leads Google’s quantum effort and is a UC Santa Barbara researcher, put it during the Q&A at the release of the NASEM report, which he helped write. He’s also a former HPCwire Person to Watch (2017):

“Progress in the field has been quite good in the last few years, and people have been able not just to do basics physics experiments but [also] to start building quantum computing systems. I think there’s a lot more optimism that people can build things and get it to work properly. Of course there’s lot of work to be done to get them to work well and match them to problems but the pace [of progress] has picked up and there’s interesting things that have come out. I think that in the next year or two, [we] won’t get to solving actual problems yet but there will be a lot better machines out there,” said Martinis.

Somewhere, there’s a pony hidden in the quantum play room. Just don’t expect to find it in 2019. Listed below are links to just a few of the many articles HPCwire has published on QC this year. Also, the NASEM report isn’t a bad reference and is free to download.

QUANTUM LOOK-BACK ON 2018

House Passes $1.275B National Quantum Initiative

Robust Quantum Computers Still a Decade Away, Says Nat’l Academies Report

Europe Launches Ten-year, €1B Quantum Flagship Project

Hyperion Tackles Elusive Quantum Computing Landscape

Rigetti (and Others) Pursuit of Quantum Advantage

D-Wave Is Latest to Offer Quantum Cloud Platform

IBM Expands Quantum Computing Network

NSF Launches Quantum Computing Faculty Fellows Program

Google Charts Two-Dimensional Quantum Course

 

  5. Too Little Space for So Many Worthwhile Items

We start our roundup with remembrance of three prominent members of the HPC and science communities. In March, legendary physicist Stephen Hawking died at age 76. Hawking made lasting contributions in many areas, advanced cosmology as a computational science, and led the launch of several UK supercomputers dedicated to cosmology and particle physics. In April, computer pioneer Burton J. Smith passed away at age 77. He was an MIT and Microsoft alum, a renowned parallel computing expert, and a leader in the HPC community. His 2007 ISC keynote detailed how computing would be reinvented for the multicore era. A third loss of note was Bob Borchers, one of the founders of the Supercomputing Conference, who died in June. Among his many accomplishments, Borchers served as Director of the Division of Advanced Scientific Computing at the National Science Foundation (NSF).

There are always a few eye-catching M&As. Microsoft’s $7.5 billion gobbling up of GitHub in June is still being closely watched. Several analysts at the time said the move reaffirms Microsoft’s commitment to open-source development. We’ll see. In October, IBM announced plans to purchase Linux powerhouse Red Hat for $34 billion. Probably too soon to say much about the latter deal. Personnel shuffling is part of life in HPC (and everywhere). The wait continues for a new Intel CEO. That said, Intel snapped up Jim Keller in April from Tesla to lead Intel’s system-on-chip development efforts. Keller had been a leader in AMD’s x86 Zen architecture development and has worked extensively on Arm.

The Spaceborne Computer fully installed in the International Space Station

HPE’s Spaceborne Computer (based on the HPE Apollo 40) successfully completed its first year in space, demonstrating a system built with commercial off the shelf (COTS) parts could survive the rigors of space. Haven’t heard much lately from Pattern Computer which emerged from stealth in May sporting some familiar HPC names (Michael Riddle, James Reinders). In a nutshell, Pattern Computer says it has developed an approach to exploring data that permits very high dimensionality exploration in contrast to the pairwise approach that now dominates. It hasn’t spelled out details.

Some less praiseworthy moments: Daisuke Suzuki, GM of Pezy Computing, was sentenced in July to three years in prison, later reduced to a four-year suspended sentence. No word yet on Pezy President Motoaki Saito, also on trial. Both were indicted in late 2017 for defrauding the Japanese government of roughly $5.8 million (¥653 million) in 2014. In February, a group of nuclear scientists working at the All-Russian Research Institute of Experimental Physics (RFNC-VNIIEF) were arrested for using lab supercomputing resources to mine cryptocurrency, according to a report in Russia’s Interfax news agency.

It’s interesting what catches readers’ attention – an article about using DL to solve Rubik’s Cube received wide readership. Retro nostalgia? Questions about leading-edge semiconductor fabrication capacity are still percolating through the community following GlobalFoundries’ announcement that it has put its 7 nm node on hold and entirely stopped development of nodes beyond 7 nm. With GlobalFoundries shuttering development, there are now only three companies left in the game at 10/7 nm: TSMC, Samsung, and Intel. At both ISC18 and SC18, BeeGFS was drawing attention – it looks more and more like BeeGFS may become a viable option in the parallel file system market.

Chalk this up under Blasé but not Passé. Container technology has clearly gone mainstream in HPC (if that’s not an oxymoron); Sylabs released Singularity 3.0 in the fall. OpenHPC also continues forward. HPCwire ran a good interview with OpenHPC project leader Karl Schulz, who said, among other things, that OpenHPC is planning to offer more automated functions; it has already increased the number of recipes (roughly a dozen) and supports Singularity and Charliecloud.

Supermicro has countered a news story (“The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies”) that appeared in Bloomberg Businessweek claiming spies in China hacked Super Micro Computer servers widely distributed throughout the U.S. technology supply chain, including servers used by Amazon and Apple. Supermicro issued a report last week saying an investigation “found absolutely no evidence of malicious hardware on our motherboards.” Amazon also issued a denial, stating, “It’s untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications.” No doubt there’s a dark side to the world.

Leaving the dark side to others, Happy Holidays and a hopeful new year to all. On to 2019!


November 12, 2019

The number of top-tier HPC systems makers has shrunk due to a steady march of M&A activity, but there is increased diversity and choice of processing compon Read more…

By Tiffany Trader

Crystal Ball Gazing: IBM’s Vision for the Future of Computing

October 14, 2019

Dario Gil, IBM’s relatively new director of research, painted a intriguing portrait of the future of computing along with a rough idea of how IBM thinks we’ Read more…

By John Russell

Intel Debuts New GPU – Ponte Vecchio – and Outlines Aspirations for oneAPI

November 17, 2019

Intel today revealed a few more details about its forthcoming Xe line of GPUs – the top SKU is named Ponte Vecchio and will be used in Aurora, the first plann Read more…

By John Russell

Dell Ramps Up HPC Testing of AMD Rome Processors

October 21, 2019

Dell Technologies is wading deeper into the AMD-based systems market with a growing evaluation program for the latest Epyc (Rome) microprocessors from AMD. In a Read more…

By John Russell

D-Wave’s Path to 5000 Qubits; Google’s Quantum Supremacy Claim

September 24, 2019

On the heels of IBM’s quantum news last week come two more quantum items. D-Wave Systems today announced the name of its forthcoming 5000-qubit system, Advantage (yes the name choice isn’t serendipity), at its user conference being held this week in Newport, RI. Read more…

By John Russell


