Supercomputer Simulations Allow Researchers to Understand Characteristics of Diamonds

January 6, 2017

Jan. 6, 2017 — For centuries diamonds have been revered for their strength, beauty, value and utility. Now a team of researchers from Argonne National Laboratory, running molecular dynamics calculations at the Argonne Leadership Computing Facility (ALCF) and Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), is finding additional reasons to celebrate this complex material—and it has nothing to do with color, cut or clarity.

In a series of papers published in Science, Nature and Nature Communications, experimentalists and computational scientists from Argonne’s Center for Nanoscale Materials (CNM) shared several “firsts” in their ongoing efforts to uncover new characteristics of diamond and diamond-like carbons that make these materials even more attractive, particularly for industrial applications.

For example, the Nature study, published in August 2016, highlights their discovery of a revolutionary diamond-like film that is generated by the heat and pressure of an automotive engine. This ultra-durable, self-lubricating tribofilm (a film that forms between moving surfaces) could have profound implications for the efficiency and durability of future engines and other moving metal parts, which could be made to develop their own self-healing, diamond-like carbon films.

The phenomenon was first discovered several years ago through experiments conducted by researchers in the Tribology and Thermal-Mechanics Department in Argonne’s Center for Transportation Research. But it took theoretical insight using supercomputing resources to fully understand what was happening in the experiments. Argonne nanoscientist Subramanian Sankaranarayanan and postdoctoral researcher Badri Narayanan ran molecular dynamics simulations on Argonne’s Mira system and NERSC’s Edison system to probe the process at the atomic level. These calculations helped them determine that the catalyst metals in the nanocomposite coatings were stripping hydrogen atoms from the hydrocarbon chains of the lubricating oil, then breaking the chains down into smaller segments. The smaller chains then joined together under pressure to create the highly durable diamond-like carbon (DLC) tribofilm.
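
For readers who want a concrete picture of what such a molecular dynamics calculation involves: at its core, an MD code repeatedly evaluates interatomic forces and integrates Newton’s equations of motion, timestep by timestep. The sketch below is purely illustrative (it is not the Argonne team’s production code, which must capture bond breaking and so requires far more elaborate interatomic potentials than this toy model). It shows the velocity Verlet integration scheme common to classical MD packages, applied to a small Lennard-Jones system in reduced units; all parameters here are hypothetical.

```python
# Illustrative only: a toy classical MD loop, not the Argonne team's code.
# Velocity Verlet integration of a small Lennard-Jones cluster in reduced
# units (sigma = epsilon = mass = 1). Real tribofilm simulations require
# reactive potentials that can break and form chemical bonds.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces on every particle."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]              # separation vector
            d2 = np.dot(r, r)
            inv6 = (sigma * sigma / d2) ** 3
            # LJ force directed along the separation vector
            f = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / d2 * r
            forces[i] += f
            forces[j] -= f
    return forces

def run_md(pos, vel, dt=0.005, steps=200, mass=1.0):
    """Advance the system with the velocity Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos += vel * dt + 0.5 * (f / mass) * dt * dt
        f_new = lj_forces(pos)
        vel += 0.5 * ((f + f_new) / mass) * dt
        f = f_new
    return pos, vel

# Toy system: 27 atoms on a slightly jittered cubic lattice near the
# LJ equilibrium spacing (~1.12 sigma).
rng = np.random.default_rng(0)
pos = 1.15 * np.array([[i, j, k] for i in range(3)
                       for j in range(3) for k in range(3)], dtype=float)
pos += rng.normal(0.0, 0.01, pos.shape)
vel = rng.normal(0.0, 0.1, pos.shape)
pos, vel = run_md(pos, vel)
```

Production codes do this same bookkeeping for millions of atoms, with the force evaluations decomposed across thousands of compute nodes, which is why leadership-class systems like Mira and Edison matter for studies such as these.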

“This is an example of catalysis under extreme conditions created by friction. It is opening up a new field where you are merging catalysis and tribology, which has never been done before,” said Sankaranarayanan. “This new field of tribocatalysis has the potential to change the way we look at lubrication.”

In the Nature Communications study, published in July 2016, a team of Argonne and University of California, Riverside researchers once again used a combination of experiments and molecular dynamics simulations to demonstrate how diamond—in this case, ultrananocrystalline diamond serving as a substrate—can be used to grow graphene with relatively few impurities, at lower cost, in less time and at lower temperatures than the process widely used to make graphene today. Current fabrication protocols introduce impurities both during the etching process itself, which involves adding acid and extra polymers, and when the graphene is transferred to a different substrate for use in electronics. These impurities degrade the electronic properties of the graphene, the researchers noted.

The simulations—which were developed by Sankaranarayanan and his postdoctoral researchers, Badri Narayanan and Sanket Deshmukh, and used 300,000 to 500,000 node hours at NERSC in addition to computing time at Argonne—helped the team understand the molecular-level processes underlying graphene growth. They ran three different sets of calculations on NERSC’s Edison supercomputer to tease out the sequence of events leading to graphene nucleation on nickel and to determine what kinds of graphene structures can grow on different crystal orientations.
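
The article does not specify how those sets of runs were organized, but a campaign like this is typically scripted as a parameter sweep. The following is a hypothetical sketch (the directory layout, file names, input format and parameter values are invented for illustration, not the team’s actual setup) of how one might stage MD runs across nickel surface orientations and growth temperatures:

```python
# Hypothetical staging script for a sweep of graphene-growth MD runs.
# The facets, temperatures, file names and input format are invented
# for illustration; they are not the Argonne team's actual setup.
from pathlib import Path

facets = ["Ni111", "Ni100", "Ni110"]   # common low-index nickel surfaces
temps_k = [800, 1000, 1200]            # illustrative growth temperatures

for facet in facets:
    for temp in temps_k:
        workdir = Path("runs") / f"{facet}_{temp}K"
        workdir.mkdir(parents=True, exist_ok=True)
        # Each run would deposit carbon onto an equilibrated substrate and
        # track nucleation events over nanoseconds of simulated time.
        (workdir / "input.deck").write_text(
            f"surface = {facet}\n"
            f"temperature_K = {temp}\n"
            f"timestep_fs = 0.25\n"
            f"steps = 4000000\n"        # 0.25 fs x 4e6 steps = ~1 ns
        )
```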

“NERSC is a very good resource to have because it allows the flexibility to do intermediate, production-run calculations,” Sankaranarayanan said. “In this example, you have a lot of things happening mechanistically, and the experimentalists have an end point and the time scales involved are quite fast. But they have not yet reached a stage where in situ experiments can be performed on these kinds of rapidly evolving interfaces, and they want to understand the dynamics of what is happening at the nanosecond and microsecond time scales. It is this dynamical evolution that the experimentalists want us to simulate.”

In an earlier, related study published in Science, the Argonne team described how a series of molecular dynamics simulations paved the way for the design of a near-frictionless hybrid material. The research team again used a combination of experiments and simulations to demonstrate that superlubricity can be realized at engineering scale when graphene is combined with nanodiamond particles and diamond-like carbon. Considering that nearly one-third of the fuel in a typical automobile’s tank is spent overcoming friction, a material that achieves superlubricity would greatly benefit industry and consumers alike.

“The beauty of this particular discovery is that we were able to see sustained superlubricity at the macroscale for the first time, proving this mechanism can be used at engineering scales for real-world applications,” Sankaranarayanan said. “It was really a big breakthrough that purely came out of calculations that we did initially at NERSC and then at ALCF.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.


Source: Kathy Kincade, NERSC and Berkeley Lab
