Some wearable electronics—like sensors sewn into fabrics, or electronic “skins” applied directly to the body—rely on the development of new, durable, stretchable electronic materials. One way to enhance the elasticity of these often-delicate materials is by introducing strategic cuts into them, creating a stretchable mesh. Recently, a team of researchers from the University of Southern California approached this materials design problem with inspiration from kirigami, the Japanese art of paper cutting—and used supercomputing power to make their approach possible.
“Origami or kirigami design based on the ancient paper crafting technique are employed to change the mechanical behavior of 2D materials,” the authors wrote in their paper. “For example, graphene is brittle in nature but its flexibility can be substantially enhanced by introducing cut patterns in the graphene sheet, thereby enabling stretchable electronics.”
The research team combined these kirigami-inspired cuts with autonomous reinforcement learning, aiming to optimize a layout of cuts in a 2D structure of molybdenum disulfide (MoS2) for maximum stretchability. “The question is, can we use a similar behavior in materials design, like in this kirigami, where your objective is to create a more structured material that is highly stretchable, one cut at a time,” explained Pankaj Rajak, a lead researcher on the project, in an interview with John Spizzirri of the Argonne Leadership Computing Facility (ALCF). “It’s a smart strategy for figuring out where the cuts should go.”
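The “one cut at a time” framing maps naturally onto reinforcement learning: the state is the set of cuts made so far, an action adds one cut, and the reward is the stretchability of the finished design. The team’s actual agent scores candidate designs with simulations of the MoS2 sheet; the sketch below is only a toy illustration of that episode structure, substituting a made-up reward (favoring well-spaced cuts on a 1-D strip) and simple tabular Q-learning. Every name and number here is an assumption for illustration, not from the paper.

```python
import random

random.seed(0)
N_SITES, N_CUTS = 8, 3          # toy 1-D strip with 8 candidate cut sites
EPISODES, ALPHA, GAMMA, EPS = 3000, 0.2, 0.9, 0.3

def reward(cuts):
    """Toy stand-in for stretchability: score the widest minimum gap
    between cuts. (The real work scores designs with MoS2 simulations.)"""
    ordered = sorted(cuts)
    return min(b - a for a, b in zip(ordered, ordered[1:]))

Q = {}  # state (frozenset of cuts placed so far) -> {cut site: value}

for _ in range(EPISODES):
    state = frozenset()
    for step in range(N_CUTS):
        actions = [a for a in range(N_SITES) if a not in state]
        q = Q.setdefault(state, {})
        if random.random() < EPS:                       # explore
            a = random.choice(actions)
        else:                                           # exploit
            a = max(actions, key=lambda x: q.get(x, 0.0))
        nxt = state | {a}
        r = reward(nxt) if step == N_CUTS - 1 else 0.0  # terminal reward only
        future = max(Q.get(nxt, {}).values(), default=0.0)
        old = q.get(a, 0.0)
        q[a] = old + ALPHA * (r + GAMMA * future - old)  # Q-learning update
        state = nxt

# Greedy rollout: place cuts one at a time using the learned values.
best_layout = frozenset()
for _ in range(N_CUTS):
    q = Q.get(best_layout, {})
    actions = [a for a in range(N_SITES) if a not in best_layout]
    best_layout |= {max(actions, key=lambda x: q.get(x, 0.0))}
best_score = reward(best_layout)
```

The design choice worth noting is the sequential formulation: rather than evaluating whole layouts at random, the agent builds each design cut by cut, which is what lets it learn *where* the next cut should go.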
To feed the reinforcement learning algorithm, Rajak and his colleagues ran 98,500 simulations of the material with a range of cuts (one to six) at different lengths. The simulations were run over the course of several months on Theta, an Argonne supercomputer that delivers 6.9 Linpack petaflops and ranked 70th on the most recent Top500 list.
“You could have two hundred people each doing five experiments a day for one month collecting the data on different cuts,” said Priya Vashishta, another member of the research team. “It would be very expensive for material and for time. But in this case, the model was reasonably good and produced data that was very similar to experimental data.”
Based on this six-cut simulation data, the reinforcement learning algorithm learned to predict eight- and ten-cut structures, opening up billions of possible combinations that would have taken far longer to simulate directly. “If it took [a] few months to do 98,500 simulations and you go three orders of magnitude higher, that is a lifetime,” Vashishta said.
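Vashishta’s “three orders of magnitude” point is easy to see with a quick count: the number of distinct ways to place k cuts on a grid of candidate sites grows combinatorially with k. The grid size below is a made-up illustration, not the paper’s actual design space.

```python
from math import comb

SITES = 50  # hypothetical number of candidate cut sites (illustrative only)

# Number of distinct k-cut layouts: "SITES choose k"
layouts = {k: comb(SITES, k) for k in (6, 8, 10)}
for k, n in layouts.items():
    print(f"{k:2d} cuts: {n:,} layouts")

# With these assumed numbers, ten-cut layouts outnumber six-cut layouts
# by roughly three orders of magnitude -- billions of candidates, far
# too many to simulate one by one.
```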
The algorithm performed admirably, producing within seconds a ten-cut structure that increased the material’s stretchability by 40 percent (pictured in the header). “It has figured out things we never told it to figure out,” Rajak said. “It learned something the way a human learns and used its knowledge to do something different.”
To learn more about this research, read the reporting from ALCF’s John Spizzirri here and read the research paper here.