Using AI and Robots to Advance Science

By Kevin Jackson

June 4, 2024

Even though we invented it, humans can be pretty bad at science. We need to eat and sleep, we sometimes let our emotions regulate our behavior, and our bodies are easily and irreparably damaged – all of which can stand in the way of scientific achievement.

Casey Stone
Credit: Argonne National Laboratory

Human researchers will always play a role in science, but recent developments out of Argonne National Laboratory make the case that we should let robots do some of the work. Specifically, Argonne researchers are working on what they refer to as “autonomous discovery.” The lab hopes to increase productivity in science by relying on physical robots programmed with versatile AI software.

Casey Stone, a computer scientist and bioinformatician at Argonne, recently gave a talk about autonomous discovery at an Argonne OutLoud public lecture.

“Autonomous discovery will help individual scientists conduct more experiments and reach results faster,” Stone said in an interview after her talk. “In complex and large-scale experiments, robotics can run the experiments overnight and the experiments can be parallelized across multiple copies of the same robot. This would free up scientists’ time and allow them to focus on coming up with other creative solutions or focusing their lab time on smaller scale investigations that might lead to new hypotheses.”

The concept of robots doing hard or boring work has enraptured science fiction writers for decades, but actually achieving it is a difficult task. Stone outlined the challenges currently facing researchers as well as the opportunities that autonomous discovery presents.

Software and Hardware Modularity

One of the main challenges facing scientists who want to dive into autonomous discovery is a need for a high amount of modularity in both the software and hardware involved. Stone pointed out that Argonne scientists are working on complex problems that span many areas of experimentation, and as such they need their robots to be as flexible as possible to adapt to the changing needs of an experiment.

For the hardware, Argonne places each robotic instrument on its own cart. Each cart contains all of the computing power and sensors needed to make the instrument work and ensure that it functions as designed. The beauty of this system is that each instrument is self-contained, so the scientists can unhook the cart from the rest of the instruments, roll it away, and roll in another instrument without disrupting the rest of the system.

PF400/300 Sample Handler Robotic Arms
Credit: Precise Automation

This modularity also lets scientists duplicate the instruments they need most. If a specific robot takes longer than the others in the process, researchers can hook additional copies of that robot into the system to parallelize that step and increase throughput.
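The throughput gain from duplicating a slow step is easy to sketch. The sketch below uses invented step names and times (not real Argonne figures) to show how a pipeline's rate is set by its slowest effective step, and how parallel copies of the bottleneck instrument raise it:

```python
# Hypothetical illustration: a pipeline's steady-state throughput is limited
# by its slowest step. Step names and times below are invented for the example.
step_times = {"liquid_handling": 5, "incubation": 30, "plate_reading": 5}  # minutes/plate

def throughput(copies_of_incubator: int) -> float:
    """Plates per hour when the incubation step runs on N parallel instruments."""
    effective = dict(step_times)
    effective["incubation"] = step_times["incubation"] / copies_of_incubator
    bottleneck = max(effective.values())  # slowest effective step dominates
    return 60 / bottleneck

print(throughput(1))  # 2.0 plates/hour: incubation is the bottleneck
print(throughput(3))  # 6.0 plates/hour: three incubators in parallel
```

With three incubators, the effective incubation time drops to 10 minutes per plate and throughput triples, exactly the parallelization Stone describes.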

Stone stated that the current cart system is just the first iteration of hardware modularity. She spoke about a future laboratory where humans don’t need to roll the instruments into place themselves.

“Instead, the instruments are located on mobile platforms that can roll themselves into formation based on the needs of an experiment,” Stone said. “In a situation like that, we could take advantage of optimization algorithms to arrange the instruments in the optimal way to complete the experiment as fast as possible.”
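One toy version of that optimization (not Argonne's actual algorithm, which Stone did not detail) is to brute-force the cart ordering that minimizes total sample-transfer distance for a given workflow. The instrument names, workflow, and line-layout assumption below are all invented for illustration:

```python
# A toy sketch of arranging mobile instrument carts: try every ordering along
# a line and keep the one that minimizes total hand-off travel distance.
# Instrument names and the workflow sequence are hypothetical.
from itertools import permutations

instruments = ["arm", "liquid_handler", "sealer", "reader"]
workflow = ["arm", "liquid_handler", "arm", "sealer", "reader"]  # hand-off order

def travel_cost(layout: tuple) -> int:
    """Total distance when carts sit at consecutive positions 0..n-1."""
    pos = {name: i for i, name in enumerate(layout)}
    return sum(abs(pos[a] - pos[b]) for a, b in zip(workflow, workflow[1:]))

best = min(permutations(instruments), key=travel_cost)
print(best, travel_cost(best))
```

Brute force works for a handful of carts; a real system with many instruments would need a proper combinatorial optimizer, since the number of layouts grows factorially.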

For software, Stone stated that the code for each instrument is contained in its own section of Argonne’s AD-SDL GitHub repository. For example, the code needed to control the PF400 robotic arm is kept separate from the code needed to control the OT-2 liquid handling robot.

Keeping this code separate for each instrument makes it easier to set up robotic labs because the researchers only need the code that is relevant to the setup of the instruments they currently want to use.

The combination of the instrument itself, the associated code from the GitHub repository, and the computers/sensors needed to allow the instrument to function is called a module. Stone stated that a module is a self-contained unit that can be added or removed from the overall robotic laboratory like a Lego brick.

Each module broadcasts certain information to the rest of the system – like what actions it’s able to complete, if it is ready to receive a command, and what resources it has available. Each module can receive commands, execute the command, and then indicate when the command has been completed. Then, a REST API server handles the distribution of experimental actions to the instruments in the correct order. The server waits for each command to complete before sending the next command.
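The module pattern Stone describes can be sketched in a few lines. All class and method names below are invented stand-ins (the real interfaces live in the AD-SDL repositories): each module reports its capabilities and readiness, and a coordinator dispatches actions strictly in order, waiting for each to complete before sending the next.

```python
# Minimal sketch of the self-contained "module" pattern: each instrument
# broadcasts what it can do and whether it is ready; a coordinator routes
# commands one at a time. Names are hypothetical, not the AD-SDL API.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    actions: set      # actions this instrument can perform
    busy: bool = False

    def about(self) -> dict:
        """What the module broadcasts to the rest of the system."""
        return {"name": self.name, "actions": sorted(self.actions), "ready": not self.busy}

    def run(self, action: str) -> str:
        if action not in self.actions:
            raise ValueError(f"{self.name} cannot perform {action!r}")
        self.busy = True
        # ... instrument-specific work would happen here ...
        self.busy = False
        return "completed"

def run_workflow(modules: list, steps: list) -> list:
    """Coordinator: dispatch each (module_name, action) step strictly in order."""
    registry = {m.name: m for m in modules}
    return [registry[name].run(action) for name, action in steps]

arm = Module("pf400", {"transfer"})
ot2 = Module("ot2", {"pipette"})
results = run_workflow([arm, ot2], [("pf400", "transfer"), ("ot2", "pipette")])
print(results)  # ['completed', 'completed']
```

Swapping one instrument for another then only means registering a different `Module` with the coordinator, which is the "minimal code changes" property Stone highlights.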

OT-2 liquid handling robot
Credit: Opentrons Labworks

“This way, each instrument functions completely independently, and the server is responsible for integrating them together,” Stone said. “If you remove one instrument and replace it with something else, there are minimal code changes needed to get the system up and running again.”

Stone also noted that scientists do not need to adopt the hardware modularity strategy to take advantage of the software modularity. These resources were designed to be versatile, and Argonne researchers are working to remove as many barriers to adoption as possible.

This mentality of sharing resources cuts to the heart of Argonne’s work with autonomous research. All of the software Argonne has developed here is open-source, and Stone underscored the collectivist nature of this work.

“As a national lab, our goal is to make discoveries and to spur innovation, rather than to profit from our scientific discoveries,” Stone said. “We try to make scientific advancements accessible and beneficial to the communities around us. Making our code open-source enables other groups to bring automated discovery into their scientific process, even if they may not have the funds to pay for the more expensive proprietary solutions for scientific instrument integration.”

While this dedication to the advancement of science itself is noble in its own right, Argonne scientists are helping themselves by helping others. By making this code open-source, researchers can develop a collective knowledge base around robotics and instrument integration. Any scientist who uses this code can contribute to the same software stack and build on the discoveries of others.

Humans Still Run the Show

This kind of autonomous research is exciting, but it’s important to note here that humans aren’t being written out of the scientific process. Stone stated that humans will still play a crucial role in every step of the research journey.

In Stone’s mind, an autonomous research experiment would begin with a human scientist formulating a research question or hypothesis. Then, the scientist would direct AI to train on relevant data. The researcher would have to check to make sure that the AI output is logical. Additionally, the scientist would perform tasks like fixing instrument errors or supplying more labware to the system.

Once the robotic experimentation is complete, a scientist would check that the data produced is of sufficient quality before the data is passed back to the AI to update the models. Finally, when the autonomous discovery loops have completed many rounds and reached a result, the researcher can further validate the results with manual tests.
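The human-in-the-loop cycle described above can be laid out as a loop skeleton. Every function below is a stub standing in for the real AI, robots, and human checks; the structure, not the implementations, is the point:

```python
# Schematic of the autonomous discovery cycle with human checkpoints.
# All functions are invented stubs illustrating where each actor fits.
def train_model(hypothesis):          return {"hypothesis": hypothesis, "rounds": 0}
def model_suggest(model):             return f"experiment-{model['rounds']}"
def human_approves(plan):             return True          # scientist sanity-checks AI output
def run_on_robots(plan):              return {"plan": plan, "ok": True}
def human_quality_check(data):        return data["ok"]    # scientist vets data quality
def update_model(model, data):        return {**model, "rounds": model["rounds"] + 1}
def validate_manually(model):         return model         # final manual validation

def autonomous_discovery(hypothesis: str, rounds: int = 3) -> dict:
    model = train_model(hypothesis)               # AI trains on relevant data
    for _ in range(rounds):
        plan = model_suggest(model)
        if not human_approves(plan):              # human checkpoint #1
            continue
        data = run_on_robots(plan)                # robots run, possibly overnight
        if human_quality_check(data):             # human checkpoint #2
            model = update_model(model, data)     # results feed back into the AI
    return validate_manually(model)               # human checkpoint #3

result = autonomous_discovery("antimicrobial peptides")
print(result["rounds"])  # 3 completed loop iterations
```

Note that the loop cannot advance past either checkpoint without a human sign-off, which matches Stone's framing that scientists remain in control of every step.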

A potential area that autonomous discovery could advance is the study of antimicrobial peptides. These are small proteins that help organisms like humans protect themselves from infections by acting like natural antibiotics.

Peptides are made up of a sequence of amino acids, and there are 20 common amino acids. If a scientist wanted to design an antimicrobial peptide just 10 amino acids long (which is short for an antimicrobial peptide), there would be 20¹⁰ possible peptide sequences. That is more than 10 trillion candidates to test.
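The arithmetic is worth spelling out: with 20 choices at each of 10 positions, the possibilities multiply.

```python
# 20 amino acid choices at each of 10 positions in the peptide.
n_sequences = 20 ** 10
print(n_sequences)           # 10240000000000
print(n_sequences > 10**13)  # True: more than 10 trillion sequences
```

Even at one experiment per second around the clock, exhaustively testing that space would take over 300,000 years, which is why narrowing the search matters.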

Of course, a knowledgeable scientist would be able to narrow down some of these possibilities, but the fact remains that it would be nearly impossible to run all the experiments necessary to reach an optimal result using traditional methods.

This is where autonomous discovery could be immensely helpful. The scientist could train the AI on large amounts of data related to known antimicrobial peptides and their sequences. Then, the AI would learn patterns in those sequences that might contribute to their effective antimicrobial nature. Thus, the AI software would narrow down the number of sequences to test. After this, the scientist could hand off the physical operation of these experiments to one of the robotic carts described above. If the researcher discovers a physical bottleneck, or if one of the carts isn’t working properly, they can swap hardware in and out as needed.
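The narrowing step can be sketched with a toy scorer. The scoring function below is an invented stand-in for a trained model (a real system would score sequences with AI trained on known antimicrobial peptide data); it simply favors residues, such as the cationic K and R, that are common in known antimicrobial peptides, then keeps the top candidates for the robots:

```python
# Toy stand-in for AI-driven candidate narrowing: score random 10-residue
# peptides with an invented heuristic and shortlist the most promising ones.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
FAVORED = set("KRLWF")  # residues common in known antimicrobial peptides

def toy_score(seq: str) -> float:
    """Fraction of favored residues; a real model would be far richer."""
    return sum(aa in FAVORED for aa in seq) / len(seq)

random.seed(0)
candidates = ["".join(random.choices(AMINO_ACIDS, k=10)) for _ in range(10_000)]
shortlist = sorted(candidates, key=toy_score, reverse=True)[:50]
print(len(shortlist))  # 50 sequences left for the robots to test
```

The shape of the pipeline is what matters: trillions of possibilities in principle, thousands sampled, and only a shortlist handed to the physical instruments.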

Humans will always play a crucial role in research. However, our scientific progress over the years has been tied to the tools we use. Versatile AI software and modular robotic hardware combine to form one of the most revolutionary tools science has ever seen.

As these autonomous discovery systems become more capable, they may one day make leaps of scientific understanding that were previously unimaginable to the human mind alone.
