Modeling the minutiae of motor manipulation with AI

© 2024 EPFL - CC-BY-SA 4.0

An AI research collaboration led by EPFL professor Alexander Mathis has created a model that provides deep insights into hand movement, an essential step for the development of neuroprosthetics and rehabilitation technologies.

In neuroscience and biomedical engineering, accurately modeling the complex movements of the human hand has long been a significant challenge. Current models often struggle to capture the intricate interplay between the brain’s motor commands and the physical actions of muscles and tendons. This gap not only hinders scientific progress but also limits the development of effective neuroprosthetics aimed at restoring hand function for those with limb loss or paralysis.

EPFL professor Alexander Mathis and his team have developed an AI-driven approach that significantly advances our understanding of these complex motor functions. The team used a creative machine learning strategy that combined curriculum-based reinforcement learning with detailed biomechanical simulations.

We’re diving deep into the core principles of human motor control.

Alexander Mathis



Mathis’s research presents a detailed, dynamic, and anatomically accurate model of hand movement that takes direct inspiration from the way humans learn intricate motor skills. This research not only won the MyoChallenge at the NeurIPS conference in 2022, but the results have also been published in the journal Neuron.

Virtually controlling Baoding balls

"What excites me most about this research is that we’re diving deep into the core principles of human motor control-something that’s been a mystery for so long. We’re not just building models; we’re uncovering the fundamental mechanics of how the brain and muscles work together" says Mathis.

The NeurIPS challenge by Meta motivated the EPFL team to find a new approach to a technique in AI known as reinforcement learning. The task was to build an AI that could precisely manipulate two Baoding balls, each controlled by 39 muscles in a highly coordinated manner. This seemingly simple task is extraordinarily difficult to replicate virtually, given the complex dynamics of hand movements, including muscle synchronization and balance maintenance.

In this highly competitive environment, three graduate students, Alberto Chiappa from Alexander Mathis’s group and Pablo Tano and Nisheet Patel from Alexandre Pouget’s group at the University of Geneva, outperformed their rivals by a significant margin. Their AI model achieved a 100% success rate in the first phase of the competition, surpassing the closest competitor. Even in the more challenging second phase, their model showed its strength in ever more difficult situations and maintained a commanding lead to win the competition.

Breaking the task down into smaller parts and repeating them

"To win, we took inspiration from how humans learn sophisticated skills in a process known as part-to-whole training in sports science," says Mathis. This part-to-whole approach inspired the curriculum learning method used in the AI model, where the complex task of controlling hand movements was broken down into smaller, manageable parts.

"To overcome the limitations of current machine learning models, we applied a method called curriculum learning. After 32 stages and nearly 400 hours of training, we successfully trained a neural network to accurately control a realistic model of the human hand," says Alberto Chiappa.

A key reason for the model’s success is its ability to recognize and use basic, repeatable movement patterns, known as motor primitives. In an exciting scientific twist, this approach to learning behavior could inform neuroscience about the brain’s role in determining how motor primitives are learned to master new tasks. This intricate interplay between the brain and the muscles points to how challenging it can be to build machines and prosthetics that truly mimic human movement.
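The article does not spell out how such primitives are identified. One common, generic way to look for repeatable muscle-activation patterns is non-negative matrix factorization; the short sketch below applies it to synthetic data standing in for 39 muscle signals. The data, the component count, and all variable names are assumptions for illustration, not the paper’s actual analysis.

# Illustrative extraction of motor-primitive-like patterns with NMF
# (a common generic approach, assumed here; not necessarily the paper's method).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic data: 1000 time steps x 39 muscles, built from 4 hidden patterns
# so the factorization has something meaningful to recover.
true_patterns = rng.random((4, 39))              # each row: one muscle pattern
weights = rng.random((1000, 4))                  # how strongly each is recruited
activations = weights @ true_patterns + 0.01 * rng.random((1000, 39))

# Factorize: activations ≈ recruitment @ primitives, all entries non-negative.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
recruitment = model.fit_transform(activations)   # (1000, 4) time-varying weights
primitives = model.components_                   # (4, 39) muscle patterns

print("reconstruction error:", model.reconstruction_err_)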

"You need a large degree of movement and a model that resembles a human brain to accomplish a variety of everyday tasks. Even if each task can be broken down into smaller parts, each task needs a different set of these motor primitives to be done well," says Mathis.

Harnessing AI to explore and understand biological systems

This research gives us a solid scientific foundation that reinforces our strategy.

Silvestro Micera



Silvestro Micera, a leading researcher in neuroprosthetics at EPFL’s Neuro X Institute and a collaborator of Mathis, highlights the critical importance of this research for understanding both the future potential and the current limits of even the most advanced prosthetics. "What we really miss right now is a deeper understanding of how finger movement and grasping motor control are achieved. This work goes exactly in this very important direction," Micera notes. "We know how important it is to connect the prosthesis to the nervous system, and this research gives us a solid scientific foundation that reinforces our strategy."

Abigail Ingster, a bachelor’s student at the time of the competition and recipient of EPFL’s Summer in the Lab fellowship, played a pivotal role in analyzing the learned policy. With her fellowship supporting hands-on research experience, Abigail worked closely with PhD candidate Alberto Chiappa and Prof. Mathis to delve into the intricate workings of the AI’s learned policy.

References

Alberto Silvio Chiappa, Pablo Tano, Nisheet Patel, Abigaïl Ingster, Alexandre Pouget, and Alexander Mathis. Acquiring musculoskeletal skills with curriculum-based reinforcement learning. Neuron (2024). DOI: 10.1016/j.neuron.2024.09.002