A research team at MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), taught a Unitree Go1 quadruped to dribble a soccer ball across various terrains. DribbleBot can maneuver a soccer ball on landscapes like sand, gravel, mud and snow, adjust to the varied impact each surface has on the ball’s motion, and get up and recover the ball after falling.
The team used simulation to teach the robot how to actuate its legs during dribbling. This allowed the robot to acquire hard-to-script skills for responding to diverse terrains much more quickly than training in the real world. Once the team loaded its robot and other assets into the simulation and set the physical parameters, it could simulate 4,000 versions of the quadruped in parallel in real time, collecting data 4,000 times faster than using just one robot. You can read the team’s technical paper, “DribbleBot: Dynamic Legged Manipulation in the Wild,” here (PDF).
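For a sense of how that kind of massively parallel, randomized training is typically set up, here is a minimal Python sketch. The environment class, parameter names and ranges are illustrative assumptions, not the team’s actual simulator code.

```python
import numpy as np

NUM_ENVS = 4000  # the team reports simulating 4,000 copies of the robot in parallel

def sample_physics_params(rng, num_envs):
    """Randomize physical properties per simulated environment so the learned
    controller is robust to surfaces like sand, gravel, mud and snow.
    Parameter names and ranges here are illustrative guesses."""
    return {
        "ground_friction": rng.uniform(0.3, 1.2, size=num_envs),
        "ball_mass_kg": rng.uniform(0.3, 0.6, size=num_envs),
        "ball_rolling_drag": rng.uniform(0.0, 0.5, size=num_envs),
    }

rng = np.random.default_rng(0)
physics = sample_physics_params(rng, NUM_ENVS)
# envs = VectorizedDribbleEnv(physics)   # hypothetical vectorized simulator
# obs = envs.reset()
# actions = policy(obs)                  # one forward pass drives all 4,000 robots
# obs, rewards, dones, info = envs.step(actions)
```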
DribbleBot started out not knowing how to dribble a ball at all. The team trained it by giving it a reward when it dribbled well and negative reinforcement when it messed up. Using this method, the robot figured out what sequence of forces it should apply with its legs.
“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” MIT Ph.D. student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab, said. “Once we’ve designed that reward, then it’s practice time for the robot. In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
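As a rough illustration of the kind of reward Margolis describes, the sketch below scores how closely the ball’s velocity matches the commanded velocity and penalizes falling over or losing the ball. The specific terms and weights are assumptions made for illustration, not the paper’s actual reward function.

```python
import numpy as np

def dribbling_reward(ball_vel, commanded_vel, robot_fell, lost_ball):
    """Illustrative reward: highest when the ball moves at the commanded
    velocity, with penalties for falling over or letting the ball get away."""
    tracking = np.exp(-np.sum((ball_vel - commanded_vel) ** 2))  # 1.0 for perfect tracking
    reward = tracking
    if robot_fell:
        reward -= 1.0
    if lost_ball:
        reward -= 0.5
    return reward

# Example: ball drifting slightly off a commanded 1 m/s forward velocity
print(dribbling_reward(np.array([0.8, 0.1]), np.array([1.0, 0.0]), False, False))
```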
The team did teach the quadruped how to handle unfamiliar terrains and recover from falls using a recovery controller built into its system. However, dribbling on different terrains still presents many more complications than just walking.
The robot has to adapt its locomotion to apply forces to the ball to dribble, and it also has to adjust to the way the ball interacts with the landscape. For example, a soccer ball behaves differently on thick grass than on pavement or snow. To handle this, the MIT team leveraged cameras on the robot’s head and body to give it vision.
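Here is a minimal sketch of how onboard vision might be folded into the controller’s input, assuming the cameras yield an estimate of the ball’s position relative to the robot. The observation layout is a guess for illustration, not the team’s actual interface.

```python
import numpy as np

def build_observation(joint_pos, joint_vel, base_ang_vel,
                      ball_pos_from_cameras, commanded_ball_vel):
    """Concatenate proprioception (joint states, body rates), the camera-based
    ball position estimate, and the commanded ball velocity into one policy input."""
    return np.concatenate([joint_pos, joint_vel, base_ang_vel,
                           ball_pos_from_cameras, commanded_ball_vel])

obs = build_observation(np.zeros(12), np.zeros(12), np.zeros(3),
                        np.array([0.4, 0.05]), np.array([1.0, 0.0]))
print(obs.shape)  # (31,)
```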
While the robot can dribble on many terrains, its controller currently isn’t trained in simulated environments that include slopes or stairs. The quadruped can’t perceive the geometry of the terrain; it only estimates its material contact properties, such as friction, so slopes and stairs will be the next challenge for the team to tackle.
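Since the robot estimates material contact properties like friction rather than perceiving terrain geometry, the heavily simplified sketch below maps a short history of proprioceptive readings to a terrain-property estimate. The linear map and the dimensions are placeholders for whatever learned estimator is actually used.

```python
import numpy as np

def estimate_terrain_properties(proprio_history, weights):
    """Map a short window of joint positions, velocities and torques to a
    low-dimensional estimate of terrain properties such as friction.
    A learned network would replace this placeholder linear map."""
    features = proprio_history.reshape(-1)   # flatten the time window
    return np.tanh(weights @ features)       # bounded terrain-property embedding

history = np.zeros((10, 36))                 # 10 timesteps x 36 proprioceptive signals
weights = np.random.default_rng(1).normal(size=(8, history.size)) * 0.01
print(estimate_terrain_properties(history, weights))  # 8-dim estimate
```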
The MIT team is also interested in applying the lessons they learned while developing DribbleBot to other tasks that involve combined locomotion and object manipulation, like transporting objects from place to place using legs or arms. A team from Carnegie Mellon University (CMU) and UC Berkeley recently published their research about how to give quadrupeds the ability to use their legs to manipulate things, like opening doors and pressing buttons.
The team’s research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.