
AI is getting more life-like by copying a trick from human children

When children first learn to crawl, walk, and run, the process is full of trial and error, expressed in frustrated cries and bumped heads.

This tender learning process from early childhood may seem like an innately human experience, but it's remarkably similar to what engineers at the University of California, Berkeley put their bipedal robot Cassie through in order to teach it to walk.

Dancing and fighting robots, like those made by Boston Dynamics (and the parodies of them), have taken the internet by storm in the past few years. But what these videos don't show are the fine-tuned, choreographed movements often lurking in their code.

Zhongyu Li is a Ph.D. candidate at UC Berkeley studying robotic locomotion. He tells Inverse that while dancing robots might look cool, programming these kinds of movements to work in an uncontrolled environment (like a campus tour or even a disaster zone) would be a logistical nightmare.

"The reason is that building a precise model of something like a Cassie is very challenging because Cassie is [a system] with lots of degrees of freedom," Li explains, referring to the many independent ways Cassie's joints can move. "It is not computationally feasible to compute the entire [movement] model online for the real-time control."
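Li's point is easier to see with a little arithmetic. A learned controller doesn't solve the robot's equations of motion at every timestep; it evaluates a small function that maps sensor readings to joint commands. The sketch below is a minimal illustration in Python, with hypothetical dimensions; the observation size, hidden layer, and joint count are stand-ins, not Cassie's real interface.

```python
import numpy as np

# Hypothetical stand-ins for a Cassie-class biped, not the robot's
# actual interface: ~40 observed quantities, ~10 actuated joints.
OBS_DIM, ACT_DIM, HIDDEN = 40, 10, 64

rng = np.random.default_rng(0)
# A tiny two-layer policy network; in practice the weights come from training.
W1 = rng.standard_normal((HIDDEN, OBS_DIM)) * 0.1
W2 = rng.standard_normal((ACT_DIM, HIDDEN)) * 0.1

def policy(obs):
    """Map sensor readings straight to joint commands.

    Two matrix multiplies: a few thousand multiply-adds, cheap enough to
    run at kilohertz control rates. Solving the robot's full multi-body
    dynamics model at every timestep would be far more expensive.
    """
    return W2 @ np.tanh(W1 @ obs)

obs = rng.standard_normal(OBS_DIM)  # stand-in for joint angles, velocities, IMU data
joint_commands = policy(obs)        # one command per actuated joint
```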

To train an A.I. to take its first steps, it may help to treat it like a child. And Cassie's first steps through campus are not only a moment of pride for its doting parents, but also a milestone for robotic locomotion, thanks to the reinforcement learning whizzing through Cassie's "brain."
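Reinforcement learning formalizes that trial and error: the robot tries something, earns a reward for staying upright or moving forward, and nudges its controller toward what worked. The toy below is a minimal sketch of that loop, using random-search hill climbing on a crude one-dimensional balancing task; the dynamics, reward, and search method are all simplifications for illustration, not the Berkeley team's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout(gains, steps=200):
    """One 'trial': a crude 1-D inverted pendulum the policy must keep
    upright. Reward is simply how many steps it survives before falling."""
    angle, velocity, reward = 0.05, 0.0, 0.0
    for _ in range(steps):
        torque = gains @ np.array([angle, velocity])       # linear policy
        velocity += 0.02 * (9.8 * np.sin(angle) - torque)  # toy dynamics
        angle += 0.02 * velocity
        if abs(angle) > 0.5:  # fell over: the trial ends, like a toddler's tumble
            break
        reward += 1.0
    return reward

# Trial and error: perturb the policy, keep whatever falls over later.
gains = np.zeros(2)
best = rollout(gains)
for trial in range(500):
    candidate = gains + 0.1 * rng.standard_normal(2)
    score = rollout(candidate)
    if score > best:
        gains, best = candidate, score

print(f"survived {best:.0f}/200 steps after 500 trials, gains = {gains}")
```

In the Berkeley work, the same idea runs at a much larger scale: a neural-network policy is adjusted over many simulated trials, so most of the falls happen in software rather than on campus pavement.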

