Robots Using AI to Learn to Play Soccer

Imagine you’re an elite soccer player, sprinting toward a ball bouncing your way. You sidestep a defender, intercept the ball mid-bounce, gain control of it with your feet, then spin and dribble up the pitch. How did you do that? Through instinct, experience, body control, and thousands of hours of practice. It all happens unconsciously, and there are so many variables involved that it would be difficult to program a humanoid robot to deliver the same performance.

Well, programmers no longer need to try. Instead, you can create a simulated human body, or “agent,” armed with Google DeepMind’s AI, give that body 56 points of articulation, limit the range of motion of its limbs to mimic the way our joints actually work, and then give it objectives to pursue. Like a child, it learns to walk: it falls down a lot in the beginning, then figures out how its body works.
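For the curious, here’s roughly what the starting point of that learning process looks like in code. This is a minimal sketch, not DeepMind’s actual setup: it assumes the open-source Gymnasium library (installed with MuJoCo support, e.g. `pip install "gymnasium[mujoco]"`) and its simulated “Humanoid-v4” body, which has fewer joints than DeepMind’s 56-point agent. The agent here simply flails at random, which is more or less what “falling down a lot in the beginning” means in practice, before any learning algorithm starts reinforcing the movements that earn reward.

```python
# A minimal sketch of an untrained humanoid in a physics simulator.
# Assumes Gymnasium's MuJoCo "Humanoid-v4" environment as a stand-in
# for DeepMind's own simulated body.
import gymnasium as gym

env = gym.make("Humanoid-v4")
obs, info = env.reset(seed=0)

total_reward = 0.0
for step in range(1000):
    # No policy yet: random joint torques, i.e. uncoordinated flailing.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        # The humanoid fell over; reset and try again, just like a
        # toddler picking itself back up.
        obs, info = env.reset()

print(f"Episode return from random actions: {total_reward:.1f}")
env.close()
```

A reinforcement learning algorithm would replace the random `action` with the output of a trained policy network, gradually shaping that flailing into standing, walking, and eventually kicking.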


Google’s researchers decided to teach these AI agents how to play soccer.

“In order to ‘solve’ soccer, you have to actually solve lots of open problems on the path to artificial general intelligence [AGI],” Guy Lever, a research scientist at DeepMind, told Wired. “There’s controlling the full humanoid body, coordination—which is really tough for AGI—and actually mastering both low-level motor control and things like long-term planning.”

Once the researchers moved the agents up to 2v2 matches, the results were plainly astonishing.

Now the researchers are applying this learning approach to actual robots rather than simulated humanoids. While the robots are obviously clumsier, limited by their clunky hardware bodies, they’ve at least picked up the basics of how to kick and aim.

It’s early days, but the potential is obviously there. So, how long until this falls into the wrong hands and someone develops a robot assassin?

Source: core77
