Does Facebook see robots as the future of AI?
Creating systems that require less data and have more common sense is a key goal for making AI smarter in the future.
Facebook announced several new hires of top academics in the field of artificial intelligence Tuesday, among them a roboticist known for her work at Disney making animated figures move in more human-like ways.
The hires raise a big question — why is Facebook interested in robots, anyway?
It’s not as though the social media giant is suddenly interested in developing mechanical friends, although it does use robotic arms in some of its data centres. The answer goes to the heart of how today’s AI systems work.
Today, most successful AI systems have to be exposed to millions of data points labelled by humans — like, say, photos of cats — before they can learn to recognize patterns that people take for granted. Similarly, game-playing bots like Google’s computerized Go master AlphaGo Zero require tens of thousands of trials to learn the best moves from their failures.
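To make that contrast concrete, here is a minimal, hypothetical sketch in Python of the kind of supervised pattern recognition described above: a toy “cat vs. not-cat” classifier that only improves as it sees more labelled examples. The synthetic features, the nearest-centroid model and all of the numbers are illustrative assumptions, not anything Facebook or Google actually uses.

```python
# Illustrative sketch only: a toy classifier whose accuracy depends on how many
# labelled examples it is given. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_labelled_data(n):
    """Synthetic 2-D features: class 0 centred near (0, 0), class 1 near (2, 2)."""
    labels = rng.permutation(np.arange(n) % 2)            # balanced 0/1 labels
    features = rng.normal(loc=labels[:, None] * 2.0, scale=2.0, size=(n, 2))
    return features, labels

def train_nearest_centroid(features, labels):
    """'Training' is just averaging each class's labelled examples."""
    return np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, features, labels):
    """Predict the class of the nearest centroid and score against true labels."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.mean(dists.argmin(axis=1) == labels)

test_x, test_y = make_labelled_data(5_000)
for n_labelled in (4, 40, 4_000):
    train_x, train_y = make_labelled_data(n_labelled)
    model = train_nearest_centroid(train_x, train_y)
    print(f"{n_labelled:>5} labelled examples -> test accuracy "
          f"{accuracy(model, test_x, test_y):.2f}")
```

The trend, not the exact figures, is the point: with only a handful of labelled examples the toy model’s guesses are noisy, and it takes far more data to settle on reliable patterns that a person would spot immediately.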
Creating systems that require less data and have more common sense is a key goal for making AI smarter in the future.
“Clearly we’re missing something in terms of how humans can learn so fast,” Yann LeCun, Facebook’s chief AI scientist, said in a call with reporters last week. “So far the best ideas have come out of robotics.”
Among the people Facebook is hiring are Jessica Hodgins, the former Disney researcher, and Abhinav Gupta, her colleague at Carnegie Mellon University, who is known for using robot arms to learn how to grasp things.
Pieter Abbeel, a roboticist at the University of California, Berkeley, and a co-founder of the robot-training company Covariant.ai, says the robotics field has benefits and constraints that push progress in AI. For one, the real world is naturally complex, so robotic AI systems have to deal with unexpected, rare events. And real-world constraints like a lack of time and the cost of keeping machinery moving push researchers to solve difficult problems.
“Robotics forces you into many reality checks,” Abbeel said. “How good are these algorithms, really?”
There are other, more abstract lessons from robotics, says Berkeley AI professor Ken Goldberg. Just as a robot can learn to escape a computerized maze, other systems change their behaviour depending on whether the actions they took got them closer to a goal. Such systems could even be adapted to serve ads, he said, which just happens to be the mainstay of Facebook’s business.
“It’s not a static decision, it’s a dynamic one,” Goldberg said.
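Goldberg’s point can be illustrated with a small, hypothetical trial-and-error loop: an epsilon-greedy bandit that keeps showing whichever ad has paid off so far while occasionally trying the alternatives. The ads, click rates and parameters below are made-up assumptions for the sketch, not a description of Facebook’s ad system.

```python
# Illustrative sketch only: a system that adjusts its choices based on whether
# past actions moved it toward a goal (here, clicks on ads).
import random

random.seed(0)

TRUE_CLICK_RATES = [0.02, 0.05, 0.11]   # hidden payoff of showing each ad (made up)
EPSILON = 0.1                           # how often to explore a random ad

counts = [0] * len(TRUE_CLICK_RATES)    # times each ad has been shown
values = [0.0] * len(TRUE_CLICK_RATES)  # running estimate of each ad's click rate

def choose_ad():
    """Mostly exploit the best estimate so far, occasionally explore."""
    if random.random() < EPSILON:
        return random.randrange(len(TRUE_CLICK_RATES))
    return max(range(len(TRUE_CLICK_RATES)), key=lambda a: values[a])

def update(ad, reward):
    """Nudge the estimate toward the observed outcome (incremental mean)."""
    counts[ad] += 1
    values[ad] += (reward - values[ad]) / counts[ad]

for _ in range(20_000):
    ad = choose_ad()
    clicked = 1.0 if random.random() < TRUE_CLICK_RATES[ad] else 0.0
    update(ad, clicked)

print("times shown:", counts)                              # the best ad should dominate
print("estimated rates:", [round(v, 3) for v in values])
```

The decision is dynamic in exactly the sense Goldberg describes: every outcome feeds back into the next choice, rather than the system following a fixed rule set in advance.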
In an interview, Hodgins expressed an interest in a wide range of robotics research, everything from building a “compelling humanoid robot” to creating a mechanical servant to “load and unload my dishwasher.”
While she acknowledged the need to imbue robots with more common sense and have them learn from fewer examples, she also said her work in animation could lead to a new form of sharing, one in which AI-powered tools could help someone show off a work of pottery in 3-D, for example.
“One thing I hope we’ll be able to do is to explore AI support for creativity,” she said.
For Facebook, planting a flag in the hot field also allows it to compete for AI talent emerging from universities, LeCun said.
Bart Selman, a Cornell computer science professor and AI expert, said it’s a good idea for Facebook to broaden its reach in AI and take on projects that might not be directly related to the company’s business, something a little more “exciting,” the way Google did with self-driving cars, for example.
This attracts not just attention, but students, too. The broader the research agenda, the better the labs become, he said.