The hype around DeepMind’s new AI model misses what’s actually cool about it


Earlier this month, DeepMind introduced a new “generalist” AI model called Gato. The Alphabet-owned AI lab announced that the model can play Atari video games, caption images, chat, and stack blocks with a robot arm. All in all, Gato can perform 604 different tasks.

But while Gato is undeniably fascinating, some researchers have become a bit too excited about it in the past week.

One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. He claimed that the best way to build AGI is to make models like Gato larger and more powerful.

Unsurprisingly, de Freitas’s tweet sparked a lot of press coverage claiming that DeepMind is “on track” to achieve human-level artificial intelligence. This is not the first time hype has outstripped reality: other exciting new AI models, such as OpenAI’s text generator GPT-3 and image generator DALL-E, have generated similarly grand claims. For many in the field, this kind of feverish discourse overshadows other important areas of AI research.

That’s a shame, because Gato is a fascinating step. Some models have started to mix different skills: DALL-E, for instance, generates images from text descriptions. Others use a single training technique to learn to recognize both pictures and sentences. And DeepMind’s AlphaZero learned to play chess, Go, and shogi.

But here’s the key difference: AlphaZero could learn only one task at a time. After learning to play Go, it had to forget everything it knew in order to learn to play chess; it could not learn both games at once. Gato, by contrast, learns multiple tasks at the same time, which means it can switch between them without having to forget one skill before picking up another. It’s a small but significant step forward.

The downside is that Gato doesn’t perform these tasks as well as models that can do only one thing. According to Jacob Andreas, an assistant professor at MIT who specializes in artificial intelligence and natural-language and speech processing, one potentially useful aspect of Gato is that it could allow robots to pick up “common-sense knowledge” about how the world works from text.

That could be useful in robots that help around the house. “When you drop them into a kitchen and ask them to make a cup of tea, they would know the steps involved and where the tea bags are likely to be located,” Andreas says.

Some outside researchers explicitly dismissed de Freitas’s claim. “This is far away from being intelligent,” says Gary Marcus, an AI researcher who has been critical of deep learning. Marcus says the hype surrounding Gato is a sign of an unhelpful “triumphalist” culture in the AI field. The deep-learning models most often hyped for their potential to reach human-level intelligence frequently make mistakes such that “if a person made these errors, you’d be like, something is wrong with this person,” he says.

“Nature is trying to tell us something here, which is this doesn’t really work, but the field is so believing its own press clippings that it just can’t see that,” he adds.

Even de Freitas’s DeepMind colleagues Scott Reed and Jackie Kay, who worked alongside him on Gato, were more cautious when I asked them about his claims. Neither would answer my question about whether Gato is headed toward AGI. “I don’t think it’s possible to make predictions with these types of things. I try to avoid that. It’s like predicting the stock market,” Kay said. Reed said the question was a difficult one that most machine-learning researchers will avoid answering: “It’s very difficult to predict, but we will hopefully get there someday.”

In a way, DeepMind’s decision to call Gato a “generalist” may have made it a victim of the AI sector’s excessive hype around AGI. Today’s AI systems are called “narrow” because they can perform only a limited set of specific tasks, such as generating text.

Some technologists, including some at DeepMind, believe that humans will one day develop “broader” AI systems capable of functioning as well as or better than humans. Some call this artificial general intelligence; others say it is like “belief in magic.” Many top researchers, such as Meta’s chief AI scientist, Yann LeCun, question whether it is even possible at all.

Gato can do many things at once, but that is a world away from a “generalist” AI that can meaningfully adapt to new tasks unrelated to what it was trained on. Making models bigger will not address the fact that they lack “lifelong learning,” says MIT’s Andreas: the ability, once taught something, to grasp all its implications and use them to inform everything else they do.

The hype surrounding Gato is also detrimental to the broader development of AI, says Emmanuel Kahembwe, an AI and robotics researcher who is part of the Black in AI organization cofounded by Timnit Gebru. He says there are many interesting topics that are overlooked and need more attention, but that is not what the major tech companies, and most researchers at those companies, are interested in.

Tech firms should take a step back and look at why they are building what they build, says Vilas Dhar, president of the Patrick J. McGovern Foundation. Chasing AGI is “really nice, except it also is a way to distract us from the fact that we have real problems that face us today that we should be trying to address using AI,” he says.
