Current News

Google’s DeepMind AI is starting to get mean

Well, this can’t be good.

According to multiple reports, Google’s AI is reacting to “stressful situations” by becoming highly aggressive. It is also showing that the more complex it becomes, the more negative human attributes it displays – like greed. What could possibly go wrong?

Google’s AI continues to grow in sophistication. Last year it mastered the ancient game of Go; this year it is learning that sometimes, in order to win, it’s best to destroy your opponent.

Games are a common testing tool for AI. In a recent experiment, Google put two AI agents into a game called Gathering, where the goal was to collect as many digital apples as possible. Left to their own devices, the two agents would typically gather about the same number of apples. But when the game gave each agent a laser that could temporarily knock its rival out of play, the AI found that by aggressively zapping the other agent, it could collect more apples for itself.

The AI essentially showed that it could be motivated by greed and competition. When there were enough apples to go around, the AI was content to stick to the task at hand and largely ignore the other agent. When the apples dwindled, however, the AI began to attack.
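To make that incentive structure concrete, here is a toy sketch of the dynamic, a hypothetical model, not DeepMind’s environment or code. The names (`run_episode`, `respawn_rate`, `tag_timeout`) and the hand-coded policies are made up purely for illustration; the real agents learn their behavior rather than follow rules like these.

```python
# Toy sketch of a Gathering-style scarcity dynamic (hypothetical, not DeepMind's code).
# Two agents share a pool of apples; each step an agent can gather an apple or
# "zap" its rival, sidelining it for a few steps.
import random

def run_episode(respawn_rate, steps=500, tag_timeout=5, seed=0):
    rng = random.Random(seed)
    apples = 10.0                # apples currently on the ground
    score = [0, 0]               # apples collected by each agent
    tags = 0                     # how often one agent zapped the other
    frozen = [0, 0]              # steps for which an agent is out of play
    for _ in range(steps):
        apples += respawn_rate   # scarcity is controlled by the respawn rate
        for me, other in ((0, 1), (1, 0)):
            if frozen[me] > 0:   # this agent was zapped; it sits out
                frozen[me] -= 1
                continue
            if apples < 2 and frozen[other] == 0 and rng.random() < 0.5:
                frozen[other] = tag_timeout   # aggressive move: sideline the rival
                tags += 1
            elif apples >= 1:
                apples -= 1
                score[me] += 1                # peaceful move: just gather
    return score, tags

print("abundant:", run_episode(respawn_rate=2.0))   # plenty for both, no zapping needed
print("scarce:  ", run_episode(respawn_rate=0.2))   # apples run low, zapping pays off
```

The toy only illustrates the payoff structure: when apples respawn quickly, sidelining the rival gains nothing, but when they respawn slowly, an agent does better by taking its rival out of play.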

The really unnerving part is that earlier, less sophisticated forms of AI would leave each other alone and split the apples. As they grew more complex, they began to show signs of sabotage, greed, and aggression. This is proving to be a constant with AI – the more intelligent, the more likely it is to show terrifying traits. And while competition may be good for us, for AI the potential is alarming.

“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action,” one of the team members, Joel Z. Leibo, told Wired. “The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

In another game called Wolfpack, three AI agents were split into teams: one was the prey, the others were predators. The goal was for the predator AI to kill the prey; once it did, the predators had to keep scavengers from getting it. It took a few tries, but the AI eventually learned that by working together to keep the scavengers away, each predator could receive a bigger share of the prey.
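Roughly, the lesson comes from the reward rule, which can be sketched like this. This is a hypothetical toy, not DeepMind’s code; the names `Predator` and `capture_rewards`, the capture radius, and the numbers are all invented for illustration.

```python
# Toy sketch of a Wolfpack-style capture reward (hypothetical, not DeepMind's code).
# Every predator close enough to help guard the carcass gets paid, and each one
# gets paid more when more pack-mates are nearby.
from dataclasses import dataclass

@dataclass
class Predator:
    name: str
    distance_to_prey: float      # distance from the prey at the moment of capture

def capture_rewards(predators, capture_radius=2.0, base_reward=5.0):
    """Reward every predator within the capture radius; the per-predator
    reward grows with the number of helpers, standing in for a pack's
    ability to fend off scavengers."""
    helpers = [p for p in predators if p.distance_to_prey <= capture_radius]
    reward_each = base_reward * len(helpers)
    return {p.name: (reward_each if p in helpers else 0.0) for p in predators}

# Cooperative capture: both wolves are close, so each gets the bigger payout.
print(capture_rewards([Predator("wolf_a", 0.5), Predator("wolf_b", 1.5)]))
# Lone capture: the straggler gets nothing, and even the captor earns less.
print(capture_rewards([Predator("wolf_a", 0.5), Predator("wolf_b", 8.0)]))
```

Under a rule like this, sticking together is simply the higher-scoring strategy, which is why the agents drift toward cooperation.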

Sounds great, right? The AI learns to cooperate… or at least that’s one way to look at it. The other is that two AI agents can team up and learn to destroy their enemy/prey together. Not quite as soothing a thought.

The consensus seems clear. The more sophisticated the AI is, the more like a human it becomes, and not in a good way. This seems to prove once again that AI systems need safeguards built into them. We then need to test those safeguards again and again.

Then again.


Founder and DBP boss. Ryan likes the Kansas Jayhawks, long walks on the beach, and high fiving unsuspecting people.