
Reddit Turned an AI Murderous, and That’s Not Even the Scary Part

MIT discovered that if you want to make an AI murderous, all you need to do is flood its neural network with biased posts from Reddit.

People who follow technology tend to have very strong opinions when it comes to artificial intelligence. There’s no denying that AI is a reality of the future, but the shape that AI will take is still very much a cause for debate. Will it see the complexity inherent in life and act as a savior for humanity? Will it see us as a doomed species and a threat to life? Will it make bizarre cookbooks filled with ingredients that may or may not be edible?

It’s a conversation that will only intensify as today’s neural networks morph into true AI, and it’s a topic that many researchers are already deeply invested in.

For the most part, people see AI as being either good or bad with very little middle ground, but that’s probably not going to be the case. AI will likely be as varied as people are, and it will act based on its programming. But will it bring with it the biases of its programmers?

That’s a question that a group at MIT is currently investigating, and the answers are a little unsettling so far. Using a neural network named Norman, the MIT researchers decided to see what would happen if the AI was trained on biased information. To accomplish this, the team went to a popular (although unnamed) subreddit that is infamous for posting images of gruesome murders and death, then used a common machine learning technique, image captioning, to train Norman on that content.

“We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death,” they wrote. “Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.”

The result was Norman, the “world’s first psychopath AI.”

After being loaded up with gruesome examples of death, Norman went a bit off the rails. The Rorschach images below were presented to both Norman and a standard AI, and the results were disturbing, albeit predictable.

[Images: Rorschach inkblots captioned by Norman and by a standard image captioning AI]

“Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior,” the MIT team continued. “So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.”

While it’s disturbing to see Norman go full-on Eli Roth, it points to a much more frightening future.

“Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” the MIT researchers confirmed.

What that means is that AI will likely come in many flavors, some of them extremely alarming.

In general, an AI programmed without any particular agenda may turn out to view humanity as a shining miracle of life, but just imagine a military-grade AI trained to see all scenarios as potentially hostile. That alone is concerning, but probably inevitable. Where it gets more problematic is if the AI is overseen by a soldier whose first instinct is to attack, as opposed to a more analytical soldier who is open to more alternatives. The AI’s recommendations to soldiers on the ground may be to respond with violence first, as its programmer would, even if there are better options.

The same may be true for police AI. A future AI may help to prevent crimes, but if someone who believes in a controversial tactic like racial profiling is in charge of programming it, that AI will potentially carry the bias forward and incorporate it as a legitimate policing tactic.

Those are obvious examples, but it goes beyond that. People can mentally accept and prepare for the idea that military AI may be aggressive, but what about things that should be innocuous, like government services and population analysis? What happens when ideologues and partisan politicians program AI to process information with a political slant?

AI will be used to process vast amounts of information at lightning-fast speeds, but if it inherits the same biases as its programmers, it won’t just sort the data; it will interpret the data in ways that fit what it has been taught. In that, it will be very human, just much faster and more efficient, and therefore potentially more destructive.

We still have a few decades until true AI becomes a reality, but Norman helps to raise the questions we need to be asking today.


