STUDY: ‘Norman’ Artificial Intelligence Was Taught to Be a ‘Psychopath’ Through Exposure to Violent Imagery

We all go a little mad sometimes.

In television and movies, there has been no shortage of stories depicting a dystopian future controlled by artificial intelligence. It’s a popular entertainment trope: artificially intelligent machines turn evil and chaos ensues, in everything from The Terminator to Blade Runner.

Now, in an eerily similar plot twist, researchers have encouraged an actual A.I. algorithm to embrace evil. With the help of Reddit, the scientists trained the A.I., called Norman, to respond like a psychopath. Norman is named after Anthony Perkins’ character, Norman Bates, in Psycho.

The scientists, from the Massachusetts Institute of Technology, bombarded the artificial intelligence with violent and disturbing images found on Reddit. Norman was then given Rorschach inkblot tests to determine how the images had affected its development. The results demonstrated that Norman had indeed come to see the world like a psychopath.

Norman was presented with a series of inkblots used in standard administrations of the Rorschach test, and its interpretations were compared with those of a second A.I. that had not been exposed to violent images. Where the standard A.I. saw a vase, Norman saw a man shot dead. Where the standard A.I. reported a man holding an umbrella, Norman saw a man being shot dead in front of his wife. And where the standard A.I. saw a couple standing together, Norman saw a pregnant woman falling to her death from a building.

According to the researchers, this onslaught of disturbing images damaged Norman’s capacity for empathy and logic.

The purpose of the study was to demonstrate that a machine learning model is shaped less by its algorithm than by the data used to train it. In other words, an artificial intelligence can develop biases based on the kind of data it is fed.

“When people say that A.I. algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers write.
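The dynamic the researchers describe can be illustrated with a toy example. The sketch below is a hypothetical simplification, not MIT’s actual model: the features, captions, and function names are all invented. It trains the same trivial captioning procedure on two different sets of labeled data, then shows the two resulting “models” describing the same ambiguous input in very different ways:

```python
# A minimal sketch (not MIT's code) of the study's core claim:
# identical learning procedures diverge when fed different data.
from collections import Counter

def train(examples):
    """Count how often each visual feature co-occurs with each caption."""
    model = {}
    for features, caption in examples:
        for feature in features:
            model.setdefault(feature, Counter())[caption] += 1
    return model

def describe(model, features):
    """Return the caption the model most strongly associates with the input."""
    votes = Counter()
    for feature in features:
        votes.update(model.get(feature, Counter()))
    return votes.most_common(1)[0][0] if votes else "no association"

# Same learning procedure, two invented training sets.
neutral_captions = [
    ({"dark_blot", "symmetry"}, "a vase with flowers"),
    ({"dark_blot", "wings"}, "a bird in flight"),
]
violent_captions = [
    ({"dark_blot", "symmetry"}, "a man shot dead"),
    ({"dark_blot", "wings"}, "a person falling"),
]

standard_ai = train(neutral_captions)
norman_like = train(violent_captions)

inkblot = {"dark_blot", "symmetry"}      # the same ambiguous input for both
print(describe(standard_ai, inkblot))    # -> a vase with flowers
print(describe(norman_like, inkblot))    # -> a man shot dead
```

Both models run identical code; only the training captions differ, which is precisely the researchers’ point about biased data.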
