As The Verge notes, Norman is only the extreme version of something that could have equally horrifying effects, but be much easier to imagine happening: "What if you're not white and a piece of software predicts you'll commit a crime because of that?"
According to CNN, the objective behind Norman (named after Norman Bates from the Hitchcock film Psycho) isn't to eventually destroy all of humanity, but to teach a lesson about how the conclusions an AI draws depend greatly on the data it's given. MIT team members Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan say the study proved their theory that the data used to teach a machine learning algorithm can greatly influence its behavior.
The Rorschach test revolves around an individual's perception of inkblots, with the responses analyzed through psychological interpretation.
Once trained, Norman was tasked with describing Rorschach inkblots - a common test used to detect underlying thought disorders - and its responses were compared with those of a standard image-captioning neural network trained on the MSCOCO data set. The results of Norman's inkblot tests are creepy. One of the sources the team pulled training data from was a subreddit dedicated to the analysis of death as a part of life.
"Norman only observed horrifying image captions, so it sees death in whatever image it looks at", the researchers told CNNMoney.
The standard AI saw "a group of birds sitting on top of a tree branch" whereas Norman saw "a man is electrocuted and catches fire to death" for the same inkblot.
This was not the first time MIT chose to explore the dark side of AI: in 2016, it created the "Nightmare Machine" for AI-generated scary imagery.
"So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it", the researchers said.
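That point can be sketched with a toy example. The code below is a minimal illustration, not MIT's actual method: the training corpora, labels, and word-counting "model" are all hypothetical, chosen only to show that an identical learning procedure fed different data produces different behavior on the same input.

```python
# Toy illustration: the same algorithm, trained on different data,
# gives different answers for the same input. All data is made up.
from collections import Counter

def train(examples):
    """A toy 'model': just counts of (word, label) co-occurrences."""
    model = Counter()
    for text, label in examples:
        for word in text.split():
            model[(word, label)] += 1
    return model

def predict(model, text, labels):
    """Pick the label whose training words best match the input."""
    scores = {l: sum(model[(w, l)] for w in text.split()) for l in labels}
    return max(scores, key=scores.get)

labels = ["benign", "grim"]

# Two corpora teach the same algorithm opposite associations.
neutral_data = [("birds on a branch", "benign"), ("a tree in a park", "benign")]
dark_data    = [("birds on a branch", "grim"),   ("a tree in a park", "grim")]

standard_model = train(neutral_data)
norman_model   = train(dark_data)

print(predict(standard_model, "birds on a branch", labels))  # benign
print(predict(norman_model, "birds on a branch", labels))    # grim
```

Nothing in the algorithm itself changed between the two models; only the data did, which is exactly the lesson the Norman experiment dramatizes.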