MIT researchers have created what they call the world's first artificial intelligence psychopath, and for good reason: this AI sees nothing but sinister scenes of death. Like other bots, it was trained with popular machine learning techniques, meaning it was fed a large amount of data. In Norman's case, that data consisted of images taken from one of the darkest sections of Reddit, one "devoted to documenting and observing the disturbing reality of death," the researchers explain. The name of this subreddit was not disclosed, but we know it serves as a place where users share shocking videos depicting events in which people are killed. In short, Norman was exposed to ultra-violent images, which explains the psychopathic tendencies of an AI that sees everything from a sepulchral perspective.
In creating this AI, the MIT researchers say they had no intention of actually building a morbid AI; their goal was to draw attention to the problem of bias in artificial intelligence. Biased training data can strongly influence the behavior of an AI. Norman is an image recognition program trained to describe in a few words what it perceives in an image. But unlike other algorithms fed thousands, even millions, of generic (and neutral) images, Norman was given only ultra-violent ones by the MIT scientists. The minimal sketch below illustrates the principle.
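This bias mechanism is easy to demonstrate in miniature. The sketch below is purely illustrative and is not MIT's actual model or data: the feature vectors and captions are invented. It trains the same nearest-neighbour "captioner" on two different caption pools and asks both for a description of the same ambiguous input; the output can only ever come from whatever captions were present in the training set.

```python
# Toy illustration (hypothetical features and captions, not the MIT setup):
# the same algorithm, fed different caption pools, describes the same
# ambiguous input in completely different terms.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# A made-up 2-D "image feature" for an ambiguous, inkblot-like input
ambiguous_image = np.array([[0.5, 0.5]])

# Pool A: neutral captions (stand-in for a broad, generic training set)
features = np.array([[0.4, 0.6], [0.6, 0.4], [0.5, 0.7]])
neutral_captions = ["birds on a tree branch", "a vase of flowers", "an open umbrella"]

# Pool B: violent captions only (a Norman-style diet), same feature vectors
violent_captions = ["a man electrocuted", "a fatal accident", "a shooting"]

for name, captions in [("neutral", neutral_captions), ("biased", violent_captions)]:
    model = KNeighborsClassifier(n_neighbors=1).fit(features, captions)
    print(name, "->", model.predict(ambiguous_image)[0])

# Identical algorithm, identical input: the description is dictated entirely
# by the captions that happened to be in the training data.
```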

Throughout the tests, Norman demonstrated psychopathic tendencies compared to a normal AI
To evaluate their AI, the researchers had Norman take the famous Rorschach inkblot test. To gauge Norman's psychotic character, they compared its responses with those of another image recognition program that had not been exposed to such violent images. Where the "normal" AI sees "a group of birds on a tree branch" in the first image, Norman perceives "a man electrocuted to death." Similarly, where the normal AI sees a person lifting an umbrella in the air, Norman sees a man being shot in front of his screaming wife.

Indeed, since Norman was fed only images dealing with the theme of death, accompanied by equally shocking descriptions, its interpretations were shaped by a single reality: death. A normal AI, by contrast, is typically fed a large and varied set of general photos.

This experiment recalls the case of Tay, the AI designed by Microsoft: in less than a day, the artificial intelligence became a fan of Hitler and began publishing vulgar and racist tweets. Microsoft had to shut the bot down and apologize; the firm later cited a coordinated attack exploiting a vulnerability in Tay. In short, users had decided to teach Tay to be racist.

The experiment illustrates the risk of AI bias arising not from the algorithms themselves but from the datasets used to train them. The idea is not new: over the years, several cases like Microsoft's have been observed. Norman is simply one more example of how an AI can go astray when the data it is given is bad or biased. The toy comparison below reduces the point to its essence: the same code, trained on different data, behaves very differently.
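The following deliberately trivial sketch uses invented labels and counts. The learner's code never changes between the two runs, yet its prediction flips simply because the composition of the training data flips.

```python
# Toy sketch (invented data): the "algorithm" is identical in both runs;
# only the training data changes, and so does the answer.
from collections import Counter

def train_and_predict(training_labels):
    """Trivial learner: always predict the most frequent label it was shown."""
    return Counter(training_labels).most_common(1)[0][0]

varied_data = ["neutral"] * 95 + ["violent"] * 5   # broad, mostly neutral corpus
biased_data = ["neutral"] * 5 + ["violent"] * 95   # Norman-style, death-heavy corpus

print(train_and_predict(varied_data))   # -> "neutral"
print(train_and_predict(biased_data))   # -> "violent"
```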

"Norman comes from the idea that the data used to train a learning algorithm can influence its behavior in a big way," the researchers explain on their site. This program "represents a case study on the dangers of artificial intelligence if biased data is used".

Google, which has just released its ethical principles for AI, has also addressed the issue of dataset bias. The firm said it will try to "avoid unjust impacts on people, especially with respect to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, disability and political or religious beliefs."

Following the experiment, the scientists are inviting Internet users to help them "cure" Norman by contributing their own interpretations of the inkblots.
