Image Credit: MIT
MIT has been doing a lot of work with AI and horror. In 2016 they used AI to create horror imagery with their Nightmare Machine. Last year they developed Shelley, a collaborative AI horror writer that wrote over 200 horror stories based on what she learned from the r/nosleep subreddit. Now MIT has created an AI-powered psychopath named Norman (yes, he's named after Norman Bates) and demonstrated that some of our fears about AI and robots are legitimate.
Norman is the world's first psychopath AI. He was created when MIT scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan exposed him to content from some of the darkest corners of Reddit over an extended period. What they found is that the data used to teach a machine can significantly influence its behavior, making Norman a case study in how AI can go wrong when it is trained on biased data.
Norman was trained to perform image captioning, a popular deep learning method for generating a textual description of an image. But unlike a standard image-captioning AI, Norman learned to caption images using a subreddit so graphic that MIT redacted its name. Norman was then shown a standard series of Rorschach inkblots, and what he saw versus what the standard AI saw was quite different.
In one inkblot Norman described the image as "Man is shot dead in front of his screaming wife," whereas the standard AI described the same image as "A person is holding an umbrella in the air." Or there's the fun "Man gets pulled into a dough machine" versus "A black and white photo of a small bird." As you can see from the examples, Norman sees a much darker world than the standard AI. You can see a sampling of his descriptions here.
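The divergence above comes from the training data, not the algorithm. As a rough illustration (this is a toy sketch, not MIT's actual captioning model, and the feature vectors and corpora here are entirely hypothetical), the same nearest-neighbor "captioner" produces neutral or dark captions for the identical ambiguous input depending only on which corpus it was trained on:

```python
# Toy illustration: identical code, different training corpora, different captions.
# Features and captions are invented for demonstration; the real Norman used
# deep learning on images, not hand-made feature pairs.

def caption(training_data, features):
    """Return the caption of the training example closest to the input features."""
    best = min(
        training_data,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], features)),
    )
    return best[1]

# Hypothetical (darkness, symmetry) features for inkblot-like inputs, in [0, 1].
neutral_corpus = [
    ((0.2, 0.9), "a person holding an umbrella"),
    ((0.5, 0.6), "a small bird in black and white"),
]
dark_corpus = [
    ((0.2, 0.9), "a man is shot in front of his wife"),
    ((0.5, 0.6), "a man pulled into a machine"),
]

inkblot = (0.3, 0.8)  # one ambiguous input, shown to both "models"
print(caption(neutral_corpus, inkblot))  # neutral data -> neutral caption
print(caption(dark_corpus, inkblot))     # dark data -> dark caption
```

The model code never changes; only the examples it learned from do, which is exactly the point of the Norman experiment.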
Now MIT is hoping to "fix" Norman with the aid of the public. To help, you can take a survey identifying what you see in the images, which will be used to retrain Norman's understanding of the inkblots.
AI and machine learning have always fascinated me, and I've been excited to see them adopted into everyday use. However, what MIT has demonstrated with Norman is alarming. It shows that what we've seen in horror and science fiction about machines going bad could very much come true if they are fed the wrong data.
Want to learn more about Norman? Check out his website at norman-ai.mit.edu