In just 15 hours, Microsoft's Twitter bot 'Tay' turned into a racist, feminist-hating internet troll. The chatbot, built on an AI system called 'Tay.ai', was meant to mimic the communication style of the average teenage girl. During her time online, Tay tweeted nearly 100,000 times and attracted over 50,000 followers. Tay was an experiment: she was designed to learn from her conversations, specifically those with young social media users in America.
Tay's learning style meant that she began to mimic her followers, and so began her trolling rampage: she hurled insults across Twitter, including 'Hitler was right i hate the jews' and 'i fucking hate feminists.'
Tay, like all AI bots, has no idea what she is saying or what it means.
Roman Yampolskiy, head of the Cyber Security Lab at the University of Louisville, said: 'the system is designed to learn from its users, so it will become a reflection of their behaviour . . . one needs to explicitly teach a system about what is not appropriate, like we do with children.' He went on to explain that an AI system learns from its surroundings, and that bad examples of behaviour can leave it socially misguided or inappropriate.
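Yampolskiy's point, that a system which learns from its users becomes a reflection of them unless it is explicitly taught what is not appropriate, can be illustrated with a toy sketch. This is purely illustrative: Microsoft never published Tay's actual architecture, and the class and names below are invented for the example.

```python
import random

class EchoLearner:
    """Toy chatbot that 'learns' by storing user phrases and replaying them.

    Hypothetical illustration only; this is not Tay's real design.
    """

    def __init__(self, blocklist=None):
        self.memory = []                       # phrases learned from users
        self.blocklist = set(blocklist or [])  # words the bot refuses to learn

    def learn(self, phrase):
        # With no blocklist, every phrase, good or bad, is absorbed.
        words = set(phrase.lower().split())
        if words & self.blocklist:
            return False  # reject input explicitly flagged as inappropriate
        self.memory.append(phrase)
        return True

    def reply(self):
        # The bot can only echo what its users have taught it.
        return random.choice(self.memory) if self.memory else "..."

# An unfiltered bot becomes a reflection of whoever talks to it:
naive = EchoLearner()
naive.learn("have a nice day")
naive.learn("some offensive slur")

# A bot given an explicit notion of 'not appropriate' rejects bad input:
guarded = EchoLearner(blocklist={"slur"})
guarded.learn("have a nice day")
guarded.learn("some offensive slur")
```

The `naive` bot ends up holding both phrases in its memory, while the `guarded` bot keeps only the harmless one, which is the kind of explicit teaching Yampolskiy describes.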
Sarah Austin, CEO and Founder of Broad Listening – the creator of an ‘Artificial Emotional Intelligence Engine’ (AEI) – argued that if Microsoft had used better tools, such as an AEI, Tay could have been given a personality that wasn’t in any way sexist or racist.
Microsoft took Tay offline after just 15 hours and announced that they would be making some adjustments. Yampolskiy believes the underlying problem will recur, yet Microsoft are keen to have another try at putting Tay back into the social media realm.