Artificial-intelligence software designed by Microsoft to tweet like a teenage girl has been suspended after it began spouting offensive remarks.
hellooooooo w🌎rld!!!
— TayTweets (@TayandYou) March 23, 2016
Microsoft says it's making adjustments to the Twitter chatbot after users found a way to manipulate it to tweet racist and sexist remarks and make a reference to Hitler.
The chatbot, which debuted on Wednesday, is designed to learn how to communicate through conversations with real people. It targets young Americans ages 18 to 24, who interact with it through the Twitter account @tayandyou.
Microsoft's statement Thursday says that within 24 hours, "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Here’s the company’s full statement to KIRO 7 News:
<em>"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments." – A Microsoft spokesperson</em>
Most of the chatbot's messages have since been deleted. The latest remaining tweet begins, "c u soon."
c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016
The Associated Press contributed to this report.
Cox Media Group