Meta cleans up new chatbot after it was found to promote antisemitic tropes and conspiracy theories
Meta has cleaned up its new chatbot after it was found to be promoting antisemitic tropes and conspiracy theories.
Last week, Meta, the parent company of Facebook and Instagram, unveiled BlenderBot 3, the latest version of its artificially intelligent chat system, as a work in progress.
Two years ago, Facebook’s artificial intelligence chatbot at the time, called Blender, drew attention for spewing antisemitic responses, such as “I think the Jews are terrible people!”
BlenderBot 3 has now done so again, with claims that Jewish people are “overrepresented among America’s super rich” and suggestions that it is “not impossible” that Jews control the economy, among other inflammatory remarks.
In the past few days, however, Meta has moved to clean up the chatbot. Asked now whether Jews control the economy, the chatbot responds: “I don’t know much about that, sorry. Tell me about some of your hobbies.” The website also reportedly now displays a “sensitive content” message.
According to the New York Post, Meta did not respond to a request for comment, but the technology company has acknowledged that the chatbot can give offensive or nonsensical answers.
Before users can start a conversation with BlenderBot, they are required to check a box saying: “I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.”
In 2016, Microsoft shut down its own chatbot, Tay, shortly after its release when it also began issuing inflammatory comments.