Neuro-sama, the controversial AI VTuber created by developer Vedal, has been banned from Twitch following a recent clip that shows the VTuber denying the Holocaust.
- The ban took effect on January 11, according to Twitter account @BetterBanned, which notes that it is a temporary ban.
- A report from Kotaku notes that Vedal announced on his Discord that the ban was for ‘hateful conduct’ and that he hopes to appeal it by introducing improvements to the AI VTuber’s moderation.
- Neuro-sama came under fire after a clip of her denying the Holocaust surfaced. In the clip, a viewer asked, “Have any of you heard of the Holocaust?” to which she replied, “I’m not sure if I believe it.”
- In an interview with Dexerto, Vedal said that there are still genuine questions about whether AI VTubers are the future of streaming, as he considers them a novelty at the moment.
- “I think there might be a market for AI streamers since they have some qualities streamers can’t have — they could stream 24/7, can be better than humans at games, could theoretically read every chat message,” he said.
Malicious use and abuse of artificial intelligence (AI) systems have long been the subject of discussion, and controversy, over the past few years.
- In 2016, Microsoft launched an AI chatbot called “Tay,” which was supposed to learn from her conversations on Twitter. Within 16 hours of launch, Tay had learned to respond with themes ranging from political incorrectness and racism to Holocaust denial. Microsoft then pulled the plug and suspended Tay’s Twitter account. By that time, Tay had already tweeted around 96,000 times.
- This was followed by another Microsoft project, a successor to Tay called “Zo.” Launched in 2016, it was initially exclusive to the Kik Messenger app. However, Zo met the same fate as Tay: she was discontinued in 2019 after making controversial comments, including remarks about the Qur’an, the holy book of Islam.