New tech allows AI to detect toxicity in voice chat, but I think humans might be too smart for it

It's not that I don't have faith in the tech; my issue is with the humans it's designed to catch out

Toxicity in games is no fun, and in this year of our lord 2020, there seems to be a growing trend of using artificial intelligence to find and deal with toxic players. I don’t just mean in text chat either; the companies Modulate and FaceIt have both created AI that can supposedly detect toxicity in voice chat from the way that someone says something.

Part of me feels like this is a good idea. Having a way of quickly and easily getting rid of toxic players is great. However, I've heard one too many stories about AI learning to be racist, so I do wonder if it's the best sort of tech to put in video games.

Last week, Modulate revealed a new AI-powered moderation tool called ToxMod. It uses machine learning models to understand both what players are saying and how they're saying it, to recognise if someone is being nasty in voice chat. The idea is, if someone says a swear, these AIs can tell if it's a mean swear or a well-meaning swear (think “Fuck you!” vs “Fuck yeah!”).
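Neither Modulate nor FaceIt have published how their models actually work, but as a rough illustration of the “mean swear vs well-meaning swear” idea, here's a toy sketch in Python. Everything in it - the word list, the loudness threshold, the scores - is invented for the example; a real system would use trained models over audio features, not hand-written rules.

```python
# A toy sketch of tone-aware toxicity scoring. ToxMod's real models aren't
# public, so every feature, word list and threshold here is invented purely
# to illustrate the idea: the same swear can score differently depending on
# who it's aimed at and how it's said.

PROFANITY = {"fuck", "shit"}  # placeholder word list


def toxicity_score(transcript: str, loudness_db: float) -> float:
    """Combine a crude text signal with a crude prosody signal.

    transcript  -- speech-to-text output for the voice clip
    loudness_db -- average loudness of the clip (a stand-in for real
                   prosody features like pitch, pace and stress)
    """
    words = [w.strip("!?.,").lower() for w in transcript.split()]
    if not any(w in PROFANITY for w in words):
        return 0.0
    score = 0.2  # base score for any profanity at all
    if any(w in {"you", "u"} for w in words):
        score += 0.5  # aimed at a person ("fuck you") reads as hostile
    if loudness_db > 75:
        score += 0.2  # shouting nudges the score up further
    return min(score, 1.0)


print(toxicity_score("Fuck you!", loudness_db=80))   # 0.9 - mean swear
print(toxicity_score("Fuck yeah!", loudness_db=80))  # 0.4 - well-meaning swear
```

The point of the sketch is just that the transcript alone isn't enough: the same word gets a different score depending on who it's aimed at and how it's delivered, which is exactly the judgment these systems are trying to automate.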

Similarly, FaceIt recently announced that their anti-toxicity AI admin system, Minerva, can now police voice chat too. They run a platform for third-party tournaments and leagues in games including CS:GO, Rocket League and Dota 2, and claim Minerva's already detected more than 1.9 million toxic messages and banned over 100,000 players on it. The big news is that Minerva is now able to analyse full conversations among players and detect potentially toxic voice chat, as well as annoying repetitive behaviours and sounds. It's impressive tech, for sure, but I can't help but wonder how well it would work were these sorts of AI more commonplace.
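FaceIt haven't detailed how Minerva spots “repetitive behaviours” either, but the simplest version of that idea is a sliding-window counter, something like this toy sketch (the window size and repeat threshold are made up for illustration):

```python
from collections import deque

# A toy sketch of flagging repetitive behaviour, loosely in the spirit of
# what FaceIt describe Minerva catching (spammy messages and sounds). The
# window size and repeat threshold are invented for illustration.


def make_spam_detector(window: int = 10, max_repeats: int = 3):
    recent = deque(maxlen=window)  # last few events from one player

    def check(event: str) -> bool:
        """Return True once the same event repeats too often in the window."""
        recent.append(event)
        return recent.count(event) > max_repeats

    return check


check = make_spam_detector()
for message in ["gg ez"] * 5:
    spammy = check(message)
print(spammy)  # True - "gg ez" repeated more than 3 times in the window
```

A real system would need to be fuzzier than exact matching - spammers vary their messages - but the basic shape, counting recent repeats per player, is the same.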

To preface this, I think my scepticism is mostly down to humans. With ToxMod's tech, if players know an AI is listening to their tone of voice, they could just say horrible things in a nice way. If we left everything to automoderation systems like this, there are plenty of smart yet dreadful people who could still pass as polite players and never get caught out.

That’s not to say I think all AI and machine learning is stupid or anything, but it is a bit like teaching an alien (or a toddler) how humans are supposed to act. There are a fair few examples of AI learning odd and straight-up bad behaviours. One of the most famous was Tay, the Microsoft chatbot that learned to be racist on Twitter (from people spamming it with racist stuff). A more serious case involved American software designed to perform risk assessments on prisoners, which wrongly labelled black people as likely reoffenders at twice the rate it did white people (a bias it picked up from the data it was given). Video games are, obviously, a lot less serious than that - but in a similar vein, I feel like there’s a possibility some sort of game AI could teach itself (or indeed, be taught) that certain accents or dialects sound more mean-spirited than others. I’m not saying all these cool AIs are going to end up racist - but! Historically, we’re not great at teaching them not to be.
