Intel hopes to clean up toxic in-game chatter with AI

Intel is applying machine learning to the problem of toxic in-game chatter

Chip-maker takes on griefers, trolls and general rudeness

21 March 2019

Anyone who has ventured into online gaming knows text chat can approach nuclear-waste levels of toxicity. But what happens when it all shifts to voice-based chat in the future? Intel says it can help. Or at least, it hopes it can.

The company said on Wednesday night it is working with Spirit AI on ways to use machine learning and artificial intelligence to curb the acidic speech gamers often fall back on during intense sessions. Spirit AI already offers a machine-learning tool developers can use to monitor forums and online text chat. Intel wants to help extend it to the voice chat that is increasingly used in gaming.

Neither company announced firm plans for when the technology might be deployed.

Intel said it could see the technology running in the cloud as well as on client PCs, phones, or consoles down the road, once the algorithms are trained. It would ultimately be up to each game developer to adopt it.

Intel officials acknowledged the extreme difficulty of this lofty goal. In-game voice chat is part of an audio soup that includes effects, music, and multiple people speaking simultaneously. Add in poor sound quality, and it seems like an impossible mission.

That doesn’t even factor in the larger concerns some are likely to have over how conversations are judged to contain hateful or abusive language, whether such tools might feel Orwellian to some, or whether they overstep the boundaries of free speech.

Intel officials did note that those decisions would be up to the developer. Many recognise that clamping down too hard could lead to a backlash. However, because most games are played on private platforms, not open public forums, expectations of privacy or free speech would largely not apply.

While the technology might one day become capable of determining in real time that someone needs to be muted for abusing other players or making sales pitches, Intel said it expects human intervention to be used first.

Rather than, say, a real-time bleeping of a player who bleeped too bleeping much, the chat conversation could be recorded and flagged for review by a person who would ultimately decide whether the ban hammer should fall.
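Neither Intel nor Spirit AI has published how such a pipeline would look in code, but the flow described above — score a chat transcript, and queue anything above a threshold for a human moderator rather than muting automatically — can be sketched roughly as follows. Every name here (`score_toxicity`, `ReviewQueue`, the threshold, the toy keyword list standing in for a trained model) is an illustrative assumption, not an actual Intel or Spirit AI API.

```python
# Hypothetical sketch of a flag-for-review moderation pipeline.
# A trained classifier would replace the toy keyword scorer below;
# the key design point is that nothing is auto-muted — flagged clips
# wait for a human decision.

from dataclasses import dataclass, field

FLAG_THRESHOLD = 0.8  # assumed cutoff: scores above this go to a reviewer

# Toy stand-in for a toxicity model: a fixed list of offending terms.
TOXIC_TERMS = {"idiot", "trash", "loser"}


def score_toxicity(transcript: str) -> float:
    """Return the fraction of words matching the toy toxic-term list."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / len(words)


@dataclass
class ReviewQueue:
    """Holds clips flagged for a human moderator to rule on."""
    pending: list = field(default_factory=list)

    def maybe_flag(self, player: str, transcript: str) -> bool:
        """Queue the clip for review if its score crosses the threshold."""
        score = score_toxicity(transcript)
        if score >= FLAG_THRESHOLD:
            self.pending.append((player, transcript, score))
            return True
        return False


queue = ReviewQueue()
queue.maybe_flag("player1", "nice shot, well played")   # below threshold
queue.maybe_flag("player2", "idiot trash loser idiot")  # flagged for review
print(len(queue.pending))  # 1
```

In this sketch the classifier only nominates clips; the "ban hammer" stays in human hands, matching the review-first approach the article describes.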

Gaming platforms where the players are mostly children might be policed as well.

Either way, it’s an intriguing idea.

IDG News Service
