Increasingly, the online world is moving toward automated moderation tools that can identify abusive words and behavior without the need for human intervention. Now, two researchers from Caltech, one an expert in artificial intelligence (AI) and the other a political scientist, are teaming up with Activision on a two-year research project that aims to create an AI that can detect abusive online behavior and help the company's support and moderation teams combat it. [Caltech story]
Written by Emily Velasco