Social media platforms are increasingly turning to artificial intelligence (AI) tools to monitor and detect hostile messaging, misinformation, and other threats. However, a recent report from the NATO Strategic Communications Centre of Excellence (StratCom COE) warns that these AI-based systems are too literal and struggle to detect subtle hostile messaging.
The report, released a week before the World Artificial Intelligence Cannes Festival (9-11 February), found that most AI-based tools rely on understanding the sentiment behind a message, which is far from a simple task. AI models are used to estimate the emotion expressed in posts and videos, but the experts point out that emotion is often conveyed more subtly than through simple words or phrases. For example, a user may post an image of a gun or use certain emojis to signal hostile intent, cues that most AI-based systems would fail to pick up.
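To make the "too literal" criticism concrete, here is a minimal, hypothetical sketch (not drawn from the report or from any platform's actual pipeline) of a keyword-based hostility scorer; the word list and example posts are invented for illustration. It flags explicit wording but scores posts that rely on emojis or sarcasm as harmless, which is the kind of blind spot the experts describe.

```python
# Toy illustration only: a literal, keyword-based "hostility" scorer.
# The term list and example posts are hypothetical, not from the report.

HOSTILE_TERMS = {"attack", "destroy", "kill", "traitor"}

def literal_hostility_score(text: str) -> float:
    """Return the fraction of tokens that match a fixed hostile-word list."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?\"'") in HOSTILE_TERMS)
    return hits / len(tokens)

posts = [
    "We will destroy them all",            # explicit wording: flagged
    "You know what to do \U0001F52B tonight",  # intent carried by a gun emoji: missed
    "Such a 'peaceful' gathering, right?",  # sarcasm/irony: missed
]

for post in posts:
    print(f"{literal_hostility_score(post):.2f}  {post}")
```

Only the first post receives a non-zero score; the other two sail through because nothing in their literal wording matches the list, even though a human moderator would likely read both as hostile.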
The report also noted that AI tools, while helpful, should not be seen as a “silver bullet” for solving the problem of hostile messaging on social media. Experts say a combination of AI-based systems and human moderation is necessary to ensure that all types of hostile messaging are monitored and addressed.
Overall, the StratCom COE report highlights the need for more sophisticated AI-based tools that can detect subtle hostile messaging on social media. Such tools would not only help protect platforms, companies, and governments from hostile messaging, but would also help protect users and foster a safer online environment.