Ofcom Report: AI is not ready to effectively moderate online content ‘for the foreseeable future’
Ofcom and Cambridge Consultants have teamed up on a report examining the effectiveness of AI-powered online content moderation.
Governments around the world have put increasing pressure on social networks and communication services to take responsibility for the content posted on them. The spread of fake news, live-streamed violence and sexual content, cyberbullying, and political manipulation is a disturbing trend with real-world consequences.
Large networks such as Facebook have responded by appointing thousands of people to moderate content. Still, the volume of incoming data is so vast that it is extremely difficult to rely on humans alone.
Ofcom and Cambridge Consultants’ report suggests that AI could help to reduce the psychological impact on human moderators in a few key ways:
- Varying the level and type of harmful content they are exposed to.
- Automatically blurring out parts of the content, which the moderator can optionally choose to view if required for a decision (see the sketch after this list).
- Letting moderators ‘ask’ the AI questions about the content in advance, to prepare themselves or to learn whether it is likely to be particularly difficult for them, perhaps due to past personal experiences.
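To make the blurring idea concrete, here is a minimal Python sketch (using OpenCV) of how flagged regions of an image might be redacted before a moderator sees it. The report does not describe an implementation; the bounding boxes below are hard-coded stand-ins for what an upstream classifier would produce.

```python
# Minimal sketch of the "blur flagged regions" idea, assuming an upstream
# model has already produced bounding boxes for potentially harmful areas.
# The hard-coded region list is a placeholder; OpenCV's blur is real.
import cv2

def blur_flagged_regions(image, flagged_regions, kernel=(51, 51)):
    """Blur each flagged (x, y, w, h) region so a moderator can opt in
    to viewing the original only if needed for a decision."""
    redacted = image.copy()
    for (x, y, w, h) in flagged_regions:
        roi = redacted[y:y + h, x:x + w]
        redacted[y:y + h, x:x + w] = cv2.GaussianBlur(roi, kernel, 0)
    return redacted

# Usage: in practice the regions would come from a detector; here they
# are hard-coded purely for illustration.
image = cv2.imread("upload.jpg")
flagged = [(120, 80, 200, 150)]  # (x, y, width, height) from a classifier
cv2.imwrite("upload_redacted.jpg", blur_flagged_regions(image, flagged))
```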
The slow pace of manual content moderation often means harmful content is seen by millions before it's taken down. AI is therefore essential for increasing the speed at which such content is caught.
Earlier this month, Facebook-owned Instagram unveiled improvements to an AI-powered moderation system it uses in a bid to prevent troublesome content from ever being posted. While previously restricted to comments, Instagram will now ask users “Are you sure you want to post this?” for any post it deems may cause distress to others.
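Instagram hasn't published how its system works, but the general pattern (score the draft post, then nudge rather than block when the score is high) might look roughly like the sketch below. The `toxicity_score` function is a toy stand-in for a real trained classifier, and the threshold is an illustrative assumption.

```python
# Hypothetical sketch of a pre-post "nudge" gate, loosely in the spirit of
# Instagram's prompt. toxicity_score() is a toy stand-in for whatever model
# the platform actually runs; the 0.8 threshold is an illustrative guess.
NUDGE_THRESHOLD = 0.8

def toxicity_score(text: str) -> float:
    """Toy scorer: a production system would use a trained classifier.
    Here we just measure the share of words on a tiny blocklist."""
    blocklist = {"idiot", "loser", "pathetic"}
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word.strip(".,!?") in blocklist)
    return min(1.0, 5 * hits / len(words))

def submit_post(text: str, user_confirmed: bool = False) -> str:
    """Nudge, rather than block, when a draft post looks likely to distress."""
    if toxicity_score(text) >= NUDGE_THRESHOLD and not user_confirmed:
        return "Are you sure you want to post this?"
    return "Posted."

print(submit_post("you absolute idiot"))        # nudges the user first
print(submit_post("you absolute idiot", True))  # user insisted; post goes up
```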
The report essentially determines that, for the foreseeable future, effective fully automated content moderation is not possible.
Among the chief reasons fully automated content moderation is problematic is that – while some harmful posts can be identified by analyzing the content alone – other content requires a full understanding of context. For example, the researchers note how regional and cultural differences in national laws and in what's socially acceptable are difficult for today's AI moderation solutions to account for, yet trivial for local human moderators.
Some types of content are also easier to analyze than others. Photos and pre-recorded videos can be analyzed before they're posted, whereas live streams pose a particular difficulty because what appears to be an innocent scene can turn harmful very quickly.
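That difficulty implies a different moderation pattern for live content: instead of a one-off check at upload time, sampled frames have to be scored continuously while the stream runs. A minimal sketch of that loop, with `score_frame` and `alert_human` as assumed placeholder callbacks rather than any real API, might look like this:

```python
# Sketch of continuous live-stream screening. An uploaded photo or video can
# be scored once before publication, but a stream has to be re-checked as it
# runs, because an innocent scene can turn harmful mid-broadcast.
# score_frame() and alert_human() are assumed placeholders, not a real API.
import time

SAMPLE_INTERVAL_S = 2.0  # illustrative: how often to sample a frame
ESCALATE_AT = 0.9        # illustrative: classifier confidence threshold

def moderate_stream(frame_source, score_frame, alert_human):
    """Poll frames from a live stream and escalate to a human moderator as
    soon as any sampled frame looks harmful."""
    for frame in frame_source:
        if score_frame(frame) >= ESCALATE_AT:
            alert_human(frame)  # hand off for a contextual human decision
        time.sleep(SAMPLE_INTERVAL_S)
```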
“Human moderators will continue to be required to review highly contextual, nuanced content,” says Cambridge Consultants’ report. “However, AI-based content moderation systems can reduce the need for human moderation and reduce the impact on them of viewing harmful content.”
You can find a copy of the full report here.
By Srini