The Trouble with Traditional Content Moderation
As society embraced the information age, digital channels and user-generated content exploded. In response, organizations had to find new ways to cut through the noise and establish highly personalized conversations with online audiences.
Stories surfaced in every shape and size, and the production and management of content grew more complex by the day. Content creation became a 24/7 endeavor, with volume increasing tenfold over the past few years alone.¹
Naturally, managing and storing all this content became quite a feat. On Twitter alone, around 500 million tweets are sent each day, roughly 6,000 every second.² That is a lot to keep up with, and even more to somehow organize, filter, and moderate.
The nature of the content itself only compounded the problem. In the digital realm, millions of voices use social and digital media to share their two cents. While this enables unique storytelling, widespread knowledge-sharing, and stronger connections, it can also spread contentious and harmful messages, or even misinformation.
In fact, George Washington University researchers found that ten predominantly “fake news” and conspiracy outlets were responsible for 65% of tweets linking to such stories.³
This is precisely why content moderation is so vital. Content moderation is effectively an act of public good: moderators are like firefighters or first responders, tasked with shielding members of society from harm. To root out this negativity, moderators must sift through a diverse spectrum of content, the good, the bad, and the ugly. Both the type of material and the sheer volume of it are often overwhelming, which takes a toll on moderator happiness and well-being. As moderators face an ever-growing mountain of complex content and organizations struggle to protect them and their users, companies must start looking beyond traditional methods of content moderation.