Twitter's New Audio Feature Reignites the Content Moderation Challenge

JUNE 29, 2020

Social media is often where people go to raise their voice, to have a say about something. 

Only now, that’s true in a more literal sense. A few weeks ago, Twitter unveiled a new voice feature that lets users record and publish up to 140 seconds of audio instead of text.

Cool, right?

Well, it’s complicated. While social savants find their voice on the popular platform, this announcement also raised an important question – how are moderators going to manage and flag inappropriate content in this new medium? And is this only compounding the existing 24/7 struggle of online content moderation? 

Let’s take a look.

Add Voice to the Laundry List

Content moderation is a must for today’s social media companies and online publishers. It’s also a herculean challenge.

If organizations want users to essentially live on their platforms, they must ensure those platforms are safe places to inhabit. That’s where content moderators come in.

Content moderators face a mountain of obstacles in their day-to-day work. They’re tasked with manually reviewing a never-ending stream of user-generated content, as well as filtering malicious and inaccurate social media posts. This is both time-consuming and cumbersome.

Twitter's new voice feature, while entertaining and engaging, adds to the laundry list of complexities companies face when it comes to moderation. 
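
To make the new challenge concrete: a common industry approach to moderating audio is to transcribe the speech first, then reuse the existing text-moderation pipeline on the transcript. The Python sketch below illustrates that flow only; the transcribe and screen_text functions are invented stubs, not Twitter’s or Sutherland’s actual systems.

```python
# Illustrative sketch only: neither stub reflects any real platform's
# systems. A common pattern for moderating audio is to transcribe the
# speech first, then reuse the text-moderation pipeline on the transcript.

def transcribe(audio_bytes: bytes) -> str:
    """Hypothetical speech-to-text step. A real system would call an
    automatic speech recognition (ASR) model; stubbed here for the demo."""
    return "example transcript of a voice tweet"

def screen_text(transcript: str) -> bool:
    """Hypothetical text-moderation step. A real system would call a
    trained classifier; a trivial keyword check stands in here."""
    banned = ("phishing", "buy followers")
    return any(term in transcript.lower() for term in banned)

def needs_human_review(audio_bytes: bytes) -> bool:
    """Route a voice post: audio -> transcript -> text screening."""
    return screen_text(transcribe(audio_bytes))

print(needs_human_review(b"\x00fake-audio-bytes"))  # False for the stubbed transcript
```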

However, Twitter is not the only company grappling with this challenge. Facebook, Google, and other tech giants have experienced their own bouts of moderation woes. With platform growth and expansion comes more content volume and complexity, which requires greater – and more nuanced – moderation. 

This conundrum has organizations scratching their heads. How do you keep users safe without inhibiting platform growth? And how do you help moderators keep up with the influx of content while safeguarding their own well-being? Enter AI.

Dream Team: Human and Machine

Artificial Intelligence (AI) is the helping hand content moderators need to navigate today’s complex digital landscape. AI-based platforms continuously analyze content and learn to recognize the patterns, words, and signals associated with policy violations, making the moderation process faster and more accurate with each pass.
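
As a rough illustration of what that pattern recognition looks like at its very simplest, here is a minimal Python sketch of a rule-based pre-screen. The patterns and labels are invented for the example; real platforms learn these signals from large volumes of labeled data rather than from a hand-written list.

```python
import re
from dataclasses import dataclass

# Invented rules for illustration; production systems learn these signals
# from labeled data rather than relying on a hand-written pattern list.
FLAGGED_PATTERNS = [
    (re.compile(r"\bphishing\b", re.IGNORECASE), "fraud"),
    (re.compile(r"\bbuy followers\b", re.IGNORECASE), "spam"),
]

@dataclass
class ScreenResult:
    flagged: bool
    labels: list[str]

def pre_screen(text: str) -> ScreenResult:
    """Return which rule categories, if any, a post trips."""
    labels = [label for pattern, label in FLAGGED_PATTERNS if pattern.search(text)]
    return ScreenResult(flagged=bool(labels), labels=labels)

print(pre_screen("Guaranteed way to buy followers fast!"))
# ScreenResult(flagged=True, labels=['spam'])
```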

That said, AI cannot stand alone in moderating content and requires the additional context – including societal, cultural, and political factors – that human moderators provide. By pairing AI with human empathy and situational thinking, the two become the dream team for moderation. Together, they safely, accurately, and effectively vet high volumes of multimedia content. 
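
One common way to implement that pairing is confidence-based triage: the model acts on its own only in clear-cut cases and routes anything ambiguous to a person. The Python sketch below is a simplified illustration; the threshold values and the violation_score input are invented for the example, not drawn from any real platform.

```python
from enum import Enum

class Route(Enum):
    AUTO_REMOVE = "auto_remove"    # model is highly confident the post violates policy
    HUMAN_REVIEW = "human_review"  # ambiguous case: needs human context and judgment
    AUTO_APPROVE = "auto_approve"  # model is highly confident the post is safe

# Threshold values are illustrative; real systems tune them per policy area.
REMOVE_THRESHOLD = 0.95
APPROVE_THRESHOLD = 0.05

def triage(violation_score: float) -> Route:
    """Route a post using a model's estimated probability of a violation."""
    if violation_score >= REMOVE_THRESHOLD:
        return Route.AUTO_REMOVE
    if violation_score <= APPROVE_THRESHOLD:
        return Route.AUTO_APPROVE
    return Route.HUMAN_REVIEW

for score in (0.99, 0.40, 0.01):
    print(f"{score:.2f} -> {triage(score).value}")
```

The middle band is deliberately wide: it covers the ambiguous cases where the societal, cultural, and political context described above matters most.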

This balance is the backbone of Sutherland’s Content Moderation solution, which blends the strengths of humans and AI to moderate content at scale, creating safe and trustworthy online environments for organizations and their communities. The solution also makes moderator safety a top priority: our proprietary Happiness and Social Indexes monitor well-being and protect moderators, empowering them to do their jobs safely and effectively.

In the digital age, the only certainty is that innovation – such as Twitter’s voice recording feature – will create new experiences that unearth new challenges. It will take the same level of innovation to keep those challenges in check. By adapting to and overcoming these obstacles, organizations will engineer experiences for the future.

Sutherland Editorial
