Balancing AI and Humans to Combat Misinformation

MARCH 25, 2021

Never before has the internet — and the information we consume — been in such a considerable state of flux. We’re facing a true crisis of misinformation, where fake news is 70 percent more likely to be retweeted than true stories. The consequences of this cannot be overstated, from affecting public perceptions of reality to shaping human emotions, distorting election campaigns or even leading to violence in more extreme cases.

As the sheer amount of content, including fake content, continues to rise each day, many sources of information have also become more polarized and contentious, resulting in a precipitous decline in public trust. It therefore comes as no surprise that 86 percent of American users believe online content contains some form of misinformation. With rumors running rampant amidst events like COVID-19, organizations are under mounting public pressure to dispel false information. 

The need to fight back against false content is undisputed. However, an increasingly loud debate exists around how best to address this issue and maintain truth in society.

To Machine or Not to Machine?

Many organizations still rely on traditional methods of content moderation, where human moderators are tasked with painstakingly trawling through endless amounts of content. Oftentimes, this takes a psychological toll, exposing human moderators to problematic content that contains everything from hate speech to excessive violence.

In response, many have hailed artificial intelligence (AI) as a viable alternative for tackling misinformation — particularly as many human moderators have been sent home due to the pandemic. By leveraging deep-learning AI algorithms, organizations can rate content for authenticity and flag pieces that are likely inaccurate. In one example, researchers from UC Berkeley and the University of Southern California developed an AI-based tool that is at least 92 percent accurate in spotting deepfakes.

AI- and machine learning-based tools are also able to continuously analyze and learn to recognize certain patterns of content and words through automated cognitive intelligence, making the process quicker and smarter over time. 
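As a rough illustration of this kind of pattern learning, the Python sketch below trains a simple text classifier incrementally, so it can be updated as new human-labeled examples arrive. The model choice, label scheme and sample data are assumptions for illustration only, not a description of any specific vendor's system.

```python
# A minimal sketch (not any particular product) of a misinformation classifier
# that keeps learning new word patterns as human-labeled batches arrive.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss")  # log loss enables probability estimates and partial_fit

def train_batch(texts, labels, first_batch=False):
    """Update the model with a new batch of human-labeled posts (0 = credible, 1 = suspect)."""
    X = vectorizer.transform(texts)
    if first_batch:
        model.partial_fit(X, labels, classes=[0, 1])
    else:
        model.partial_fit(X, labels)

def suspicion_score(text):
    """Return the model's estimated probability that a post is misinformation."""
    return float(model.predict_proba(vectorizer.transform([text]))[0, 1])

# Illustrative usage with placeholder data.
train_batch(
    ["miracle cure doctors don't want you to know", "city council passes annual budget"],
    [1, 0],
    first_batch=True,
)
print(suspicion_score("this one weird trick cures everything"))
```

Because the vectorizer is stateless and the classifier supports incremental updates, the same loop can keep running as moderators label fresh content, which is how such systems get quicker and smarter over time.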

Yet, AI is far from the be-all and end-all of content moderation, as some would have you believe. While AI can quickly monitor content and reduce human exposure to harmful material, it comes with its own pitfalls: automated tools are, frankly, still not smart enough and fall short of human discernment.

For instance, while AI tools excel at reviewing images, they often struggle to understand more nuanced text and video content. Fake content in particular may not be blatantly offensive or violent, so a deeper understanding of context and intent is required. Content containing foreign languages or slang is also an entirely different ballgame.

Finding the Sweet Spot

The interpretation of content therefore requires additional societal, cultural and political context — the kind that only human moderators can provide. Furthermore, human moderators are able to offer much-needed empathy and insight into the broader landscape: how individuals are feeling and how they may react to a piece of content.

Relying solely on human moderators or AI tools alone clearly does not suffice. So, how should humans and machines work together?

As quick action is essential to keep up with the rate of user-generated content, a more effective approach involves first tapping the unparalleled speed offered by AI. Leveraging AI as a first layer of defense, organizations can swiftly identify large quantities of inaccurate content across multiple channels and remove content that is outright harmful, fake or in violation of predetermined guidelines. For content that requires additional screening, AI tools can flag and prioritize items for further human review to ensure they abide by company standards.
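To make this first-layer triage concrete, here is a minimal Python sketch that routes each item based on a model's harm score: outright violations are removed automatically, borderline items are queued for human review, and the rest pass through. The score_harm function, thresholds and Decision structure are assumptions for illustration, not any particular platform's API.

```python
# A minimal sketch of two-tier triage: AI scores first, humans review the borderline cases.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    score: float

def triage(item: str,
           score_harm: Callable[[str], float],
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> Decision:
    """Route one piece of content based on the model's harm/fakeness score."""
    score = score_harm(item)
    if score >= remove_threshold:
        return Decision("remove", score)        # outright violation: take down automatically
    if score >= review_threshold:
        return Decision("human_review", score)  # flag and prioritize for a moderator
    return Decision("allow", score)             # passes the first AI layer

# Illustrative usage with a stand-in scoring function.
fake_score = lambda text: 0.72
print(triage("example post", fake_score))  # -> Decision(action='human_review', score=0.72)
```

The thresholds would in practice be tuned per channel and per policy, which is exactly where the predetermined guidelines mentioned above come in.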

Moreover, to reduce the amount of harmful content that human moderators have to review, organizations can apply visual question answering, a technique that allows moderators to ask the AI platform a series of questions to determine how harmful a piece of content is without having to view it directly. This greatly reduces the psychological strain of the process and increases productivity in the long run.
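A rough sketch of how such a question-driven screen might look is below. The answer_question function stands in for whichever visual question answering model an organization deploys, and the screening questions are illustrative assumptions.

```python
# A minimal sketch of the VQA workflow: the moderator reads answers, not the image.
from typing import Callable

SCREENING_QUESTIONS = [
    "Does the image contain graphic violence?",
    "Does the image contain hate symbols?",
    "Does the image appear to be digitally manipulated?",
]

def screen_without_viewing(image_path: str,
                           answer_question: Callable[[str, str], str]) -> dict:
    """Ask the VQA model each screening question and return a text-only report."""
    report = {q: answer_question(image_path, q) for q in SCREENING_QUESTIONS}
    # Escalate to direct viewing only if any answer indicates harm.
    report["needs_direct_review"] = any(a.lower().startswith("yes") for a in report.values())
    return report

# Illustrative usage with a stand-in model that always answers "no".
print(screen_without_viewing("flagged_image.jpg", lambda img, q: "no"))
```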

Developing Content “Guardians”

At the same time, the work doesn’t stop at simply pairing humans with AI. While understanding how the two work in tandem is critical, organizations also need to dedicate the right resources and equip moderators with appropriate mental and emotional support.

This must start right at the recruiting and hiring phase. Content moderation, at its core, is about the need to protect the public good and shield the public from harmful and fake information. That means that certain personality types — those drawn to public-good and “protector” roles — may also be better suited for content moderation.

We should think of human moderators as content “guardians” who protect the internet, and focus on finding these unique individuals from the start through psychological assessment and indexing. By creating social profiles of candidates and prioritizing training and development, organizations can determine those who will be best suited for the work and nurture their strengths to place them in the right roles.

Once they’re on the job, working conditions are a significant factor in the challenges human moderators face on a daily basis. As such, organizations need to design and implement wellness programs with specialized offerings, from counseling to individual coaches who provide mental health support. Advanced analytics can even be helpful in assessing and understanding the state of human moderators while on the floor.

Human-Centric, High-Tech

While work remains to be done, hope is far from lost in achieving effective content moderation. For any organization developing a content moderation strategy, it pays to err on the side of caution, and an approach built on a synergistic relationship between humans and AI is the best defense against fake news and misinformation.

Even in the not-too-distant future, when AI algorithms become advanced enough to account for a far broader range of content and context, human oversight will remain critical for providing expertise and complex decision-making. That’s where a multi-dimensional approach comes in — one that strikes the right balance of human and machine, but also equips each to perform to its full potential.

Sutherland Editorial
