Is It Even Possible to Dam the Flow of Misleading Content Online?


As a polarizing US presidential election nears, moderating controversial content on social media poses a pressing problem for tech giants.

But no matter how many employees they hire, lines of code they write, or new content policies they implement, major platforms face an overwhelming task. In the second quarter of 2023 alone, for example, Meta, Facebook’s parent company, took action on 13.6 million pieces of terrorism-related content and 1.1 million posts on organized hate.


A new study suggests that it’s time for platforms to admit defeat when it comes to trying to flag, edit, or block misinformation on their sites. Instead, they should focus on curtailing specific information that can lead to direct harm, such as hate speech or the public sharing of home addresses.

“Content moderation is simply not a good way to counter information that creates false beliefs,” says Scott Duke Kominers, the Sarofim-Rock Professor of Business Administration at Harvard Business School, who coauthored the new analysis. “But it is a very powerful tool for addressing information that enables harm.”

Kominers worked with Jesse Shapiro, the George Gund Professor of Economics and Business Administration at Harvard Business School, to study how content moderation works best.

The debate over moderating content comes as the US presidential race heats up and pressure builds on social media companies to limit the false information flowing across their platforms; the specter of the January 6, 2021, attack on the US Capitol hangs heavy over the debate. But Kominers and Shapiro say their findings go beyond this year’s elections and have far-reaching implications for businesses dealing with the thorny issue of moderating controversial content of all kinds.

How much do users trust tech companies?

The coauthors relied on a mathematical model to study strategic interactions, testing outcomes under different scenarios and rules.

“It’s difficult to do purely empirical analysis in this setting, so the fundamental reason why we use theory here is that we’re trying to sort out all the possible rules you could ever try to use,” says Shapiro. “We’re reasoning about policies that have never been tested.”
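
To give a sense of what such a model can look like, here is a highly stylized sender-receiver sketch in standard game-theory notation; it is an illustration of the general setting, not the authors’ actual model, and the notation is not taken from their paper.

\[
\begin{aligned}
&\text{Nature draws a state } \theta \in \Theta \text{ (for example, whether a claim is true).}\\
&\text{A sender observes } \theta \text{ and chooses a message } m \in M.\\
&\text{The platform applies a moderation rule } \sigma : M \to \{\text{allow}, \text{block}\}.\\
&\text{A receiver observes the moderated feed, forms a posterior belief } \mu(\theta \mid \cdot\,; \sigma),\\
&\text{and takes an action } a \text{ that determines everyone’s payoffs, including any harm.}
\end{aligned}
\]

The sketch makes one point from the article concrete: because the receiver also conditions on the moderation rule \(\sigma\) itself, the act of blocking is informative in its own right, which is why deleting content does not necessarily move beliefs in the direction the platform intends.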

One of the first findings was that transparent moderation, in which a platform makes clear what its content policies are and how it will enforce them, works but is difficult to achieve. If users trust the platform to know the right thing to do and it consistently does that right thing, companies can eliminate a lot of misinformation without much controversy.

The problem is that, in practice, many users don’t trust platforms to do the right thing, often seeing them as biased actors. If social media companies published their policies in full, bad actors could more easily circumvent and exploit their algorithms. And fact-checking operations, even when transparent, are subject to human error and bias.

Recognizing these limits, the authors focus most of their attention on “opaque” policies, where the precise mechanisms and processes of content moderation are not fully transparent to users.

Two types of content

While examining interactions between “senders” and “receivers,” Kominers and Shapiro observed that moderators need to distinguish between information that leads receivers to form false beliefs and information that enables receivers to harm others.

“The existence of two different categories of information is one of the core discoveries of our project,” says Kominers.

Referred to as a “key” in the model, information leading to actual harm often contains specific details that receivers can act upon, such as a sender posting where and when receivers can gather to take part in potentially troublesome activities. By contrast, general information that just shapes beliefs is hard to moderate effectively.

  • Specific: For example, the home address scenario. It’s one thing for a “sender” to call an official “corrupt.” But if that sender shares something specific and actionable, such as the official’s home address (a “key”), a tech company can reduce harm by blocking the address. Users might think the social media company is biased, but its action still reduces the potential for a bad outcome.
  • General: For example, “don’t eat GMOs.” With a more general claim, such as a post asserting that genetically modified foods are dangerous, tech companies have fewer effective moderation options. That’s because if they determine that the information is false and block it, “receivers” might just interpret the block as a sign that the platform is beholden to agribusiness, inadvertently fanning the flames of the debate (a rough calculation sketched after this list shows why).
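
To see concretely why blocking a general claim can backfire, here is a back-of-the-envelope Bayesian calculation; the setup and symbols are illustrative assumptions, not taken from the paper. Suppose a receiver believes the platform is unbiased with probability \(q\) (it blocks only claims it has verified to be false) and biased with probability \(1-q\) (it blocks the claim regardless of its truth), and holds a prior belief \(\pi\) that the claim is true. After seeing the post blocked,

\[
\Pr(\text{claim true} \mid \text{blocked}) = \frac{(1-q)\,\pi}{(1-q)\,\pi + (1-\pi)},
\qquad
\Pr(\text{platform biased} \mid \text{blocked}) = \frac{1-q}{(1-q) + q\,(1-\pi)}.
\]

When trust \(q\) is low, the first expression stays close to the prior \(\pi\), so the block barely changes what the receiver believes about the claim; meanwhile the second expression is larger than \(1-q\) (whenever \(0 < q < 1\) and \(\pi > 0\)), so the block leaves the platform looking more biased than before. Blocking a “key” such as a home address works differently: it removes the receiver’s ability to act, whatever the receiver believes.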

“Finding that the effectiveness of content moderation can vary so widely across information types was another core discovery,” says Kominers.

Far-reaching implications for businesses

Due to human nature and the sheer amount of misinformation expressed on platforms, “it’s not surprising that moderation has proven so difficult and controversial,” Kominers says. “We think our findings help make it possible for platforms to focus more on preventing information that directly enables harm. That’s where the strongest opportunity lies for really improving moderation.”


That applies beyond tech companies, the authors say.

“There’s a tremendous number of areas today where we’re seeing businesses intermediate communication—and they’re being asked to decide what is and isn’t OK to transmit,” says Shapiro. “It’s simply not a good idea to ask businesses to accomplish things they can’t accomplish. Our findings show that there are some clear boundaries between things that moderating can address very effectively and other things that moderation is just not well suited to address.”





