    Can crowdsourced fact-checking curb misinformation on social media?

By Techurz | May 19, 2025

While Community Notes has the potential to be extremely effective, the difficult job of content moderation benefits from a mix of different approaches. As a professor of natural language processing at MBZUAI, I've spent most of my career researching disinformation, propaganda, and fake news online. So one of the first questions I asked myself was: will replacing human fact-checkers with crowdsourced Community Notes have negative impacts on users?

    Wisdom of crowds

Community Notes got its start on Twitter as Birdwatch. It's a crowdsourced feature where users who participate in the program can add context and clarification to tweets they deem false or misleading. The notes are hidden until community evaluation reaches a consensus, meaning that people who hold different perspectives and political views agree that a post is misleading. An algorithm determines when the threshold for consensus is reached; the note then becomes publicly visible beneath the tweet in question, providing additional context to help users make informed judgments about its content.
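To make the consensus requirement concrete, here is a minimal, hypothetical Python sketch of a bridging-style publication rule: a note only goes live when raters from different viewpoint clusters independently find it helpful. The thresholds, cluster labels, and function names are illustrative assumptions; X's actual scoring algorithm is more sophisticated than this.

```python
from collections import defaultdict

# Hypothetical sketch: publish a note only if raters from different
# viewpoint clusters independently agree it is helpful.
MIN_RATERS_PER_CLUSTER = 5   # assumed minimum participation per cluster
MIN_HELPFUL_RATIO = 0.7      # assumed agreement threshold within a cluster

def should_publish(ratings):
    """ratings: list of (viewpoint_cluster, is_helpful) tuples."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, is_helpful in ratings:
        total[cluster] += 1
        helpful[cluster] += int(is_helpful)

    # Require at least two distinct clusters to clear the bar on their own,
    # so agreement "bridges" differing perspectives rather than one side piling on.
    agreeing_clusters = [
        c for c in total
        if total[c] >= MIN_RATERS_PER_CLUSTER
        and helpful[c] / total[c] >= MIN_HELPFUL_RATIO
    ]
    return len(agreeing_clusters) >= 2

# Example: raters from clusters "A" and "B" both find the note helpful.
ratings = [("A", True)] * 6 + [("B", True)] * 5 + [("B", False)]
print(should_publish(ratings))  # True
```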

Community Notes seems to work rather well. A team of researchers from the University of Illinois Urbana-Champaign and the University of Rochester found that X's Community Notes program can reduce the spread of misinformation and even prompt authors to retract their posts. Facebook is now largely adopting the same approach that X uses today.

Having studied and written about content moderation for years, I'm glad to see another major social media company implement crowdsourcing for content moderation. If it works for Meta, it could be a true game-changer for the more than 3 billion people who use the company's products every day.

That said, content moderation is a complex problem. There is no single silver bullet that will work in all situations. The challenge can only be addressed by employing a variety of tools, including human fact-checkers, crowdsourcing, and algorithmic filtering. Each of these is best suited to different kinds of content, and they can and must work in concert.

    Spam and LLM safety

There are precedents for addressing similar problems. Decades ago, spam email was a much bigger problem than it is today, and in large part we have defeated it through crowdsourcing. Email providers introduced reporting features that let users flag suspicious emails. The more widely a particular spam message is distributed, the more likely it is to be caught, because more people see and report it.
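As a rough illustration of that dynamic, here is a hypothetical sketch of crowdsourced spam flagging: once enough distinct users report messages with the same content fingerprint, later copies are filtered automatically. The threshold, the exact-match hashing, and the function names are assumptions for illustration, not any real provider's system.

```python
import hashlib

# Hypothetical sketch of crowdsourced spam flagging. Widely distributed spam
# crosses the reporting threshold quickly because more recipients report it.
REPORT_THRESHOLD = 100

reports = {}  # content fingerprint -> set of reporting user ids

def fingerprint(message_body: str) -> str:
    # Naive exact-match fingerprint; real systems use fuzzier signatures.
    return hashlib.sha256(message_body.strip().lower().encode()).hexdigest()

def report_spam(user_id: str, message_body: str) -> None:
    reports.setdefault(fingerprint(message_body), set()).add(user_id)

def is_spam(message_body: str) -> bool:
    return len(reports.get(fingerprint(message_body), set())) >= REPORT_THRESHOLD

# A wide-reaching campaign gets reported by many distinct users.
for i in range(150):
    report_spam(f"user{i}", "You have won a prize! Click here.")
print(is_spam("You have won a prize! Click here."))  # True
```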

Another useful comparison is how large language models (LLMs) approach harmful content. For the most dangerous queries (related to weapons or violence, for example), many LLMs simply refuse to answer. In other cases, these systems may add a disclaimer to their outputs, such as when they are asked to provide medical, legal, or financial advice. This tiered approach is one that my colleagues and I at MBZUAI explored in a recent study, in which we propose a hierarchy of ways LLMs can respond to different kinds of potentially harmful queries. Similarly, social media platforms can benefit from different approaches to content moderation.
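A hypothetical sketch of such a tiered policy might look like the following; the category names, topic labels, and disclaimer wording are illustrative assumptions, not the hierarchy proposed in the study. In practice the topic would come from a safety classifier rather than a hand-supplied label.

```python
# Hypothetical sketch of a tiered response policy: refuse the most dangerous
# queries, answer with a disclaimer for sensitive professional topics, and
# answer normally otherwise. All labels and wording here are assumptions.
REFUSE_TOPICS = {"weapons", "violence"}
DISCLAIMER_TOPICS = {"medical", "legal", "financial"}

def respond(query_topic: str, draft_answer: str) -> str:
    if query_topic in REFUSE_TOPICS:
        return "I can't help with that request."
    if query_topic in DISCLAIMER_TOPICS:
        return ("Note: this is general information, not professional advice.\n"
                + draft_answer)
    return draft_answer

print(respond("medical", "Rest and fluids usually help with a mild cold."))
```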

    Automatic filters can be used to identify the most dangerous information, preventing users from seeing and sharing it. These automated systems are fast, but they can only be used for certain kinds of content because they aren’t capable of the nuance required for most content moderation.
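One way to picture this division of labor is a filter that acts only on the clearest-cut cases and defers everything else to slower, more nuanced review by people or crowdsourcing. The threshold and routing labels below are assumptions for illustration, not a description of any platform's pipeline.

```python
# Hypothetical sketch: block automatically only when a classifier is highly
# confident the content falls in the most dangerous categories; route the
# rest to human or crowdsourced review, where nuance is possible.
BLOCK_THRESHOLD = 0.98  # assumed high-precision cutoff

def route(post_text: str, danger_score: float) -> str:
    """danger_score would come from a trained classifier; it is an input here."""
    if danger_score >= BLOCK_THRESHOLD:
        return "block"   # fast automated removal for the clearest cases
    return "review"      # nuanced cases go to people, not the filter

print(route("example post", danger_score=0.99))  # "block"
print(route("example post", danger_score=0.40))  # "review"
```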
