Project Details
Community-Led Analysis and Reporting to Improve Trust and Transparency
Applicant
Professor Dr. Nicolas Pröllochs
Subject Area
Data Management, Data-Intensive Systems, Computer Science Methods in Business Informatics
Term
since 2026
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 570600893
Community-led fact-checking on social media is an emerging approach that seeks to reduce reliance on external fact-checking organizations by harnessing the collective intelligence of users. In this model, users contribute fact-checking comments and vote on the accuracy of others' assessments. The most helpful contributions are displayed alongside the original post, providing readers with additional context to inform their opinions. To date, this approach has been fully implemented on only one major platform: Community Notes on X. However, other social media companies, such as Meta and YouTube, are actively exploring the integration of community fact-checking features on their platforms.

While early studies highlight the potential of Community Notes to curb the spread of misinformation and indicate a high level of user trust, critical questions remain. In particular, the long-term reliability of this approach is not yet established. Concerns have also been raised about its resilience to manipulation (e.g., coordinated attempts to label false information as true under the guise of fact-checks). Another critical issue concerns transparency: how should fact-checking results be presented so that users understand and trust the process, while also being encouraged to think critically?

The CLARITY project addresses these open challenges from an algorithmic and user-centered perspective. It will investigate effective strategies for implementing community-led fact-checking systems, develop approaches to ensure transparency, and identify best practices for presenting fact-checks in ways that foster user trust and critical engagement. Additionally, it will explore the limits and broader applications of this approach, including its potential use in areas beyond misinformation, such as hate speech detection.
DFG Programme
Research Grants
International Connection
Luxembourg
Cooperation Partner
Professor Gabriele Lenzini, Ph.D.