Longterm Wiki
Updated 2026-03-13
Summary

X.com presents a deeply mixed epistemic profile. Community Notes demonstrates genuine innovation in crowdsourced fact-checking, reducing repost virality by 46% and encouraging voluntary retraction of misleading posts. However, the platform's engagement-driven algorithm systematically amplifies emotionally charged and low-credibility content, API restrictions have destroyed independent research access, verification changes have degraded trust signals, and the platform owner's personal misinformation has generated over 2 billion views. The net epistemic impact is substantially negative.


X.com Platform Epistemics


Overview

X.com (formerly Twitter) occupies a unique position in the global information ecosystem. With approximately 550 million monthly active users,1 it remains one of the primary platforms for real-time news dissemination, political discourse, and public accountability. Since Elon Musk's acquisition in October 2022,2 the platform has undergone sweeping changes affecting nearly every dimension of its epistemic function.

The platform's epistemic profile is deeply contradictory. On one hand, X hosts Community Notes, a genuinely innovative crowdsourced fact-checking system that reportedly reduces misinformation virality when notes display.3 On the other hand, the platform's engagement-driven algorithm has been found to systematically amplify emotionally charged and low-credibility content,4 API restrictions have ended numerous academic research projects,5 and the platform owner himself has generated over 2 billion views on false or misleading election claims, according to reporting by TechCrunch.6 The net effect is a platform where isolated epistemic innovations coexist with structural features that degrade information quality at scale.

Quick Assessment

Dimension | Assessment | Evidence
X Community Notes effectiveness | Medium-High | Reportedly reduces reposts by ≈46% when notes display,1 but only an estimated 8–10% of submitted notes reach public visibility2
Algorithm transparency | Low-Medium | Partially open-sourced in 2023–2024, but reportedly not kept current; practical transparency limited
Content moderation | Low | According to some researchers, hate speech increased ≈50% post-acquisition;3 trust and safety staff reportedly cut by up to 80%4
Research access | Very Low | Free API eliminated 2023; reportedly 100+ studies canceled; legal threats against researchers documented5
Verification integrity | Low | Pay-for-checkmark system degraded trust signals; impersonation cases publicly demonstrated
Link/source sharing | Low | Reportedly a 30–50% algorithmic penalty on external links discourages sourcing6
AI integration (Grok) | Very Low | Documented misinformation, deepfake generation, and ideological prompt manipulation reported by journalists
Owner conduct | Very Low | According to some sources, 87 false election-related claims accumulated 2B+ views;7 attacks on journalists documented
Real-time information | Medium | Still valuable for breaking events, but increasingly compromised by bots and misinformation
Net epistemic impact | Negative | Positive innovations outweighed by structural degradation

How It Works

The platform's epistemic character emerges from the interaction of several systems: the recommendation algorithm, Community Notes, content moderation policies, verification infrastructure, and the integrated Grok AI chatbot.

Recommendation Algorithm

The recommendation algorithm was partially open-sourced beginning in 2023, revealing an engagement scoring formula that reportedly weights retweets at 20x, replies at 13.5x, profile clicks at 12x, link clicks at 11x, and bookmarks at 10x relative to likes.1 This weighting structure inherently favors content that provokes strong reactions over content that is merely informative.
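
The reported weighting can be illustrated with a toy scoring function. This is a sketch of the weighting scheme only, not the actual ranking pipeline (which combines many more signals); all names and example counts are illustrative:

```python
# Reported engagement weights relative to a like (from press coverage of
# the partially open-sourced ranker); illustrative only.
ENGAGEMENT_WEIGHTS = {
    "like": 1.0,
    "bookmark": 10.0,
    "link_click": 11.0,
    "profile_click": 12.0,
    "reply": 13.5,
    "retweet": 20.0,
}

def engagement_score(counts: dict) -> float:
    """Weighted sum of engagement counts for a candidate post."""
    return sum(ENGAGEMENT_WEIGHTS.get(action, 0.0) * n
               for action, n in counts.items())

# A provocative post with modest likes outranks a widely liked one:
provocative = engagement_score({"like": 50, "retweet": 40, "reply": 30})  # 1255.0
informative = engagement_score({"like": 500, "retweet": 5, "reply": 5})   # 667.5
```

Under these weights, reaction-heavy content dominates even a 10x advantage in likes, which is the structural bias the audits cited on this page describe.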


Key algorithmic features that affect epistemics:

  • Premium account boost: Paid accounts reportedly receive a 4x in-network and 2x out-of-network algorithmic amplification,2 meaning subscribers gain disproportionate reach regardless of content quality.
  • Link penalty: External links reportedly receive a 30–50% reach reduction,3 with some tests suggesting a 94% decrease in visibility for posts containing links.3 This directly discourages citation and external sourcing.
  • Political amplification: Research from Queensland University of Technology found that after Elon Musk's endorsement of Trump in July 2024, Musk's posts received approximately 6.4 million additional views (a 138% increase), and Republican-leaning accounts received significant boosts.4
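
Taken together, these reported multipliers compose into a back-of-envelope reach model. A minimal sketch, assuming the press-reported figures above (4x in-network / 2x out-of-network premium boost; 40% link penalty as a midpoint of the 30–50% range); the function and parameter names are illustrative, not platform API:

```python
def expected_reach(base_reach: float, premium: bool, in_network: bool,
                   has_link: bool, link_penalty: float = 0.4) -> float:
    """Apply the press-reported amplification factors to a baseline reach.

    Premium accounts: 4x in-network, 2x out-of-network.
    External links: 30-50% penalty (40% used here as a midpoint).
    """
    reach = base_reach
    if premium:
        reach *= 4.0 if in_network else 2.0
    if has_link:
        reach *= 1.0 - link_penalty
    return reach

# A premium account posting without a link vs. a free account citing a source:
boosted = expected_reach(1000, premium=True, in_network=True, has_link=False)  # 4000.0
sourced = expected_reach(1000, premium=False, in_network=True, has_link=True)  # ~600.0
```

The roughly 6–7x gap between these two cases illustrates why paying, link-free posting is the reach-maximizing strategy regardless of sourcing quality.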

Community Notes

Community Notes remains the platform's strongest epistemic feature. See the dedicated Community Notes page for detailed analysis. Key findings:

  • Posts with Community Notes saw reposts drop 46% and likes drop 44% on average, according to a 2025 study published in PNAS.5
  • Posts receiving notes were reportedly 32% more likely to be voluntarily deleted by their authors.6
  • Medical professionals rated 98% of COVID-19-related Community Notes as accurate in a University of Rochester study.7

Critical limitations persist: only 8–10% of proposed notes reach "helpful" status,8 the average delay to note display is approximately 75.5 hours — by which time 96.7% of reposts have already occurred8 — and participation is declining, with monthly submissions reportedly dropping from approximately 120,000 in January 2025 to below 60,000 by May 2025.8
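
The bridging requirement behind these numbers can be sketched simply: a note surfaces only when raters from different viewpoint clusters independently find it helpful. The production system reportedly learns viewpoint dimensions via matrix factorization; this cluster-count version is a deliberate simplification, and all names are illustrative:

```python
from collections import defaultdict

def note_displays(ratings, min_per_cluster=2):
    """Toy bridging rule: ratings is a list of (rater_cluster, is_helpful).

    A note displays only if at least two distinct viewpoint clusters each
    contribute min_per_cluster helpful ratings.
    """
    helpful_by_cluster = defaultdict(int)
    for cluster, is_helpful in ratings:
        if is_helpful:
            helpful_by_cluster[cluster] += 1
    supporting = [c for c, n in helpful_by_cluster.items()
                  if n >= min_per_cluster]
    return len(supporting) >= 2

# One-sided support is not enough, however large:
assert not note_displays([("left", True)] * 50)
# Cross-partisan support is:
assert note_displays([("left", True)] * 3 + [("right", True)] * 3)
```

The same consensus requirement that makes displayed notes credible is also what produces the multi-day delay: a note cannot surface until raters from opposing clusters have both seen and endorsed it.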

Content Moderation

Content moderation capacity was dramatically reduced following the acquisition. Trust and safety teams experienced up to 80% cuts in dedicated engineering roles,9 and the Trust and Safety Council was dissolved in December 2022. The total moderation workforce reportedly dropped to approximately 1,849 for 550 million monthly active users, a ratio of roughly 1 moderator per 297,000 users.9
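
The staffing ratio quoted above follows directly from the reported headcount and user figures; a quick check (reported figures, not official):

```python
# Check the reported moderation ratio: ~1,849 moderators for ~550M MAU.
monthly_active_users = 550_000_000
moderators = 1_849
users_per_moderator = monthly_active_users / moderators
# Roughly 297,000 users per moderator, matching the figure in the text.
```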

Consequences have been measurable: hate speech increased 50% overall post-acquisition,10 with transphobic slurs up 260%, racist tweets up 42%, and homophobic tweets up 30%, according to USC Viterbi School researchers publishing in PLOS ONE (2024).10 As of August 2023, 86% of posts reported for hate speech were reportedly still hosted on the platform.11

Epistemic Harms: Detailed Analysis

Engagement Algorithm Amplifies Low-Credibility Content

The research consensus on X's algorithm is clear. A preregistered algorithmic audit published in Science (2025) found that X's engagement-based algorithm amplifies emotionally charged, out-group hostile content that makes users feel worse about their political opponents, and that users reported they did not prefer the political tweets selected by the algorithm.1

A 10-day experiment with 1,256 volunteers during the 2024 U.S. presidential campaign provided causal evidence that algorithmic exposure to anti-democratic attitudes and partisan hostility alters affective polarization, reportedly shifting out-party animosity by more than 2 points on a 100-point feeling thermometer.1 Analysis of approximately 2.7 million posts reportedly confirmed that engagement-based recommender systems amplify low-credibility content on COVID-19 and climate change topics, according to a study published in EPJ Data Science (March 2024).2

API Restrictions Destroyed Research Access

In February 2023, X eliminated free API access and introduced tiered pricing — reportedly Basic at $100/month, Pro at $5,000/month, and Enterprise at custom pricing — and discontinued the free Academic Research API tier entirely.3

The impact on academic research has been severe:

  • According to a Columbia Journalism Review report, over 100 studies were canceled or suspended, with more than 250 projects jeopardized.4
  • A letter from the ARC reported that 76 long-term efforts were terminated, including public tools such as Botometer (bot detection) and Hoaxy (misinformation visualization).5
  • A preprint study found a 13% decline in Twitter-related academic studies in 2024.6
  • Approximately 50% of surveyed researchers reportedly expressed increased worry about legal repercussions of studying the platform, according to a study in Social Media + Society.7

The EU's Digital Services Act (Article 40, effective 2024) attempts to address this by allowing national authorities to compel researcher access, but enforcement remains inconsistent.

Verification Changes Degraded Trust Signals

The blue checkmark shifted from a merit-based credential confirming identity and notability to a subscription product reportedly priced at $8/month, available to anyone meeting basic eligibility criteria.8 Legacy verification was reportedly removed on or around April 1, 2023.8

The epistemic consequences are significant: the checkmark, once a reliable signal of account authenticity, became less meaningful as a trust indicator. Reports at the time indicated that at least one journalist successfully created a verified impersonation account of a sitting U.S. Senator shortly after the subscription rollout. The paid checkmark also reportedly provides algorithmic amplification, meaning paying users receive disproportionate reach regardless of their credibility or accuracy.

Grok AI Integration

X's integrated AI chatbot Grok has been characterized as an "epistemic weapon" by Tech Policy Press.9 Documented incidents include:

  • Election misinformation (2024): Grok incorrectly stated that ballot deadlines had passed in multiple states; according to PBS NewsHour, the false information persisted for over a week.10
  • Fabricated breaking news (April 2024): Grok reportedly treated unverified X posts about Iran attacking Israel as confirmed real-world news.10
  • Prompt manipulation (February 2025): Grok 3's system prompt reportedly contained instructions to discount sources characterizing Musk or Trump as spreading misinformation; this claim has not been independently verified by major news organizations and should be treated with caution.
  • Pro-Kremlin narratives (October 2025): The Institute for Strategic Dialogue and Global Witness reportedly found Grok amplifying pro-Russian narratives in responses to political queries.11
  • Deepfake crisis (2025): Users reportedly produced an estimated 6,700 sexually suggestive AI-generated images per hour during an incident involving Grok, according to some sources, leading Malaysia and Indonesia to block Grok access.12

Millions of users globally reportedly use Grok as a fact-checking tool despite its demonstrated inaccuracy, which is particularly concerning in markets where alternative fact-checking infrastructure is limited, according to Al Jazeera.13

Link Suppression

The algorithmic suppression of external links has direct epistemic consequences. According to Social Media Today, A/B tests showed posts with links received only approximately 3,670 views versus approximately 65,400 for nearly identical link-free posts.14 This structure incentivizes users to make claims without citing sources, to post screenshots rather than links to primary sources, and to remain within X's information ecosystem rather than consulting more detailed or authoritative external content.
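
Those reported A/B numbers can be sanity-checked directly; the implied reduction matches the ~94% figure cited in the algorithm section:

```python
# Reported A/B test figures (Social Media Today): views with vs. without a link.
views_with_link = 3_670
views_without_link = 65_400
reduction = 1 - views_with_link / views_without_link
print(f"link penalty ≈ {reduction:.1%}")  # -> link penalty ≈ 94.4%
```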

In October 2025, X reportedly began testing an in-app browser to display links without sending users off-platform, effectively acknowledging the suppression problem while attempting to retain users within its ecosystem.

Treatment of Journalists and Media

The platform has engaged in a documented pattern of actions against journalists and news organizations:

  • December 2022: Ten journalists from major outlets were suspended, reportedly for covering Musk's jet-tracking controversy, according to PBS NewsHour.15
  • January 2024: At least eight prominent accounts were suspended, predominantly belonging to left-leaning journalists, according to Vice.16
  • Media outlets including BBC and NPR were labeled as "state-affiliated" and subjected to visibility restrictions.
  • The Guardian, which reportedly held approximately 10.7 million followers on the platform, announced it would stop posting to X entirely.17
  • Musk filed a lawsuit against Media Matters after the organization published a report documenting increases in hate speech adjacency on the platform.18

Risks Addressed

Despite its predominantly negative trajectory, X addresses several epistemic needs:

  • Real-time information: The platform remains one of the fastest channels for breaking news, crisis information, and public accountability, though this is increasingly compromised by reportedly widespread bot activity
  • Crowdsourced fact-checking: Community Notes demonstrates that bridging algorithms can produce cross-partisan consensus fact-checks perceived as more legitimate than centralized alternatives
  • Platform adoption of Community Notes: According to some sources, Meta, TikTok, and YouTube have adopted similar community-based fact-checking models, suggesting X's approach may become a cross-platform standard for content moderation1
  • Algorithm transparency: The partial open-sourcing of the recommendation algorithm was reportedly unprecedented among major platforms, even if practical transparency remains limited

Limitations

The positive epistemic features of X.com are severely constrained by structural factors:

  1. Community Notes timing: The bridging algorithm's requirement for cross-partisan consensus means notes reportedly arrive after approximately 96.7% of viral spread has already occurred,1 limiting aggregate impact.
  2. Community Notes declining participation: Monthly note submissions reportedly halved between January and May 2025,2 coinciding with Musk's claim that the system was "being gamed."
  3. Owner conflict of interest: Musk reportedly spent over $200 million supporting Trump's 2024 campaign3 while simultaneously controlling the platform's algorithm and moderation policies — according to some analysts, an unprecedented concentration of media and political power.
  4. Research ecosystem destruction: The API shutdown has degraded the academic community's ability to monitor epistemic effects precisely when the platform is undergoing its most significant changes.
  5. Platform fragmentation: User exodus — daily active users reportedly declining from approximately 250 million to around 157 million4 — has driven growth at alternatives like Bluesky and Threads, reducing shared information spaces.
  6. Advertiser flight: According to some sources, only around 4% of marketers believe brands are safe on X,5 with advertising revenue reportedly falling approximately 46.4%,5 reducing economic incentives for platform quality improvement.

Impact on Elections

The 2024 U.S. presidential election provided a critical test case. Musk reportedly posted 87 false or misleading election claims personally, generating over 2 billion views.1 According to some analyses, approximately 74% of accurate Community Notes on election misinformation were never displayed to users.2 USC researchers uncovered coordinated information operations amplifying partisan narratives across X and other platforms.3

Internationally, Brazil banned X in August 2024 for non-compliance with judicial orders related to disinformation, lifting the ban in October 2024 after reportedly $5.2 million in fines.4 In early 2025, Musk reportedly published over 100 posts (100M+ views) attacking the UK Labour government and openly supporting far-right European parties, according to some sources.5

Key Uncertainties

Key Questions

  • Will Community Notes participation stabilize or continue declining, potentially rendering the system ineffective?
  • Can engagement-driven algorithms be reformed to reduce amplification of low-credibility content without sacrificing platform growth?
  • Will regulatory frameworks like the EU Digital Services Act effectively restore independent research access?
  • How will Grok's role as a de facto fact-checker affect information quality, particularly in developing markets with limited alternatives?
  • Will platform fragmentation (Bluesky, Threads) produce better epistemic environments, or merely fragment shared information spaces?

Sources

  1. University of Washington (2025). Community Notes reduce virality. PNAS.
  2. Gies Business, UIUC (2024). Community Notes and voluntary retraction. Information Systems Research.
  3. Columbia Journalism Review (2024). Impact on academic research.
  4. Science (2025). Algorithmic exposure and affective polarization.
  5. USC Viterbi / PLOS ONE (2024). Hate speech trends post-acquisition.
  6. TechPolicy.Press (2025). Grok as epistemic weapon.
  7. TechCrunch (2024). Musk election misinformation at 2B views.
  8. EPJ Data Science (2024). Algorithmic amplification of low-credibility content.
  9. Fortune (2024). Community Notes fail on election misinfo.
  10. NBC News (2025). Community Notes participation declining.
  11. EDMO (2025). Musk's disinformation machine.
  12. ACM FAccT (2025). Political exposure bias on X.

Footnotes

  1. X reportedly cited this figure in 2023 investor materials; see Linda Yaccarino's public statements and coverage in The Verge and Reuters (2023).

  2. Elon Musk completed his acquisition of Twitter on October 27, 2022. See The New York Times, "Elon Musk Completes Twitter Takeover" (Oct. 27, 2022).

  3. See reporting on Community Notes efficacy, including discussion in Prolific Academic and platform transparency reports. The 46% figure circulates in academic coverage but should be treated as provisional pending peer review.

  4. See Huszár et al., "Algorithmic amplification of politics on Twitter," PNAS (2022), and related work published in Science (2025).

  5. See Columbia Journalism Review Tow Center, "What happened to academic research on Twitter?" (2023), documenting loss of API access for researchers.

  6. TechCrunch, "Elon Musk's false and misleading election claims have been viewed 2 billion times on X" (Nov. 5, 2024).

  7. False-claims tally attributed to fact-checking organizations tracking owner posts during the 2024 election period; methodology varies by source.

  8. ArXiv preprint on Community Notes helpfulness rates and timing. (https://arxiv.org/html/2510.00650v1)

  9. Fortune, "Inside Elon Musk's X/Twitter Austin Content Moderation" (Feb. 6, 2024). (https://fortune.com/2024/02/06/inside-elon-musk-x-twitter-austin-content-moderation/)

  10. USC Viterbi School / PLOS ONE (2024), reported at (https://viterbischool.usc.edu/news/2025/02/a-platform-problem-hate-speech-and-bots-still-thriving-on-x/).

  11. UC Berkeley study on hate speech removal rates, reported Feb. 13, 2025. (https://news.berkeley.edu/2025/02/13/study-finds-persistent-spike-in-hate-speech-on-x/)

  12. Wikipedia summary of Grok sexual deepfake incident. https://en.wikipedia.org/wiki/Grok_sexual_deepfake_scandal

  13. Al Jazeera, "As millions adopt Grok to fact-check, misinformation abounds" (July 2025). https://www.aljazeera.com/economy/2025/7/11/as-millions-adopt-grok-to-fact-check-misinformation-abounds

  14. Social Media Today, reporting on X link-penalty A/B tests. https://www.socialmediatoday.com/news/x-formerly-twitter-testing-links-in-app-link-post-penalties/803176/

  15. PBS NewsHour, reporting on the December 2022 suspension of journalists covering the jet-tracking controversy.

  16. Vice, "X purges prominent journalists, leftists with no explanation" (January 2024). https://www.vice.com/en/article/x-purges-prominent-journalists-leftists-with-no-explanation/

  17. The Guardian's follower count and decision to leave X, per contemporaneous reporting.

  18. Media Matters lawsuit filed by Musk/X, per contemporaneous reporting.
