AI Trust Cascade Failure
Analysis of how declining institutional trust (media 31%, federal government 17% per 2024-2025 Gallup/Pew data) could create self-reinforcing collapse where no trusted entity can validate others, potentially accelerated by AI-enabled synthetic evidence and coordinated disinformation. Identifies cascade pathways through media, science, elections, and finance. Documents partisan trust polarization, deepfake-driven trust erosion, and the bootstrapping problem in recovery. Covers defensive strategies including C2PA provenance standards, content labeling, and open accountability protocols, but notes fundamental gaps in AI-resistant trust mechanisms.
Trust cascade failure represents one of the most underappreciated risks facing modern society: the collapse of institutional trust creating a self-reinforcing cycle where no trusted entity remains to validate or rebuild trust in others. Unlike isolated institutional failures, AI Trust Cascade Failures create systemic vulnerabilities where the very mechanisms societies use to establish truth, coordinate action, and resolve disputes become inoperative. This scenario poses particular risks as AI systems increasingly enable sophisticated attacks on institutional credibility while simultaneously making it harder for institutions to defend their legitimacy.
The emergence of AI capabilities for generating synthetic evidence, coordinating massive disinformation campaigns, and personalizing distrust narratives threatens to accelerate trust erosion beyond historical precedent. Current polling shows institutional trust already at concerning levels: media trust at 31%, federal government trust at 17%, and declining confidence across scientific, medical, and judicial institutions. What makes trust cascades particularly dangerous is their self-perpetuating nature—once trust falls below critical thresholds, the normal mechanisms for rebuilding trust through institutional vouching and cross-validation cease to function effectively.
The Cascade Mechanism
Trust cascades operate through a bootstrapping problem that becomes increasingly severe as more institutions lose credibility. In normal circumstances, institutional trust operates through a network of mutual validation—courts validate elections, science validates policy, media validates institutions, and expert bodies credential each other. This creates redundancy and resilience against attacks on any single institution. Public trust in AI and information systems likewise emerges from a broad network of interconnected social actors, including political figures, media, academia, and government bodies, rather than from any single authoritative source.1
The four-stage feedback loop driving trust cascade failure can be visualized as follows:
```mermaid
flowchart TD
    A["Trust Erosion\n(Institutions lose credibility)"] --> B["Validation Network Breakdown\n(Mutual credentialing fails)"]
    B --> C["Bootstrap Failure\n(No entity can vouch for others)"]
    C --> D["Accelerated Collapse\n(Recovery conditions worsen)"]
    D --> A
    style A fill:#c0392b,color:#fff
    style B fill:#e67e22,color:#fff
    style C fill:#8e44ad,color:#fff
    style D fill:#2c3e50,color:#fff
```
However, when trust erosion reaches critical mass, this validation network breaks down. Research on cascading failures in social networks demonstrates that cascading failure occurs when the unavailability of a few key relationships triggers successive failures across large networks—a ripple effect in which sparse, inhomogeneous structures show somewhat better robustness than homogeneous ones.1 The collapse follows predictable patterns: initial attacks on vulnerable institutions create credibility gaps that spread to interconnected entities. Scientists cannot validate public health recommendations if science itself is viewed as politically compromised. Courts cannot establish electoral legitimacy if judicial processes are seen as partisan. Media cannot fact-check misinformation if journalism is dismissed as propaganda. Each institutional failure reduces the capacity to rebuild trust in others, creating a vicious cycle.
Empirical data underscore the severity of this dynamic. Only 17% of Americans trusted the federal government to do what is right as of September 2025, near historic lows since the question was first asked nearly seven decades ago.2 Meanwhile, institutional trust has become increasingly contingent on partisan affiliation rather than rooted in the institutions themselves, further fragmenting the shared epistemic ground on which validation networks depend.3
The mathematical structure of trust networks suggests they exhibit threshold effects—gradual decline can accelerate rapidly once critical connection points fail. Empirical modeling of trust collapse in networked societies shows a first-order phase transition in which small decreases in trustworthiness are amplified through panic mechanisms, causing sudden catastrophic drops in collective trust rather than smooth linear decline.2 Research on global cascades on random networks similarly finds that network connectivity determines whether small shocks remain local or propagate system-wide, with highly connected nodes serving as pivotal vulnerabilities.4 AI systems amplify this vulnerability by enabling simultaneous attacks across multiple institutions, overwhelming their capacity for coordinated defense. Cascading failures in Agentic AI systems specifically propagate across autonomous agents and compound into system-wide harm before human operators can intervene.5
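To make the threshold dynamic concrete, below is a minimal sketch of a Watts-style threshold cascade on a random network, written in Python with networkx. All parameters (network size, mean degree, the 18% failure threshold, single-node shock) are illustrative assumptions, not values calibrated to real trust data.

```python
# Watts-style threshold cascade: a node "loses trust" once the fraction of
# its failed neighbors reaches its threshold. Illustrative sketch only.
import random
import networkx as nx

def cascade_fraction(n=10_000, mean_degree=6.0, threshold=0.18, seed=1):
    random.seed(seed)
    G = nx.fast_gnp_random_graph(n, mean_degree / n, seed=seed)
    failed = {random.randrange(n)}      # a single initial shock
    frontier = set(failed)
    while frontier:
        nxt = set()
        for u in frontier:
            for v in G.neighbors(u):
                if v in failed:
                    continue
                deg = G.degree(v)
                bad = sum(w in failed for w in G.neighbors(v))
                if deg > 0 and bad / deg >= threshold:
                    nxt.add(v)
        failed |= nxt
        frontier = nxt
    return len(failed) / n

for k in (2.0, 4.0, 6.0, 12.0):
    print(f"mean degree {k:4.1f}: cascade reaches {cascade_fraction(mean_degree=k):.1%} of nodes")
```

Sweeping the mean degree reproduces the qualitative finding cited above: small shocks die out in very sparse or very dense networks but can engulf the system at intermediate connectivity, the signature of a threshold effect rather than smooth linear decline.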
The bootstrap problem emerges most clearly during reconstruction attempts. Rebuilding institutional trust requires some credible entity to vouch for reformed institutions. But if no institutions retain sufficient credibility, none can serve this validating function. Institutional trust dissolves rather than migrating to AI systems because vendors disclaim responsibility and bear no accountability, leaving no anchor for reconstruction.6 Trust levels in social networks must actually increase as network size grows in order to maintain stability, meaning that a collapsing network faces structurally worsening conditions for recovery.7 Historical examples of trust reconstruction typically rely on external validation (foreign allies, international bodies) or generational change, processes that may be insufficient for AI-accelerated collapses.
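One back-of-envelope way to see why recovery conditions worsen with scale: if validation must travel along vouching chains whose typical length grows roughly like log2(N) in a random network, and chain reliability is the product of per-link trust, then the trust required of each link rises toward 1 as the network grows. Both assumptions are simplifications introduced here for illustration, not results from the cited paper.

```python
# Per-link trust p needed so a vouching chain of length L(N) ~ log2(N)
# succeeds with probability R: solve R = p**L for p. Toy model only.
import math

def required_link_trust(n_nodes, chain_reliability=0.9):
    chain_len = max(1.0, math.log2(n_nodes))   # assumed typical vouching-chain length
    return chain_reliability ** (1 / chain_len)

for n in (10, 1_000, 1_000_000):
    print(f"N = {n:>9,}: per-link trust must exceed {required_link_trust(n):.3f}")
```

Under these assumptions the required per-link trust climbs from about 0.97 at N = 10 to about 0.99 at N = 1,000,000, so a collapsing network must clear an ever-higher bar to stabilize. The table below summarizes the four cascade stages and their observable indicators.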
| Stage | Mechanism | Key Vulnerability | Observable Indicator |
|---|---|---|---|
| Trust Erosion | Synthetic content floods verification channels | Shared epistemic ground dissolves | Institutional approval ratings near historic lows |
| Validation Network Breakdown | Mutual credentialing among institutions fails | Interconnected nodes amplify single-point failures | Cross-institutional credibility gaps widen |
| Bootstrap Failure | No credible entity can vouch for others | Accountability vacuum as vendors disclaim responsibility | Reconstruction attempts find no validating anchor |
| Accelerated Collapse | Recovery conditions structurally worsen | Network size growth increases required trust threshold | Partisan-contingent trust replaces institutional trust |
AI Acceleration Vectors
Artificial intelligence introduces unprecedented capabilities for attacking institutional trust through multiple simultaneous vectors. Synthetic media generation enables the creation of compelling fake evidence against any institution, from fabricated documents to deepfake videos showing institutional corruption or incompetence. Deepfakes are fundamentally different from traditional disinformation—they are convincing, scalable, and increasingly accessible, with survey data across eight countries showing that prior exposure to deepfakes increases belief in misinformation.8 Unlike traditional disinformation requiring significant resources, AI democratizes sophisticated forgery, allowing state and non-state actors to generate attacks at industrial scale. Experimental research confirms that deepfakes depicting infrastructure failures heighten distrust in government more effectively than traditional disinformation formats.9
Coordinated campaign capabilities represent another critical acceleration factor. During the 2024 U.S. election cycle, Russian operatives created AI-generated deepfakes of political figures that were amplified across major social media platforms, while AI-generated robocalls impersonated a sitting president to suppress voter turnout.10 Large language models enable personalized distrust campaigns, crafting institution-specific attacks tailored to individual psychological vulnerabilities and existing biases. The World Economic Forum Global Risks Report 2025 ranked disinformation as the top severe global challenge for the coming two years, with accessible generative AI tools cited as a key amplifying factor.11 AI-generated disinformation in 2025 elections created shadow economies including paid engagement farms and amplification-for-hire schemes operating across more than 100 national elections.12
The erosion of information verification creates additional vulnerabilities. As generative AI produces text, images, audio, and video that are perceptually convincing at negligible marginal cost, the most consequential risk is progressive erosion of shared epistemic ground and institutional verification practices.1 The "liar's dividend" phenomenon enables bad actors to dismiss authentic evidence as AI-generated, further weakening democratic institutions over time.10 Authentication systems lag behind generation capabilities, creating windows where any statement or document can be credibly disputed.
AI also enables historical revisionism at scale, systematically undermining institutional track records. Machine learning systems can identify the most damaging historical moments for any institution and amplify them through sophisticated narrative construction, creating an environment where no institution can point to historical credibility.
Current Trust Landscape
Data from multiple sources reveals concerning trends across key institutional categories. Media trust has declined to 31% according to 2024 Gallup polling, representing near-historic lows—down from 68–72% in the 1970s.13 This decline is particularly concerning given media's role in validating other institutions—when journalism loses credibility, other institutions lose a crucial channel for defending their legitimacy and communicating with the public.
Government trust shows even more severe erosion, with Pew Research finding only 17% of Americans trust the federal government to do what is right always or most of the time as of September 2025.2 This represents a dramatic decline from 1960s levels near 80% and creates challenges for coordinated policy responses to complex problems.14 State and local government trust remains somewhat higher but shows similar downward trajectories.
The following table summarizes confidence levels across major U.S. institutions based on the most recent available data:
| Institution | Confidence / Trust Level | Source | Year |
|---|---|---|---|
| Small business | 70% | Gallup | 2025 |
| Military | 62% | Gallup | 2025 |
| Science | 61% | Gallup | 2025 |
| Police | 51% | Gallup | 2024 |
| Media (overall) | 31% | Gallup | 2024 |
| Federal government | 17% | Pew Research | 2025 |
| Television news | ≈10% | Gallup | 2025 |
| Congress | ≈10% | Gallup | 2025 |
| Average across key institutions | 28% | Gallup | 2025 |
Only three of eighteen key U.S. institutions tracked by Gallup earn majority-level confidence: small business (70%), military (62%), and science (61%).15 The average confidence across institutions was 28%, marking the fourth consecutive year below 30%.15
Scientific institutional trust, while historically more resilient, has experienced significant partisan polarization. Pew Research documents that 76% of Americans express at least a fair amount of confidence in scientists, but a significant partisan gap persists—88% of Democrats versus 66% of Republicans.6 Trust in scientists had declined during the pandemic era before stabilizing slightly in 2024.6
Partisan dynamics are reshaping institutional trust broadly. Gallup finds that trust in the executive branch among opposition-party supporters has collapsed from 49% in the 1970s to just 7% today, and Republican confidence in the executive branch swung 83 percentage points between 2024 and 2025 following the change in administration.3 This volatility suggests trust is increasingly contingent on party control rather than rooted in institutions themselves.3
The judicial system shows particular vulnerability to cascade effects. Gallup polling indicates that unfavorable views of the Supreme Court exceeded favorable ones for the first time since 1987, driven by declining trust among Democrats.16 This creates risks for electoral dispute resolution and constitutional crisis management, core functions requiring broad institutional legitimacy.
Financial institutions present mixed patterns. While banks experienced temporary trust loss during 2023 regional banking stress, cryptocurrency adoption suggests some populations seek alternative trust mechanisms outside traditional financial institutions. This fragmentation of financial trust creates new vulnerabilities to coordinated attacks.
Cascade Pathways and Dynamics
Several specific pathways could trigger comprehensive trust cascades, each with distinct characteristics and implications. The media-initiated cascade represents perhaps the most likely near-term scenario. As media trust continues declining—Gallup recorded American media trust at a historic low of 31%, with 36% of adults expressing no trust at all for three consecutive years13—other institutions lose their primary channel for communicating with the public and defending against attacks. Scientists, government officials, and judicial authorities depend on journalistic intermediation to reach broad audiences. Without trusted media, these institutions become vulnerable to unmediated attack while losing defensive capabilities.
The science-to-policy cascade poses particular risks for technically complex challenges. Trust in scientists has declined in post-pandemic years, with institutional mistrust increasingly intertwined with extreme political polarization.17 If scientific institutions lose credibility, evidence-based policy becomes impossible as elected officials cannot credibly claim scientific backing for decisions. This creates policy paralysis on issues requiring technical expertise, from climate change to pandemic response to economic regulation. Government authority increasingly depends on technocratic legitimacy; without scientific validation, democratic governance loses a crucial foundation. As of mid-2024, only 22% of U.S. adults trusted the federal government to do the right thing, continuing a decline from over 70% in the late 1950s.18
Electoral cascades create the most immediate threats to democratic governance. AI tools have already demonstrated this risk: AI-generated robocalls impersonating President Biden urged New Hampshire primary voters not to vote in 2024, and Russian operatives created AI-generated deepfakes of Vice President Harris that were shared on major platforms.10 The Brennan Center identifies a "liar's dividend" dynamic wherein bad actors can dismiss authentic evidence as AI-fabricated, making post-election disputes increasingly difficult to resolve.10 If election administration loses trust, losing political factions may refuse to accept results, creating constitutional crises. Unlike other institutional failures, electoral trust collapse directly threatens peaceful power transfer, the foundation of democratic stability.
Financial cascades operate through different mechanisms but create equally severe consequences. Deloitte projects that generative AI could drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion by 2027, with deepfake-driven fraud already exceeding $200 million in losses in the first quarter of 2025 alone812. If banking institutions, currency systems, or contract enforcement mechanisms lose trust, economic coordination becomes impossible. Market systems depend on shared confidence in financial infrastructure; without this foundation, complex economic activity requiring trust between strangers becomes prohibitively expensive or impossible.
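For scale, the Deloitte projection implies a compound annual growth rate of roughly 34% in fraud losses. A two-line check (the dollar figures come from the projection cited above; the growth-rate formula is standard):

```python
# Implied compound annual growth rate of the $12.3B (2023) -> $40B (2027) projection.
start, end, years = 12.3, 40.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~34.3% per year
```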
Societal Consequences and Recovery Challenges
Trust cascade failures create cascading effects throughout social organization. Collective action problems become unsolvable without trusted coordinating institutions. Public goods provision fails when no entity has sufficient credibility to organize collective contributions. International cooperation becomes impossible as domestic institutions cannot credibly commit to agreements.
Erosion of Knowledge and Epistemic Infrastructure
Knowledge production faces particular challenges in post-cascade environments. Without trusted certification mechanisms, expertise becomes meaningless as individuals cannot distinguish between qualified and unqualified sources.16 GenAI enables personalized synthetic realities where content, identity, and interaction are jointly manufactured, further undermining shared epistemic ground.16 Institutions face escalating verification costs when shared evidence becomes scarce and disagreement becomes epistemically irresolvable through synthetic documentation.16 Scientific progress slows as research findings cannot gain broad acceptance without trusted validation mechanisms, a trend measurable in declining public confidence in scientists and research institutions.6
Social Cohesion and Fragmentation
Social cohesion erodes as shared institutions that previously provided common ground for disagreement resolution disappear. Political conflict intensifies without trusted mediation mechanisms, potentially escalating to violence. Tribal fragmentation accelerates as trust concentrates within narrow in-groups while extending to no broader institutions.4 Research on AI-mediated communication shows that reduced epistemic trust creates a dilemma: maintaining normal trust levels risks gullibility, while adopting reduced trust risks epistemic injustice for all participants.4 Deepfakes depicting infrastructure failures have been shown experimentally to increase government distrust, demonstrating how synthetic media can accelerate institutional confidence collapse.9 Gallup data confirm that average confidence across fourteen regularly tracked U.S. institutions has stood below 30% for three consecutive years, with Congress and television news earning roughly 10% confidence each.15
The Bootstrapping Problem in Recovery
Recovery from trust cascades proves extraordinarily difficult due to fundamental bootstrapping problems. Rebuilding requires some credible entity to vouch for reformed institutions, but cascade scenarios specifically involve the absence of any such credible entities. Trust at scale requires shared rules tying outcomes to accountable processes, implemented as open protocols—yet those protocols themselves presuppose a baseline of institutional trust to achieve adoption.6 Historical examples of trust reconstruction typically involve external validation, generational change, or crisis-induced cooperation—processes that may be insufficient for AI-accelerated collapses.
Local trust-building offers some recovery potential but faces significant scaling challenges. Face-to-face communities can rebuild trust through direct interaction and repeated cooperation. Gallup World Poll data show that people satisfied with local conditions report confidence in an average of 3.7 national institutions, versus 1.3 for those satisfied with none—suggesting local investment as a potential recovery lever.19 However, complex modern societies require institutional trust that extends beyond personal networks, and technological solutions like cryptographic verification face adoption barriers and technical limitations.
Defensive Strategies and Uncertainties
Preventing trust cascades requires multi-layered approaches addressing both attack vectors and institutional vulnerabilities. Institutional resilience involves hardening organizations against sophisticated attacks through improved cybersecurity, authentication systems, and rapid response capabilities. However, these defenses are resource-intensive and may lag behind evolving AI attack capabilities—as of 2025, only 34% of organizations had deployed AI-specific security controls despite agentic systems outnumbering human operators 82 to 1 in enterprise environments.20
Cross-institutional coordination offers promise but requires unprecedented cooperation levels. Institutions must develop shared defensive strategies, mutual vouching protocols, and coordinated communication during crises. Research on cascading failure mitigation in social networks suggests that targeted interventions at high-centrality nodes can interrupt propagating trust failures before they reach cascade thresholds.5 This coordination proves difficult given existing institutional competition and different organizational cultures.
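A minimal sketch of the targeted-intervention idea: harden the highest-betweenness nodes so they cannot fail, then compare cascade sizes. The network model, failure threshold, and the choice of twenty hardened nodes are all illustrative assumptions, not parameters from the cited research.

```python
# Compare cascade size with and without hardening high-centrality nodes.
import random
import networkx as nx

def cascade_fraction(G, hardened, threshold=0.3, seed=3):
    rng = random.Random(seed)
    failed = {rng.choice([v for v in G if v not in hardened])}
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in failed or v in hardened:
                continue
            deg = G.degree(v)
            if deg and sum(u in failed for u in G.neighbors(v)) / deg >= threshold:
                failed.add(v)
                changed = True
    return len(failed) / G.number_of_nodes()

G = nx.barabasi_albert_graph(2_000, 2, seed=7)        # hub-heavy network
bc = nx.betweenness_centrality(G, k=200, seed=7)      # approximate centrality
top20 = set(sorted(bc, key=bc.get, reverse=True)[:20])

print(f"no intervention: {cascade_fraction(G, hardened=set()):.1%} of nodes fail")
print(f"top 20 hardened: {cascade_fraction(G, hardened=top20):.1%} of nodes fail")
```

Because hardened hubs never fail, they stop contributing to their neighbors' failed-neighbor fractions, interrupting the main propagation channels in hub-heavy networks.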
Early intervention systems could identify trust erosion before cascade thresholds, but detection proves challenging given the gradual nature of trust decline and the difficulty of measuring trust in real-time. Warning systems require sophisticated monitoring of public opinion, institutional performance, and attack campaign detection. Network models of public trust in generative AI indicate that trust emerges from material interactions across a vast and precarious network of political figures, media, academia, and government bodies—meaning monitoring must span all these actor classes, not just AI systems themselves.16
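One simple early-warning primitive consistent with this monitoring requirement is drift detection on a polled trust series: flag the first period a smoothed index crosses a preset cascade-risk floor. The poll readings, smoothing constant, and 25-point floor below are all invented for illustration; as the text notes, no validated threshold is known.

```python
# EWMA monitor for a polled trust index: flag the first period the smoothed
# value drops below a hypothetical cascade-risk floor. Illustrative numbers.
def first_breach(series, floor=25.0, alpha=0.3):
    ewma = series[0]
    for t, x in enumerate(series[1:], start=1):
        ewma = alpha * x + (1 - alpha) * ewma   # exponentially weighted average
        if ewma < floor:
            return t, ewma
    return None

trust_polls = [34, 33, 31, 32, 29, 27, 26, 24, 23, 21]   # hypothetical monthly readings
hit = first_breach(trust_polls)
print("no breach" if hit is None else f"floor breached at t={hit[0]} (EWMA={hit[1]:.1f})")
```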
Several critical uncertainties complicate defensive planning. The reversibility of trust collapse remains unclear—research on sudden trust collapse in networked societies demonstrates that trust exhibits a first-order phase transition with history dependence, meaning outcomes depend strongly on initial conditions and recovery paths differ from decline paths.16 This is consistent with hysteresis effects, where rebuilding trust after collapse is substantially slower and more costly than the original decline.16 The minimum trust thresholds required for societal functioning are poorly understood, making it difficult to assess whether current levels represent manageable challenges or approaching crisis points.
The potential for AI-resistant trust mechanisms represents another key uncertainty. Proposals span several layers; a sketch of the cryptographic core underlying the provenance row follows the table:
| Mechanism | Example | Key Limitation |
|---|---|---|
| Content provenance standards | C2PA cryptographic provenance metadata attached to media at point of creation | Requires universal adoption by platforms and devices; easily stripped in transit |
| Synthetic media labeling | India's IT Rules Amendment 2026 mandatory identifiers on AI-generated content | No independent mechanism exists to verify AI refusals to generate harmful content3 |
| Agentic threat modeling | MAESTRO framework for multi-agent AI threat modeling (Cloud Security Alliance, 2025) | Existing frameworks like STRIDE and PASTA lack AI-specific threat categories such as adversarial attacks and data poisoning15 |
| Open accountability protocols | Trust-as-infrastructure via open protocols analogous to email and web standards | Requires shared governance and vendor participation; vendors currently disclaim responsibility for AI outputs6 |
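To make the provenance row concrete, here is a minimal sketch of the cryptographic core that standards like C2PA build on: sign a content hash at creation, verify it later. This uses the generic Python cryptography library and is not the real C2PA manifest format, which additionally binds assertions, signer certificates, and edit history.

```python
# Sign a content hash at creation; verify it later. Sketch of the crypto
# core behind provenance schemes -- NOT the actual C2PA manifest format.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()            # distributed out of band

content = b"raw bytes of an image or video"
signature = creator_key.sign(hashlib.sha256(content).digest())

def verify(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(verify(content, signature))                 # True: provenance intact
print(verify(content + b"tampered", signature))   # False: provenance broken
```

The table's limitation column applies directly: the signature helps only if verifiers trust the key-distribution channel, and stripping it in transit leaves content merely unverifiable rather than provably fake.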
Cryptographic systems, decentralized verification, and algorithmic transparency could theoretically provide attack-resistant foundations for institutional trust, but face significant usability barriers and may not scale to complex institutional coordination requirements. The OWASP Agentic Security Initiative identifies cascading failures (ASI08) as a distinct risk category in which agent-to-agent communications allow semantic errors to pass validation and propagate as valid data, compounding before human operators can intervene.5
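The ASI08 failure mode (errors that pass structural validation while being semantically wrong) can be shown in a few lines. The two-agent pipeline, its message schema, and the units error are all hypothetical:

```python
# Two-agent pipeline where schema validation passes but a semantic error
# (wrong units) propagates downstream as "valid" data. Hypothetical example.
def agent_a_extract(report: str) -> dict:
    # Upstream agent misreads millions as billions: a semantic error.
    return {"metric": "fraud_losses", "value": 200.0, "unit": "USD_billions"}

def schema_valid(msg: dict) -> bool:
    # Downstream validation checks structure and types only, not meaning.
    return {"metric", "value", "unit"} <= msg.keys() and isinstance(msg["value"], float)

def agent_b_decide(msg: dict) -> str:
    return f"ALERT: {msg['value']} {msg['unit']} in losses -- freeze transfers"

msg = agent_a_extract("Q1 2025 deepfake fraud exceeded $200 million")
if schema_valid(msg):          # passes: the structure is fine, the meaning is not
    print(agent_b_decide(msg)) # a 1,000x error now drives a downstream action
```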
AI-driven disinformation has been identified as among the top global risks precisely because algorithmically amplified falsehoods systematically distort political information environments and erode public trust faster than defensive institutions can respond.4 Indirect prompt injection—which allows attackers to hijack agent actions through poisoned documents or web content—remains a fundamental unsolved flaw despite extensive red-teaming efforts by leading AI developers.21 The epistemic cost of synthetic content scales with output volume: as GenAI produces perceptually convincing material at negligible marginal cost, institutions face escalating verification costs when shared evidence becomes scarce and disagreement becomes irresolvable.16
Whether technological solutions can substitute for traditional institutional trust remains an open question with profound implications for societal organization in an AI-enabled world. Open protocol models—in which trust infrastructure is implemented as shared, accountable standards rather than proprietary vendor claims—represent a structurally distinct alternative, but depend on governance coordination that has not yet materialized at scale.6
Key Questions
- Has institutional trust already passed critical cascade thresholds, or are current levels still recoverable?
- Can cryptographic and technological trust mechanisms effectively substitute for traditional institutional trust at societal scale?
- What are the minimum trust levels required for complex democratic societies to function effectively?
- How quickly could AI-accelerated trust attacks trigger cascade failures compared to historical institutional decline rates?
- Are there AI-resistant institutional designs that could maintain trust even under sophisticated coordinated attacks?
Research and Resources
Academic Research
- Edelman Trust Barometer — Annual global trust survey
- Pew Research: Institutional Trust
- Gallup: Confidence in Institutions — Survey assessing public trust and confidence across major American institutions
- Oxford Martin School: Governance Futures
Key Papers
- Fukuyama, F. (1995): "Trust: The Social Virtues and the Creation of Prosperity"
- Putnam, R. (2000): "Bowling Alone: The Collapse and Revival of American Community"
- Zak, P. & Knack, S. (2001): "Trust and Growth" — Economic Journal
- Algan, Y. & Cahuc, P. (2014): "Trust, Growth, and Well-Being" — Annual Review of Economics
Policy Analysis
- Brookings: Trust in Government
- Atlantic Council: Digital Trust
- RAND: Institutional Trust
Footnotes
1. The Generative AI Paradox (https://arxiv.org/html/2601.00306v1)
2. Public Trust in Government: 1958-2025 | Pew Research Center (https://pewresearch.org/politics/2025/12/04/public-trust-in-government-1958-2025/)
3. U.S. Trust in Government Depends Upon Party Control (https://news.gallup.com/poll/697421/trust-government-depends-upon-party-control.aspx)
4. Data Behind Americans' Waning Trust in Institutions (https://pewtrusts.org/en/trend/archive/fall-2024/data-behind-americans-waning-trust-in-institutions)
5. Cascading Failures in Agentic AI: Complete OWASP ASI08 Security Guide 2026 (https://adversa.ai/blog/cascading-failures-in-agentic-ai-complete-owasp-asi08-security-guide-2026)
6. Eli Alshanetsky, When AI Dissolves Trust (https://philarchive.org/rec/ALSWAD)
7. Knowledge Collapse (https://hal.science/hal-04534111v1/file/Knowledge_collapse.pdf)
8. Deepfakes and the crisis of knowing (https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing)
9. False failures, real distrust: the impact of an infrastructure failure deepfake on government trust (https://pmc.ncbi.nlm.nih.gov/articles/PMC12141277)
10. Gauging the AI Threat to Free and Fair Elections (https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections)
11. Disinformation after Generative AI and Synthetic Data (https://www.crisp-surveillance.com/blog/253327/disinformation-after-generative-ai-and-synthetic-data)
12. From Deepfake Scams to Poisoned Chatbots: AI and Election Security in 2025 (https://cetas.turing.ac.uk/publications/deepfake-scams-poisoned-chatbots)
13. Americans' Trust in Media Remains at Trend Low (https://news.gallup.com/poll/651977/americans-trust-media-remains-trend-low.aspx)
14. Public trust in government near historic lows (https://pewresearch.org/chart/public-trust-in-government-near-historic-lows-5)
15. Citation rc-b9b3 (data unavailable)
16. Americans' Deepening Mistrust of Institutions | The Pew Charitable Trusts (https://pew.org/en/trend/archive/fall-2024/americans-deepening-mistrust-of-institutions)
17. Americans' Deepening Mistrust of Institutions (https://www.pew.org/en/trend/archive/fall-2024/americans-deepening-mistrust-of-institutions)
18. Americans' trust in federal government and attitudes toward it (https://www.pewresearch.org/politics/2024/06/24/americans-trust-in-federal-government-and-attitudes-toward-it)
19. Citation rc-d9ad (data unavailable)
20. Citation rc-618c (data unavailable)
21. Citation rc-bbe1 (data unavailable)