KPMG Global AI Trust Study
Useful empirical reference for AI governance discussions; provides large-scale public opinion data that can inform policy decisions, though it is a corporate survey rather than peer-reviewed academic research.
Metadata
Importance: 52/100 · organizational report · dataset
Summary
A large-scale survey of 48,000 respondents across 47 countries examines public attitudes toward AI adoption, identifying rising usage alongside significant trust deficits. The study highlights demographic and regional variations in AI acceptance and concern, with implications for governance and responsible deployment. It provides empirical grounding for understanding societal readiness and resistance to AI integration.
Key Points
- Surveyed 48,000 people across 47 countries, making it one of the largest cross-national studies of public AI attitudes.
- Finds rising AI adoption globally but persistent trust gaps, particularly around transparency, accountability, and safety.
- Reveals significant regional and demographic differences in AI trust levels and willingness to rely on AI systems.
- Highlights public concern about AI risks including job displacement, bias, and lack of human oversight.
- Provides data relevant to policymakers and organizations designing governance frameworks for responsible AI deployment.
Review
The KPMG Global AI Trust Study offers a comprehensive view of the current state of AI perception and usage worldwide. By surveying more than 48,000 participants across 47 countries, the research reveals a complex landscape in which AI adoption is rapidly increasing, yet public trust remains tentative. Key findings indicate that while 66% of people use AI regularly and 83% believe it will generate significant benefits, only 46% are willing to trust AI systems.
The study underscores the need for strategic intervention, recommending four key organizational actions: transformational leadership, enhancing trust, boosting AI literacy, and strengthening governance. These recommendations respond to significant challenges revealed by the research, such as 66% of users relying on AI output without verifying its accuracy and 56% reporting mistakes in their work caused by AI. The research provides a data-driven perspective on the urgent requirements of responsible AI development, and it underscores the case for national and international regulation, with 70% of respondents supporting regulatory frameworks.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Trust Erosion Dynamics Model | Analysis | 59.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 10 KB
Trust, attitudes and use of artificial intelligence: A global study 2025
Empowering human-AI collaboration for a trusted future.
Download the report
AI has the immense potential to transform lives, boost industries and help tackle some of the most pressing global issues. Fully realizing this potential requires collaboration, a collective commitment to responsible innovation and appropriate regulation with education programs and skills development initiatives to help individuals better harness AI’s power.
Led by the University of Melbourne in collaboration with KPMG, Trust, attitudes and use of artificial intelligence: A global study 2025 surveyed more than 48,000 people across 47 countries to explore the impact AI is having on individuals and organizations. It is one of the most wide-ranging global studies of the public's trust, use, and attitudes towards AI to date.
The findings reveal that AI adoption is on the rise, but trust remains a critical challenge, reflecting a tension between the benefits and risks:
The intelligent age has arrived
66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.
Trust remains a critical challenge
Yet, trust remains a critical challenge: only 46% of people globally are willing to trust AI systems.
AI regulation
There is a public mandate for national and international AI regulation, with 70% believing regulation is needed.
Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%).
... (truncated, 10 KB total)
Resource ID: 2f254d7fc3f63c7f | Stable ID: sid_d27mNO9E8w