Interdisciplinary review of AI evaluation
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Interdisciplinary review examining how AI benchmarks are used to evaluate the performance, capabilities, and safety of AI models, with attention to their growing influence on AI development and regulatory frameworks, and to concerns about how high-impact capabilities, safety, and systemic risks are evaluated.
Paper Details
Metadata
Abstract
Quantitative Artificial Intelligence (AI) benchmarks have emerged as fundamental tools for evaluating the performance, capability, and safety of AI models and systems. Currently, they shape the direction of AI development and are playing an increasingly prominent role in regulatory frameworks. As their influence grows, however, so too do concerns about how, and with what effects, they evaluate highly sensitive topics such as capabilities (including high-impact capabilities), safety, and systemic risks. This paper presents an interdisciplinary meta-review of about 100 studies, published in the last 10 years, that discuss shortcomings in quantitative benchmarking practices. It brings together many fine-grained issues in the design and application of benchmarks (such as biases in dataset creation, inadequate documentation, data contamination, and failures to distinguish signal from noise) with broader sociotechnical issues (such as an over-focus on evaluating text-based AI models according to a one-time testing logic that fails to account for how AI models are increasingly multimodal and interact with humans and other technical systems). Our review also highlights a series of systemic flaws in current benchmarking practices, such as misaligned incentives, construct validity issues, unknown unknowns, and problems with the gaming of benchmark results. Furthermore, it underscores how benchmark practices are fundamentally shaped by cultural, commercial and competitive dynamics that often prioritise state-of-the-art performance at the expense of broader societal concerns. By providing an overview of risks associated with existing benchmarking procedures, we problematise the disproportionate trust placed in benchmarks and contribute to ongoing efforts to improve the accountability and relevance of quantitative AI benchmarks within the complexities of real-world scenarios.
Summary
This interdisciplinary meta-review of ~100 studies examines critical shortcomings in quantitative AI benchmarking practices over the past decade. The paper identifies fine-grained technical issues (dataset biases, data contamination, inadequate documentation) alongside broader sociotechnical problems (overemphasis on text-based single-test evaluation, failure to account for multimodal and interactive AI systems). The authors highlight systemic flaws including misaligned incentives, construct validity issues, and gaming of results, arguing that benchmarking practices are shaped by commercial and competitive dynamics that often prioritize performance metrics over societal concerns. The review challenges the disproportionate trust placed in benchmarks and advocates for improved accountability and real-world relevance in AI evaluation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Capability Threshold Model | Analysis | 72.0 |
Cached Content Preview
Can We Trust AI Benchmarks? An Interdisciplinary Review of Current Issues in AI Evaluation

Disclaimer: The views expressed in this paper are purely those of the authors and may not, under any circumstances, be regarded as an official position of the European Commission.
Maria Eriksson
European Commission, Joint Research Centre (JRC)
Seville, Spain
maria.eriksson@ec.europa.eu
Erasmo Purificato
European Commission, Joint Research Centre (JRC)
Ispra, Italy
erasmo.purificato@ec.europa.eu
Arman Noroozian
European Commission, Joint Research Centre (JRC)
Brussels, Belgium
arman.noroozian@ec.europa.eu
João Vinagre
European Commission, Joint Research Centre (JRC)
Seville, Spain
joao.vinagre@ec.europa.eu
Guillaume Chaslot
European Commission, Joint Research Centre (JRC)
Brussels, Belgium
guillaume.chaslot@ec.europa.eu
Emilia Gómez
European Commission, Joint Research Centre (JRC)
Seville, Spain
emilia.gomez-gutierrez@ec.europa.eu
David Fernandez-Llorca
European Commission, Joint Research Centre (JRC)
Seville, Spain
david.fernandez-llorca@ec.europa.eu
Abstract
Quantitative Artificial Intelligence (AI) benchmarks have emerged as fundamental tools for evaluating the performance, capability, and safety of AI models and systems. Currently, they shape the direction of AI development and are playing an increasingly prominent role in regulatory frameworks. As their influence grows, however, so too do concerns about how, and with what effects, they evaluate highly sensitive topics such as capabilities (including high-impact capabilities), safety, and systemic risks. This paper presents an interdisciplinary meta-review of about 100 studies, published in the last 10 years, that discuss shortcomings in quantitative benchmarking practices. It brings together many fine-grained issues in the design and application of benchmarks (such as biases in dataset creation, inadequate documentation, data contamination, and failures to distinguish signal from noise) with broader sociotechnical issues (such as an over-focus on evaluating text-based AI models according to a one-time testing logic that fails to account for how AI models are increasingly multimodal and interact with humans and other technical systems). Our review also highlights a series of systemic flaws in current benchmarking practices, such as misaligned incentives, construct validity issues, unknown unknowns, and problems with the gaming of benchmark results. Furthermore, it underscores how benchmark practices are fundamentally shaped by cultural, commercial and competitive dynamics that often prioritise
... (truncated, 98 KB total)
Stable ID: sid_QQPKkUqqR9