Longterm Wiki
benchmark-result

Claude Opus 4.5 on HumanEval: 92%

Child of HumanEval

Metadata

Source Table: benchmark_results
Source ID: I4H3zQ7VJK
Source URL: automatio.ai/models/claude-opus-4-5
Parent: HumanEval
Children: (none)
Created: Apr 24, 2026, 7:48 PM
Updated: Apr 24, 2026, 7:48 PM
Synced: Apr 24, 2026, 7:48 PM

Record Data

id: I4H3zQ7VJK
benchmarkId: vxX2rorgxU
modelId: Claude Opus 4.5 (ai-model)
score: 92
unit: percent
date: 2025-11-24
sourceUrl: automatio.ai/models/claude-opus-4-5
notes: HumanEval - Python function implementation benchmark
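For context on what the score measures: a HumanEval task presents a Python function signature plus a docstring, and a model-generated completion counts as passing only if it satisfies the task's unit tests. A minimal sketch of that task-and-check shape (an illustrative reconstruction modeled on the benchmark's first task, not an exact item from the dataset):

```python
def has_close_elements(numbers: list[float], threshold: float) -> bool:
    """Check if any two numbers in the list are closer to each other
    than the given threshold."""
    # In the benchmark, a model would generate this body; here it is a
    # straightforward reference solution for illustration.
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False


def check(candidate) -> bool:
    """HumanEval-style grading: the completion scores only if every
    hidden unit test passes (all-or-nothing per task)."""
    try:
        assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) is True
        assert candidate([1.0, 2.0, 3.0, 4.0, 5.0], 0.05) is False
        return True
    except AssertionError:
        return False


print(check(has_close_elements))  # True
```

A 92% score on this record thus means roughly 92% of the benchmark's tasks had a completion that passed all of its unit tests.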

Source Check Verdicts

Confirmed: 99% confidence

Last checked: 4/24/2026

Inline sourcing: confirmed

Debug info

Thing ID: I4H3zQ7VJK

Source Table: benchmark_results

Source ID: I4H3zQ7VJK

Parent Thing ID: vxX2rorgxU