
GPQA benchmark leaderboard: Testing LLMs on graduate-level questions


The GPQA benchmark is an important LLM evaluation tool.


It assesses how well LLMs handle complex, domain-specific questions in subjects like biology, physics, and chemistry.


Let’s dive in.



Best LLM for the GPQA benchmark

Below is a comparison of the main frontier models on the GPQA benchmark.

GPQA benchmark leaderboard of frontier LLMs

Last updated: December 2024

Company   | Model           | Score
Google    | Gemini 2.0      | 62.1%
xAI       | Grok-2          | 56.0%
OpenAI    | GPT-4o          | 53.6%
Meta      | Llama 3.1 405B  | 51.1%
Anthropic | Claude 3 Opus   | 50.4%

While OpenAI's GPT-4o previously led with 53.6% (as of June 2024), it has since been surpassed by:

  • Google's Gemini 2.0, which achieved the top score of 62.1%, and

  • xAI's Grok-2, which scored 56.0%


The GPQA benchmark leaderboard shows solid capabilities across all models, though with the top score at 62.1% there is still clear room for improvement.



What is the GPQA benchmark leaderboard?

GPQA stands for Graduate-Level Google-Proof Q&A.


It was introduced by Rein et al. (2023) to evaluate how well LLMs can handle challenging questions that require reasoning and domain expertise.


The test includes 448 questions across:

  • Biology

  • Physics

  • Chemistry


The GPQA test is extremely difficult:

  • Experts achieve an average accuracy of 65%

  • Non-experts with internet access average only 34%


This makes the GPQA benchmark leaderboard a valuable tool for assessing an LLM’s domain-specific reasoning capabilities.
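

To make the scoring concrete, here is a minimal sketch of a GPQA-style multiple-choice evaluation loop. It assumes four answer options per question and uses a placeholder ask_model function in place of a real LLM API; the Question class and the demo question are illustrative stand-ins, not items from the GPQA dataset itself.

```python
# Minimal sketch of a GPQA-style evaluation loop.
# Assumptions: questions are four-option multiple choice with one correct
# answer, and `ask_model` is a placeholder for whatever LLM API you use.
import random
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str
    choices: list[str]   # four answer options, mapped to letters A-D
    correct_index: int   # index of the correct option


def format_prompt(q: Question) -> str:
    """Render the question and options as a single multiple-choice prompt."""
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(q.choices))
    return f"{q.prompt}\n{options}\nAnswer with a single letter (A-D)."


def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just guesses randomly."""
    return random.choice("ABCD")


def evaluate(questions: list[Question]) -> float:
    """Return accuracy: the fraction of questions answered correctly."""
    correct = 0
    for q in questions:
        answer = ask_model(format_prompt(q)).strip().upper()[:1]
        if answer == "ABCD"[q.correct_index]:
            correct += 1
    return correct / len(questions)


if __name__ == "__main__":
    demo = [
        Question(
            "Which particle mediates the electromagnetic force?",
            ["Gluon", "Photon", "W boson", "Higgs boson"],
            correct_index=1,
        )
    ]
    print(f"Accuracy: {evaluate(demo):.1%}")
```

In a real evaluation, the questions would be loaded from the dataset released with the GPQA paper and ask_model would call the model under test; the percentage reported on the leaderboard is this accuracy computed over all 448 questions.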


Other LLM benchmarks

At BRACAI, we keep track of how the main frontier models perform across multiple benchmarks.


If you have any questions about these benchmarks, or how to get started with AI in your business, feel free to reach out.
