Anthropic's Claude Opus 4.6 recently tied OpenAI's GPT-5.2 for the top score of around 40% on Epoch AI's FrontierMath benchmark Tiers 1-4, a set of exceptionally challenging, unpublished math problems testing frontier AI reasoning capabilities, quadrupling prior Claude performance on Tier 4 alone. This progress, reported in early 2026 evaluations, reflects scaling improvements in long-context thinking tokens. On April 7, Anthropic unveiled the even more advanced Claude Mythos Preview, its most capable large language model to date, which dominated benchmarks such as SWE-Bench (77-94%) and GPQA Diamond (94.6%); FrontierMath results remain unreleased, however, with safety concerns delaying public access. Traders eye a potential Mythos deployment or Opus upgrade before the June 30 deadline amid fierce competition from GPT-5.x and Gemini 3, but uncertainty around model timelines and evaluations persists.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and plays no role in the resolution of this market. · Updated

Anthropic Claude score on FrontierMath Benchmark by June 30?
50%+: 77% · $57,063 Vol.
This market will resolve according to Epoch AI's FrontierMath benchmarking leaderboard (https://epoch.ai/frontiermath) for Tiers 1-3. Studies not included in the leaderboard (e.g., https://x.com/EpochAIResearch/status/1945905796904005720) will not be considered.
The primary resolution source will be information from EpochAI; however, a consensus of credible reporting may also be used.
Market opened: Jan 30, 2026, 12:00 AM ET
Resolver
0x65070BE91...