Google's Gemini models trail OpenAI's GPT-5.4, which as of April 2026 leads the FrontierMath benchmark at around 48% overall (38% on the ultra-challenging Tier 4 research problems), while Gemini 3.1 Pro scores approximately 37% and the earlier Gemini 3 Pro Preview hit 19% on Tier 4. Recent launches such as Gemini 3 Deep Think in February 2026 have boosted mathematical reasoning for scientific discovery, narrowing the gap through enhanced chain-of-thought processing and error checking. Competitive pressure from OpenAI's March record has fueled trader optimism about Google's response, especially ahead of Google I/O on May 19-20, where new model previews or capability demos could drive score improvements by June 30 amid rapid AI scaling.
Experimental summary generated by AI, citing Polymarket data. This is not trading advice and does not affect the resolution of this market. · Updated

Google Gemini score on FrontierMath Benchmark by June 30?
$127,679 Vol.
40%+: 92%
45%+: 41%
50%+: 33%
60%+: 17%
This market will resolve according to Epoch AI's FrontierMath benchmark leaderboard (https://epoch.ai/frontiermath) for Tiers 1-3. Studies not included in the leaderboard (e.g. https://x.com/EpochAIResearch/status/1945905796904005720) will not be considered.
The primary resolution source will be information from Epoch AI; however, a consensus of credible reporting may also be used.
Market opened: Feb 6, 2026, 6:03 PM ET
Resolver
0x65070BE91...
Be careful with external links.
Frequently asked questions