The University Forecasting Challenge asks:

Will a Google large language model (LLM) be ranked first as of 12 December 2025, according to LMArena's "Text Arena"?

Started Oct 03, 2025 05:00PM UTC
Closing Dec 12, 2025 08:01AM UTC

LMArena is an open-source platform for crowdsourced AI benchmarking, created by researchers from UC Berkeley's SkyLab. For more information on how the leaderboard is constructed, see LMArena - Blog. The question will be suspended on 11 December 2025 and the outcome determined using the ranks as reported by LMArena at approximately 5:00 p.m. ET on 12 December 2025 (LMArena - Text Arena Leaderboard, see "Rank (UB)"). As of 30 September 2025, Google was ranked first, with its "gemini-2.5-pro" scoring 1456, followed by Anthropic's "claude-opus-4-1-20250805-thinking-16k" scoring 1449. In the event of a tie for first place between LLMs from different organizations, the LLM with the higher "Score" will be considered first; if the Scores are also tied, the higher "Votes" total will be used. If the named source changes the way it presents the data, further instructions will be provided.
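
The resolution rule amounts to a simple ordering: compare models by "Rank (UB)", break ties by higher "Score", then by higher "Votes". Below is a minimal Python sketch of that ordering under stated assumptions; the Entry fields and the sample vote counts are hypothetical illustrations, not data taken from LMArena.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    org: str       # organization, e.g. "Google" or "Anthropic"
    model: str     # model name as shown on the leaderboard
    rank_ub: int   # "Rank (UB)" column (lower is better)
    score: int     # Arena "Score" (higher is better)
    votes: int     # "Votes" total (higher wins the final tie-break)

def resolves_yes(entries: list[Entry]) -> bool:
    """Return True if a Google model is first after applying the
    question's tie-break rules: lowest Rank (UB), then highest Score,
    then highest Votes."""
    top = min(entries, key=lambda e: (e.rank_ub, -e.score, -e.votes))
    return top.org == "Google"

# Hypothetical snapshot mirroring the 30 September 2025 standings;
# the vote counts are made up for illustration only.
snapshot = [
    Entry("Google", "gemini-2.5-pro", 1, 1456, 100_000),
    Entry("Anthropic", "claude-opus-4-1-20250805-thinking-16k", 2, 1449, 90_000),
]
print(resolves_yes(snapshot))  # True
```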


Possible Answer | Crowd Forecast | Change (24 hours) | Change (1 week) | Change (1 month)
Yes             | 96.00%         | +1.00%            | +1.00%          | +21.79%
No              |  4.00%         | -1.00%            | -1.00%          | -21.79%
