What organization's AI model will perform best in the 2025 ARC Prize artificial general intelligence competition?

Started Jul 18, 2025 05:00PM UTC
Closed Nov 04, 2025 08:01AM UTC

ARC Prize, Inc. is sponsoring its second competition for AI systems, measuring their accuracy with the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI-2) benchmark (ARCPrize.org - ARC Prize 2025). Per ARC Prize, Inc., "Humans have collectively scored 100% in ARC," while AI systems have scored far lower (Kaggle, ARCPrize.org - ARC-AGI). The question will be suspended on 3 November 2025 and the outcome determined once the winners are announced (ARCPrize.org - Leaderboard; see the "ARC-AGI-2" column under "LEADERBOARD BREAKDOWN," note the caveats beneath the chart, and filter to ARC-AGI-2 results only by clicking "ARC-AGI-2" under "DATA" immediately below the leaderboard). If a model is disqualified from the competition for any reason other than declining to fulfill the "Winner's Obligations" regarding licensing, its score will not count (Kaggle - ARC Prize 2025 Rules). If the organizers postpone the deadline, the closing date will be rescheduled accordingly. In the event of a tie, the model with the lower "Cost/Task" will be considered the winner. If there is still a tie, the answer option listed first (top to bottom) among the tied organizations will be considered the winner.
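For readers who want the tie-breaking procedure spelled out, the minimal sketch below shows one way to express it. This is an illustrative assumption, not code from ARC Prize or Kaggle: the field names (score, cost_per_task, list_order) and the example numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Entry:
    organization: str
    score: float          # ARC-AGI-2 accuracy; higher is better
    cost_per_task: float  # "Cost/Task"; lower breaks a score tie
    list_order: int       # position among the answer options, top to bottom

def pick_winner(entries: list[Entry]) -> Entry:
    # Highest score wins; ties go to the lower Cost/Task, and any remaining
    # tie goes to the organization listed first among the answer options.
    return min(entries, key=lambda e: (-e.score, e.cost_per_task, e.list_order))

# Hypothetical example: two models tied on score are separated by Cost/Task.
entries = [
    Entry("OpenAI", 18.0, 2.50, 5),
    Entry("Google", 18.0, 1.75, 3),
    Entry("Anthropic", 12.0, 0.90, 1),
]
print(pick_winner(entries).organization)  # -> "Google"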



This question has ended, but is awaiting resolution by an admin.

Possible Answer          Final Crowd Forecast
Anthropic                9%
Deepseek                 6%
Google                   17%
Meta                     3%
OpenAI                   29%
xAI                      9%
Another organization     28%