AI Masters Olympiad Geometry Challenges at a Level Comparable to Top Human Performance
AlphaGeometry, an artificial intelligence developed by Google DeepMind, demonstrates proficiency in solving geometry questions from the International Mathematical Olympiad (IMO) at a level comparable to top human participants.
The AI tool, whose code was open sourced on Wednesday, solved 25 Olympiad geometry problems within the standard time limit.
Gregor Dolinar, the IMO president, said, “It seems that AI will win the IMO gold medal much sooner than was thought even a few months ago.”
The IMO, designed for secondary school students, stands out as one of the most challenging mathematical competitions globally. Solving its questions demands a level of mathematical creativity that AI systems have traditionally found challenging.
For instance, despite GPT-4 demonstrating plausible reasoning skills in various areas, it scores zero on IMO geometry questions.
At a press conference, Google DeepMind's Thang Luong underscored the difficulty current AI systems have with tasks requiring deep reasoning and planning, highlighting mathematics as a key benchmark on the path to artificial general intelligence.