
Google Claims Its AI Tool Can Beat Math Olympiad Gold Medalists

Google claims the performance of its upgraded AI system has surpassed the level of the average gold medallist.

Google's second-generation AI mathematics system has taken a huge leap.

Google has developed an artificial intelligence (AI) math system that can outwit human gold medallists at the International Mathematical Olympiad (IMO). AlphaGeometry2, the AI problem solver, can crack 84 per cent of the geometry problems posed at the IMO, whereas gold medallists solve only 81.8 per cent of them on average. IMO problems are known for their difficulty, and solving them requires a deep understanding of mathematical concepts, something AI models had not been able to achieve until now.

Engineered by DeepMind, the original AlphaGeometry performed at the level of silver medallists when it was unveiled in January last year. A year later, however, Google claims the performance of its upgraded system has surpassed the level of the average gold medallist.

To enhance the system's abilities, the California-based company said it extended the original AlphaGeometry language to tackle harder problems, including those involving the movement of objects and those containing linear equations of angles, ratios, and distances.

"This, together with other additions, has markedly improved the coverage rate of the AlphaGeometry language on IMO 2000-2024 geometry problems from 66 per cent to 88 per cent."

Google also used its state-of-the-art Gemini AI model to improve AlphaGeometry2's language modelling and its search process.

"The team also introduced the ability for the AI to reason by moving geometric objects around a plane - allowing it to move a point along a line to change the height of a triangle, for instance - and for it to solve linear equations."

Also Read | AI Tools Diminishing Critical Thinking Skills Of Students, Study Finds

Despite this impressive 84 per cent solve rate on tricky geometry problems, Google said there is still room for improvement.

"Our domain language does not allow talking about variable number of points, non-linear equations and problems involving inequalities, which must be addressed in order to fully solve geometry," Google explained.

According to DeepMind researchers, the long-term plan for AlphaGeometry2 is to fully automate geometry problem-solving without any errors. They also plan to speed up inference and improve the system's reliability.
