AI-Powered Drone Beats World-Champion Human Pilots For The First Time

Swift, an artificial-intelligence drone pilot, has defeated some of the best human pilots in the world at high-speed drone racing.

The Swift model was trained using simulation.

Artificial intelligence (AI) has made significant advancements in various fields and has indeed outperformed humans in specific tasks and domains. Over the past few years, AI has demonstrated its prowess in tasks by effectively leveraging its strengths in extensive data processing, pattern recognition, optimisation, and handling repetitive computations. As a result, it has achieved remarkable levels of task performance across a wide range of fields.

Continuing its trend of surpassing human achievements in various domains, an AI-powered drone recently defeated three world-champion human drone pilots in a high-speed racing competition.

According to The Guardian, the Swift AI, developed by researchers at the University of Zurich, won 15 out of 25 races against the world champions and clocked the fastest lap on a course where drones reach speeds of 50 mph (80 km/h) and endure accelerations of up to 5g, enough to make many people black out.

"Our result marks the first time that a robot powered by AI has beaten a human champion in a real physical sport designed for and by humans," said Elia Kaufmann, a researcher who helped to develop Swift.

First-person-view drone racing involves flying a drone around a course dotted with gates that must be passed through cleanly to avoid a crash. The pilots see the course via a video feed from a camera mounted on the drone.

Writing in the journal Nature, Kaufmann and his colleagues describe a series of head-to-head races between Swift and three champion drone racers, Thomas Bitmatta, Marvin Schapper, and Alex Vanover. Before the contest, the human pilots had a week to practice on the course, while Swift trained in a simulated environment that contained a virtual replica of the course.

Swift used a technique called deep reinforcement learning to find the optimal commands to hurtle around the circuit. Because the method relies on trial and error, the drone crashed hundreds of times in training, but since it was a simulation, the researchers could simply restart the process.
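The trial-and-error loop described above can be sketched in miniature. Note this is an illustrative stand-in, not the paper's method: Swift used deep reinforcement learning with a neural-network policy, while the toy below uses simple tabular Q-learning on a hypothetical one-dimensional "gate course". The key idea is the same: the agent acts, sometimes crashes, the simulated episode restarts at no cost, and the reward signal gradually shapes the policy.

```python
import random

N_GATES = 5           # gates 0..4; passing gate 4 completes the lap
ACTIONS = (0, 1)      # 0 = cautious manoeuvre (safe), 1 = aggressive (may crash)

def step(pos, action, rng):
    """One gate attempt. Aggressive moves score higher but crash 30% of the time."""
    if action == 1 and rng.random() < 0.3:
        return 0, -10.0, True              # crash: episode ends, restart is free
    reward = 2.0 if action == 1 else 1.0   # aggressive is faster when it works
    return pos + 1, reward, pos + 1 == N_GATES

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(p, a): 0.0 for p in range(N_GATES) for a in ACTIONS}
    crashes = 0
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(pos, x)])
            nxt, r, done = step(pos, a, rng)
            crashes += r < 0
            # Q-learning update toward reward plus discounted best future value
            target = r if done else r + gamma * max(q[(nxt, b)] for b in ACTIONS)
            q[(pos, a)] += alpha * (target - q[(pos, a)])
            pos = nxt
    return q, crashes

q, crashes = train()
```

With these toy rewards the agent crashes many times during training, exactly as the article describes, and learns that the cautious action has the higher long-run value at every gate.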


"To make sure that the consequences of actions in the simulator were as close as possible to the ones in the real world, we designed a method to optimise the simulator with real data," study first author Elia Kaufmann said.
