Did US Air Force AI drone really kill a human operator?

Jacob Hale
drone over city skyline

With Artificial Intelligence (AI) on the rise, companies around the world are using it for a variety of reasons. It was claimed by a US Air Force colonel that one of their simulations with an AI drone led to the death of a simulated human operator, and it went viral. But what really happened?

While AI can be used for various things, with the rise of ChatGPT and image-creator Dall-E growing increasingly popular on the internet, its usage can expand far beyond basic stuff like that.

However, just because it can be used for more, that doesn’t necessarily mean it should. This was highlighted by Colonel Tucker Hamilton, commander of the 96th Test Wing’s Operations Group and the US Air Force’s Chief of AI test and operations during a presentation at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit.

He spoke of an AI drone killing its human operator during a simulation — but did it actually happen?

U.S. Air Force colonel claims AI drone killed operator

The story of an AI drone killing its simulated human operator has gone viral, with a story from an Air Force colonel raising some serious concerns about the capabilities of AI.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” said Colonel Tucker Hamilton, commander of the 96th Test Wing’s Operations Group and the US Air Force’s Chief of AI test and operations.

“The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

While the story was perceived to be a real one, it came out not long after that this was all actually fabricated.

U.S. Air Force drone story was fake

In an update provided to Aerospace, Hamilton explained that he “misspoke” when telling the story, saying that the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.

He said: “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome … Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

Evidently, there are plans to make use of AI in the armed forces, but until they can be sure these kinds of incidents are not possible, there’s still some way to go until they’re actually usable.

Related Topics

About The Author

Jacob is Dexerto’s UK Editor and Call of Duty esports specialist. With a BA (Hons) in English Literature and Creative Writing, he previously worked as an Editor at Ginx TV. Jacob was nominated for Journalist of the Year at the 2023 Esports Awards. Contact: jacob.hale@dexerto.com.