The Pentagon used Project Maven-developed AI to identify air strike targets

The US military stepped up its use of artificial intelligence tools after the October 7 Hamas attacks on Israel, according to a new report from Bloomberg. Schuyler Moore, chief technology officer of US Central Command, told the news agency that machine learning algorithms have helped the Pentagon identify targets for more than 85 airstrikes in the Middle East this month.

U.S. bombers and fighter aircraft carried out those airstrikes against seven facilities in Iraq and Syria on February 2, destroying or damaging rockets, missiles, drone storage facilities, and militia operations centers. The Pentagon also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it then destroyed in several airstrikes during the same month.

The machine learning algorithms used to narrow down targets were developed as part of Project Maven, Google's now-defunct partnership with the Pentagon. Specifically, the project involved the US military using Google's artificial intelligence technology to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: thousands petitioned the company to end its partnership with the Pentagon, and some even quit over the company's involvement. A few months after the employee protest, Google decided not to renew the contract, which expired in 2019.

Moore told Bloomberg that U.S. forces in the Middle East continued to experiment with algorithms that identify potential targets from drone or satellite imagery even after Google ended its involvement. The military tested their use in digital exercises over the past year, she said, but began using targeting algorithms in live operations after the Oct. 7 Hamas attacks. She stressed, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to sequence the attacks and which weapons to use. "There is never an algorithm that just runs, comes to a conclusion and then moves on to the next step," she said. "Every step that involves AI has a human checking in at the end."
