Google has faced widespread public backlash and employee resignations over the project.
Google will not seek to extend its contract next year with the Department of Defense for artificial intelligence used to analyze drone video, ending a controversial alliance that had raised alarms over the deepening technological ties between Silicon Valley and the military.
The tech giant will stop working on its piece of the military's AI project known as Project Maven when its 18-month contract expires in March, a source familiar with Google's thinking told The Washington Post.
Diane Greene, the chief executive of Google's influential cloud-computing business, told employees of the decision at an internal meeting Friday first reported by Gizmodo.
Google, which declined to comment, has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. The source said Google would soon release new company principles related to the ethical uses of AI.
The move is a setback for the Pentagon's push to supercharge the military's capabilities with powerful AI that could help process battlefield data or pinpoint military targets. Audricia Harris, a Pentagon spokeswoman, said it "would not be appropriate for us to comment on the relationship between a prime and sub-prime contractor holder."
"We value all of our relationships with academic institutions and commercial companies involved with Project Maven," Harris said. "Partnering with the best universities and commercial companies in the world will help preserve the United States' critical lead in artificial intelligence."
Project Maven was launched last April as a pathfinder project for ways the military could use AI to update its national-security and defense capabilities and maintain an edge "over increasingly capable adversaries and competitors," a DoD memo stated. In a pilot effort, AI was deployed to analyze hours of footage from Predator drones and other unmanned aircraft, pinpointing buildings and vehicles and processing video that is now tagged by human analysts.
But the request for private-sector help from companies such as Google, which develops some of the world's most sophisticated image-recognition software and employs some of the top minds in AI, quickly sparked a firestorm over the potential that the technology could be used to help kill or could serve as a stepping stone toward AI-coordinated lethal warfare.
Thousands of Google employees wrote chief executive Sundar Pichai an open letter urging the company to cancel the contract, and many others signed a petition saying the company's assistance in developing combat-zone technology directly countered the company's famous "Don't be evil" motto.
Bob Work, the former deputy secretary of defense who launched Project Maven last year, called Google's decision not to renew the contract "troubling" and worried it could discourage others in Silicon Valley from working with the military on autonomous technologies that could assist in foreign conflicts and national defense.
The decision, he said, "seems motivated by an assumption that any use of artificial intelligence in support for the Pentagon is a bad thing. But what about using artificial intelligence to power robots that defuse bombs or IEDs? Or using AI to prevent cyber attacks on our electrical grid?" said Work, a senior fellow at the Center for a New American Security, a Washington think tank. "All of these would save the lives of our people, or protect our networks or society. That would seem like things employees of Google might be proud to do."
"Not being able to tap into the immense talent at Google to help DoD employ AI in ethical and moral ways is very sad for our society and country," he added. "It will make it more difficult to compete with countries that have no moral or ethical governors on AI in the national security space."
Google had responded to earlier criticism by saying that the company's involvement in Project Maven was limited to the "non-offensive" use of open-source, publicly available software "intended to save lives and save people from having to do highly tedious work."
But Greene, who leads Google Cloud, told employees that the company had endured considerable backlash and had pursued the work at a time when it was more interested in military contracts, according to Gizmodo.
Several Google AI employees had told The Post they believed they wielded a powerful influence over the company's decision-making: The advanced technology's top researchers and developers are in heavy demand, and many had organized resistance campaigns or threatened to leave.
The sudden announcement Friday was welcomed by several high-profile employees. Meredith Whittaker, an AI researcher and the founder of Google's Open Research Group, tweeted Friday, "I am incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war."
Google's decision will likely do little to slow the military's march toward AI. The contract accounted for only a small part of Project Maven's technical ambitions, and other companies work on similar image-recognition software that could potentially be deployed as an alternative.
The Pentagon has said AI is a top priority, and it is moving aggressively to develop a "Joint Artificial Intelligence Office" that Defense Secretary Jim Mattis said in April would involve AI production and prototyping.
The "momentum for AI and autonomy is picking up inside the DoD," Work said. "I don't think there's any way you can ban the use of AI in military applications, and the Chinese are intent on using AI to try to achieve technological superiority over the United States."
The military, Work said, wants to further develop AI "computer vision" techniques that can process the video from powerful sensor programs such as Gorgon Stare, a drone technology that can capture full-motion video of an entire city. AI could also be used to pinpoint submarines using sonar data.
Funding for Project Maven, which is also known as the Algorithmic Warfare Cross-Function Team, grew to $131 million in the federal budget signed in March by President Donald Trump.