![Google Drops Its Promise To Not Use AI For Weapons](https://c.ndtvimg.com/2025-02/ca1n75h8_google_625x300_04_February_25.jpeg?im=FeatureCrop,algorithm=dnn,width=773,height=435)
Last week, Google quietly abandoned a long-standing commitment to not use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, which were first published in 2018, the tech giant removed statements promising not to pursue:
- technologies that cause or are likely to cause overall harm
- weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- technologies that gather or use information for surveillance violating internationally accepted norms
- technologies whose purpose contravenes widely accepted principles of international law and human rights.
The update came after United States President Donald Trump revoked former President Joe Biden's executive order aimed at promoting safe, secure and trustworthy development and use of AI.
The Google decision follows a recent trend of big tech entering the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military use of AI?
The growing trend of militarised AI
In September, senior officials from the Biden government met with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centres, while weighing economic, national security and environmental goals.
The following month, the Biden government published a memo that in part dealt with “harnessing AI to fulfil national security objectives”.
Big tech companies quickly heeded the message.
In November 2024, tech giant Meta announced it would make its “Llama” AI models available to government agencies and private companies involved in defence and national security.
This was despite Meta's own policy which prohibits the use of Llama for “[m]ilitary, warfare, nuclear industries or applications”.
Around the same time, AI company Anthropic also announced it was teaming up with data analytics firm Palantir and Amazon Web Services to provide US intelligence and defence agencies access to its AI models.
The following month, OpenAI announced it had partnered with defence startup Anduril Industries to develop AI for the US Department of Defense.
The companies claim they will combine OpenAI's GPT-4o and o1 models with Anduril's systems and software to improve the US military's defences against drone attacks.
Defending national security
The three companies defended the changes to their policies on the basis of US national security interests.
Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.
In October 2022, the US issued export controls restricting China's access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals, which are crucial for the AI chip industry.
The tensions from this trade war escalated in recent weeks following the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips prior to the US export control measures and allegedly used these to develop its AI models.
It has not been made clear how the militarisation of commercial AI would protect US national interests. But there are clear indications that tensions with the US's biggest geopolitical rival, China, are influencing the decisions being made.
A large toll on human life
What is already clear is that the use of AI in military contexts has a demonstrated toll on human life.
For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater computing and storage services, which are being provided by Microsoft and Google. The AI tools are used to identify potential targets but are often inaccurate.
Israeli soldiers have said these inaccuracies have accelerated the death toll in the war, which now stands at more than 61,000, according to authorities in Gaza.
Google's removal of the “harm” clause from its AI principles contravenes international human rights law, which identifies “security of person” as a key measure.
It is concerning to consider why a commercial tech company would need to remove a clause around harm.
Avoiding the risks of AI-enabled warfare
In its updated principles, Google does say its products will still align with “widely accepted principles of international law and human rights”.
Despite this, Human Rights Watch has criticised the removal of the more explicit statements regarding weapons development in the original principles.
The organisation also points out that Google has not explained exactly how its products will align with human rights.
This is something Joe Biden's revoked executive order about AI was also concerned with.
Biden's initiative wasn't perfect, but it was a step towards establishing guardrails for responsible development and use of AI technologies.
Such guardrails are needed now more than ever as big tech becomes more enmeshed with military organisations, and the risks that come with AI-enabled warfare and breaches of human rights increase.
(Author: Zena Assaad, Senior Lecturer, School of Engineering, Australian National University
This article is republished from The Conversation under a Creative Commons license. Read the original article.)