Google has made a major revision to its Artificial Intelligence Principles, first published in 2018, removing commitments that categorically ruled out the use of AI in weapons and surveillance technologies. It is one of the biggest changes the company has ever made to its ethics policies. So why did Google change its stance? The details are in our news…
Google has changed its mind about using artificial intelligence for weapons and surveillance!
According to the US-based newspaper The Washington Post, the section of Google's website titled "AI applications we will not pursue" has been removed. In its place is a new section titled "Responsible Development and Use," written in broader and more flexible language.

In the new statement, Google says it will "establish appropriate human oversight and feedback mechanisms that are consistent with user goals, social responsibility, and principles of international law." However, the previous explicit commitments not to use AI for weapons or surveillance technologies have been dropped entirely.
Although Google did not directly announce the policy change, DeepMind CEO Demis Hassabis and James Manyika, Google's Senior Vice President of Research, Technology and Society, emphasized in a blog post that democratic countries should lead AI development. The statement reads as follows:
"We believe that democracies guided by core values of freedom, equality and respect for human rights should lead AI development. Companies, governments, and organizations must work together to advance AI in ways that protect people, spur global growth, and support national security."
Following the Project Maven scandal in 2018, Google announced that it would not use artificial intelligence for military purposes. The project, under which Google supplied AI software to help the US Department of Defense analyze footage from unmanned aerial vehicles (drones), caused great controversy: employees resigned in protest, and thousands signed a petition against it. Google eventually announced that it would withdraw from Project Maven and publish new ethical principles.
However, the company is known to have been turning back to military contracts since 2021, when it bid aggressively for the US Department of Defense's cloud computing contract. In early 2024, it was even revealed that Google employees had been working with the Israeli Ministry of Defense to expand its use of AI tools.
Whatever the official wording, by loosening its AI ethics principles Google is leaving the door open to collaboration with governments and military institutions. What do you think about this? You can share your opinions in the comments section below…