Google has quietly erased its longstanding commitment not to develop AI for weapons and surveillance, a major shift in policy that aligns the tech giant with President Donald Trump’s aggressive push for military AI expansion. The move comes just weeks into Trump’s second term and signals a dramatic reversal of Google’s previous stance against AI-driven warfare and mass surveillance.
The company’s updated AI Principles, released this week, remove language that had explicitly pledged to avoid developing weapons, surveillance tools, and AI applications likely to cause overall harm. Instead, the new policy emphasizes that AI should be used to “support national security” and that “democracies should lead in AI development”—a marked departure from the commitments Google made after employee protests forced it to drop military contracts in 2018.
Human rights advocates, AI ethicists, and tech industry insiders have sounded the alarm, warning that Google’s decision marks a dangerous step toward AI-powered militarization.
“The removal of the principles is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people,” said Margaret Mitchell, former co-lead of Google’s ethical AI team.
With Trump pursuing a $500 billion AI military initiative called “Stargate”, Google’s move raises serious concerns about Big Tech’s growing collaboration with the government to develop AI-driven warfare technologies.
In 2018, after intense employee protests against Google’s involvement in Project Maven, a Pentagon initiative to develop AI for drone warfare, the company issued a set of AI Principles pledging:
• No development of AI for weapons or mass surveillance
• No AI applications likely to cause “overall harm”
• Commitment to aligning AI use with international human rights law
As a result of employee pushback, Google ended its contract with the Pentagon and even dropped out of the bidding for a $10 billion cloud computing deal for the U.S. military, citing concerns over ethical alignment. Around the same time, the phrase “Don’t be evil” was largely stripped from Google’s official code of conduct, surviving only in its final line.
This week, Google removed all language barring AI from being used for weapons or surveillance from its AI Principles. The new guidelines now state that Google will:
• Work with governments and organizations “that share democratic values”
• Ensure AI is used to “support national security”
• Apply “human oversight” to AI systems, rather than banning military applications outright
The removal of ethical guardrails clears the way for Google to resume defense contracts that it previously abandoned due to public and employee backlash.
Parul Koul, president of the Alphabet Workers Union-CWA, condemned the decision, saying:
“It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public.”
The policy shift places Google in direct alignment with Trump’s AI expansion plans, signaling a new era of cooperation between Silicon Valley and the U.S. military.
Upon taking office, Trump immediately rescinded Biden’s executive order on AI safety, which had required companies to submit AI safety test results to the government before public release. The move eliminated key regulatory guardrails intended to prevent the reckless deployment of AI in sensitive areas such as warfare and law enforcement.
The administration has since launched Stargate, a $500 billion initiative aimed at expanding AI-powered military infrastructure. The program’s objectives include:
• Developing AI-driven surveillance and autonomous combat systems
• Expanding government access to AI research from private tech firms
• Strengthening military collaborations with Google, Amazon, and Microsoft
Google’s ethics reversal could now position the company as a key player in Stargate, marking a stark departure from its previous stance against military AI projects.
Sarah Leah Whitson of Democracy for the Arab World Now condemned Google’s decision, calling the company a “corporate war machine.”
Google’s move follows a broader trend of tech giants aligning themselves with Trump’s AI agenda:
• Amazon, Microsoft, and Palantir have expanded military AI contracts since Trump’s re-election.
• Diversity, Equity, and Inclusion (DEI) programs are being rolled back as tech firms comply with Trump’s executive order eliminating DEI initiatives in the federal government.
• Top tech executives, including Sundar Pichai and Mark Zuckerberg, attended Trump’s inauguration, signaling a growing alignment between Big Tech and the administration.
With Google’s AI ethics rollback, the company is now free to pursue contracts that would have been unthinkable just a few years ago.
Many AI experts and human rights advocates fear that Google’s new policy will result in:
• AI-driven weapons systems for autonomous combat
• Mass surveillance using facial recognition and predictive policing
• AI-assisted military intelligence operations with little transparency
A journalist covering AI ethics posed a chilling question:
“Is this as terrifying as it sounds?”
As Google and other U.S. firms pivot toward military AI, China and Russia are accelerating their own AI-powered defense programs, raising concerns about:
• The lack of international treaties governing AI in warfare
• The potential for unregulated autonomous weapons systems
• AI-driven military decision-making that could escalate global conflicts
With no clear legal or ethical boundaries in place, Google’s AI expansion into defense could set a dangerous precedent for AI-driven militarization worldwide.
Google’s decision to erase its AI ethics pledge marks a turning point in the relationship between Silicon Valley and military power. The move aligns with Trump’s AI expansion agenda, opening the door for Google to develop AI-powered weapons, surveillance systems, and combat technologies—work the company itself once explicitly ruled out.