Impact Newswire

Pentagon Wants Military AI. Tech Workers Want Guardrails.

With Google reportedly negotiating to bring Gemini into classified environments and Anthropic resisting demands tied to surveillance and weapons, Silicon Valley finds itself navigating the fault line between patriotism, profit and principle as conflict reshapes Washington’s expectations.

In the hours after American strikes hit Iran and the Pentagon blacklisted a leading artificial intelligence company, a different kind of mobilization began inside Silicon Valley.

Employees at Google, OpenAI and other technology firms started circulating letters, calling on their employers to draw firmer boundaries around how their artificial intelligence tools are used by the military. What had been a simmering debate over ethics and national security quickly intensified, spilling into company forums, internal message boards and public petitions.

One open letter, titled “We Will Not Be Divided,” swelled from a few hundred signatures on Friday to nearly 900 by Monday. Almost 100 of the names came from OpenAI, and close to 800 from Google. The letter criticized the Defense Department’s treatment of Anthropic, an artificial intelligence start-up that declined to allow its technology to be used for mass surveillance or fully autonomous weapons.

“They’re trying to divide each company with fear that the other will give in,” the letter reads. “That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.”

The surge in signatures came after the Trump administration on Friday designated Anthropic a “supply chain risk” and blocked certain dealings with the company. Hours later, U.S. combat operations began in Iran. Administration officials said the military action was necessary to neutralize “imminent threats” from Iran’s nuclear and missile programs.

For many technology workers, the sequence of events underscored how tightly their work is now bound to the machinery of national defense.

Tensions between the tech industry and the federal government have been building for months. Immigration enforcement actions and the killings of two American citizens in Minnesota earlier this year heightened anxiety within the industry. Employees have increasingly demanded transparency about how their companies’ cloud services and A.I. systems are used by government agencies, particularly the Defense Department, the Department of Homeland Security and Immigration and Customs Enforcement.

At Google, the scrutiny arrives at a delicate moment. The company is reportedly in discussions with the Pentagon about deploying its Gemini A.I. model on a classified system, reopening an internal struggle that first erupted nearly a decade ago over military applications of artificial intelligence.

On Friday, the advocacy coalition No Tech For Apartheid, long critical of cloud contracts between technology giants and the federal government, issued a joint statement titled “Amazon, Google, Microsoft Must Reject the Pentagon’s Demands.”

The coalition urged the leading cloud providers to refuse Defense Department terms that could enable mass surveillance or abusive uses of A.I. It also called for greater clarity around contracts involving the military and domestic security agencies.

The statement singled out Google, citing the possibility of a Pentagon agreement that could resemble one allowing the Defense Department to deploy Grok, the A.I. model developed by Elon Musk’s company xAI, “in classified environments — as far as we know, without any guardrails.”

“Our own companies are also on the brink of accepting similar contract terms,” the statement said. “Google is in negotiations with the Pentagon to deploy Gemini, its own frontier model, for classified uses.”

While Anthropic and OpenAI have publicly discussed aspects of their dealings with the Defense Department, Alphabet, Google’s parent company, has declined to comment.

In a separate show of support for Anthropic, hundreds of technology workers signed another open letter urging the Pentagon to withdraw its designation of the company as a “supply chain risk.” The signatories included dozens of OpenAI employees, as well as workers affiliated with Salesforce, Databricks, IBM and the coding start-up Cursor.

The letter called on Congress to “examine whether the use of these extraordinary authorities against an American technology company is appropriate,” and argued that private firms should not face retaliation for refusing government demands.

Inside Google, concerns have echoed through the ranks. More than 100 employees working on artificial intelligence reportedly signed a letter to management last week expressing unease about the company’s Defense Department work and asking executives to draw the same red lines as Anthropic.

Jeff Dean, Google’s chief scientist, appeared to acknowledge some of those concerns in a post on X. “Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression,” he wrote, adding that such systems are “prone to misuse for political or discriminatory purposes.”

The debate has deep roots at the company.

In 2018, Google faced an employee revolt over Project Maven, a Pentagon initiative that used artificial intelligence to analyze drone footage. After thousands of workers protested, Google allowed the contract to expire and later published its “AI Principles,” outlining acceptable uses of its technology.

Yet the issue has resurfaced repeatedly. In 2024, Google dismissed more than 50 employees following protests over Project Nimbus, a $1.2 billion cloud contract with the Israeli government. Executives said the deal did not violate the company’s AI Principles, though documents and reports indicated the agreement permitted the provision of tools such as image categorization and object tracking, including to state-owned weapons manufacturers.

A New York Times investigation later reported that, months before signing the Nimbus contract, some Google officials had worried it could damage the company’s reputation and that “Google Cloud services could be used for, or linked to, the facilitation of human rights violations.”

Early last year, Google revised its AI Principles and removed language that had explicitly prohibited “building weapons” or “surveillance technology.”

Now, as American bombs fall in Iran and the Pentagon exerts pressure on artificial intelligence providers, those revisions have taken on new weight.

For a generation of engineers who once viewed their work as detached from geopolitics, the boundary between code and conflict has grown harder to see. The letters circulating this week suggest that many inside the industry no longer want that boundary to remain undefined.
