Big Tech’s Legal Battles Signal Shift in Military AI Oversight
Alexander Blanchard is Senior Researcher in the Governance of AI Programme at the Stockholm International Peace Research Institute (SIPRI) in Sweden. He warns that recent legal and military developments have major governance implications, pointing to two US court verdicts and their wider relevance to military AI oversight.
Court verdicts that shifted the debate
In the final week of March, juries in two US trials found Meta and Google legally responsible for harms to young people. One verdict, in Los Angeles, concerned design features of Instagram and YouTube. The other, in New Mexico, held Meta liable for exposing children to sexually explicit material and to predators.
Plaintiffs argued that the platforms deployed features designed to increase engagement, citing infinite scroll, recommender systems, vanishing content, and autoplay. Juries found that these design choices encouraged compulsive behaviour and upheld the negligence claims.
Rather than targeting user content, the plaintiffs' lawyers focused on product design. That tactic sidestepped the protections of Section 230, the provision of the 1996 Communications Decency Act that shields intermediaries from liability for user-generated content. Both companies have said they plan to appeal.
Why these rulings matter for military AI
The verdicts show that platform design decisions can carry legal risk. They also expose a broader problem for states adopting military AI: many governments lack the capital and expertise to build advanced systems in-house.
As a result, armed forces increasingly partner with technology firms, a trend that has produced a growing military-tech complex in which Silicon Valley firms are central to defence relationships.
Big tech companies and defence work
Microsoft, Alphabet (Google), Amazon, and Meta supply key infrastructure and services. Google was an early contributor to Project Maven, the US Department of Defense's AI-enabled targeting support effort. Meta released its Llama models for defence use in 2024 and partnered with Anduril to deliver VR and AR equipment to US forces.
Platform firms often control the underlying cloud and hardware layers. Investigative reporting showed that the Israeli military used Microsoft's cloud for mass surveillance of Palestinians; that programme later moved to Amazon's cloud.
Accountability challenges
Accountability is central to current efforts to govern military AI. It requires clear lines of answerability between states and vendors, along with documentation and scrutiny of how systems are developed and procured.
Both trials produced internal company evidence, including reports and memos suggesting deliberate choices to maximize engagement. In 2024, The New York Times reported that Google had built a culture of concealment over many years.
From human frailty to engineered dependency
Policy debates often attribute problems to automation bias or operators' over-trust in systems. That view can overlook deliberate design choices. In these cases, juries found that platform features engineered dependency.
Governance must therefore examine product design, not only operator behaviour. Military systems configured with engagement-driven design practices could shape commanders' decisions.
Design culture and operational risk
Silicon Valley’s focus on removing friction drives product optimization. That mindset prioritizes scale, growth, and user attention. Once practices prove successful, they often become entrenched.
Those same practices can migrate into defence settings. When design aims remain opaque, accountability risks being displaced rather than achieved.
These developments underline a broader point: Big Tech's legal battles signal a shift in military AI oversight and demand new scrutiny of platform companies. Governance frameworks must address socio-technical design as much as human operators.