Written By: Lucie Kapustynski, Taimoor Khawaja, Ilyass Mofaddel, and Isabella Pliska
April 1, 2026. The LexAI Journal
Introduction
By: Ilyass Mofaddel
Every century or so, technology changes what war is. Gunpowder ended the era of the castle and the armoured knight, redistributing power from fortified nobility to whoever could manufacture enough of it. The airplane transformed conflict from a contest of territory into one of total exposure. Nuclear weapons collapsed the logic of victory itself, making large-scale war between great powers too costly to execute. Each of these technologies was, in its time, described as the advance that would finally make war more precise, more limited, more rational, or potentially end the concept of war itself forever. Each of them, in practice, expanded the scale of destruction beyond what anyone had anticipated.
Artificial intelligence is being introduced with the same promises. It will reduce civilian casualties through precision targeting, accelerate decision-making and improve the quality of military intelligence. It will, in the language of those deploying it, make war more humane. The war in Iran — where AI-assisted targeting systems identified over a thousand strike coordinates within the first twenty-four hours of operations — is the first large-scale test of that claim.
Only the future can tell us how these systems will end up affecting war. However, it seems evident that there is something structurally different about AI that separates it from previous inventions. A cannon does not recommend a target. A bomber does not prioritize its own payloads, nor does a nuclear warhead generate an intelligence briefing. Yet AI does all of this, and does it faster than the humans receiving its outputs can meaningfully evaluate them.
This piece examines four consequences of that claim. First, it examines how AI has redistributed accountability in military command, creating gaps that no legal or ethical framework currently fills. It then explores how the promise of algorithmic precision obscures the reality of algorithmic scale; how the systems making these decisions inherit the biases of those who built them; and how, beyond the battlefield, AI-generated imagery has opened a second front, one waged against the civilians whose democratic consent is supposed to constrain the use of military force in the first place.
The technology is new. The questions it raises are old. What has changed is the urgency with which they demand an answer.
Accountability
By: Taimoor Khawaja
Traditional warfare operates within a relatively clear chain of responsibility where political leaders authorize action, military commanders plan operations, and soldiers execute them. This hierarchy allows for accountability to be traced when errors occur. However, AI inserts an additional layer between human intention and military action: data is processed by algorithms, the system identifies or prioritizes targets, and humans may approve its recommendations rather than independently generate them.
Reliance on AI-generated recommendations can influence human judgement and redistribute control. In such cases, humans function less as independent authorities and more as validators of machine suggestions. This creates an accountability gap. If a strike hits the wrong target, who is ultimately responsible? The commander who approved the strike? The engineers who designed the model? The institution that deployed it? Or the algorithm itself, which cannot be held morally or legally accountable? AI conveniently provides governments with a “backdoor escape”: errors can be attributed to system outputs rather than to human judgement.
The key issue is not that AI literally makes the final decision on its own. It is that AI can reshape and steer human decisions so heavily that responsibility becomes diluted. A commander could simply say, “the AI made me do it!” AI becomes an opaque decision-making layer between intention and action, and no one can fully explain why it makes certain recommendations. For all we know, they could be driven by deep patterns buried in its training data.
Illusion of Precision
Governments often present AI warfare as more precise and calculated, and therefore more “ethical” and more efficient at minimizing collateral damage. Upon closer examination, however, this narrative is deeply misleading.
Historically, drone warfare has reinforced this belief, with officials describing such systems as more precise than conventional bombing. However, precision at the level of individual strikes does not mean an overall reduction in harm. AI can drastically increase the rate at which targets are identified, which means that more strikes can be carried out; even if each individual strike is less harmful, the overall result can still be a net increase in harm.
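To make the arithmetic behind this claim concrete, here is a minimal sketch with invented, purely illustrative numbers (none drawn from reported casualty data): when AI-assisted targeting multiplies the number of strikes, a lower expected harm per strike can still yield a larger total.

```python
# Purely illustrative figures, not reported data: the point is only the arithmetic
# that total expected harm = number of strikes x expected harm per strike.

conventional = {"strikes_per_week": 50, "expected_harm_per_strike": 2.0}
ai_assisted  = {"strikes_per_week": 400, "expected_harm_per_strike": 0.8}

def total_weekly_harm(campaign):
    return campaign["strikes_per_week"] * campaign["expected_harm_per_strike"]

print(total_weekly_harm(conventional))  # 100.0
print(total_weekly_harm(ai_assisted))   # 320.0 -- "more precise" per strike, more harm in total
```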
Moreover, claims of precision often rely on selective metrics. Military assessments may focus on targeting accuracy or operational efficiency while overlooking the broader consequences on civilian populations. This presents a narrative that AI is a humanitarian advancement while obscuring its potential to intensify violence through scale and frequency. Hence, precision becomes less a measurable outcome and more a rhetorical tool that legitimizes the expansion of AI-driven warfare without fully accounting for all its effects.
Algorithmic Bias in War
Beyond questions of responsibility and scale lies another critical issue: the data on which AI is trained. As mentioned before, machine learning models do not operate in a vacuum; they inherit the assumptions, biases, and limitations embedded in their training data. In civilian contexts, this has already been shown to produce discriminatory outcomes in areas such as policing and hiring. In warfare, the consequences are far more serious.
Algorithms are not neutral. They reflect the data they are trained on, the categories used to label people and objects, the political priorities of the institutions deploying them, and the worldview of those designing and funding them. Training datasets reflect years of historical patterns, surveillance, geopolitical priorities, and cultural assumptions that disproportionately focus on certain regions or populations. When these biases are encoded into targeting systems, they risk systematically misidentifying threats or disproportionately representing particular groups as potential targets. For example, infrastructure that resembles military assets in the training data may be incorrectly classified, increasing the likelihood of civilian sites being flagged for attack.
Errors in AI systems are not isolated. If left unfixed, they are replicated and amplified across multiple operations. A biased model does not make a single mistake; it produces a pattern of mistakes that can harm entire populations. The opacity of AI systems makes these biases difficult to detect and challenge, which complicates accountability and correction and further reinforces the risks outlined elsewhere in this article.
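As a rough illustration of how such a skew propagates, the sketch below simulates a toy scenario with invented numbers: two regions share the same underlying threat rate, but historical records over-label one of them, and a naive model trained on those records reproduces the skew in every subsequent operation. This is a deliberately simplified assumption about how such a system might learn, not a description of any real targeting model.

```python
# Toy simulation with invented numbers: a skew in historical labels becomes a
# systematic pattern of misclassification, not a one-off mistake.
import random
random.seed(0)

TRUE_THREAT_RATE = 0.05                     # identical in both regions
SURVEILLANCE_BIAS = {"A": 1.0, "B": 4.0}    # region B was historically over-labelled

def historical_label(region):
    # Historical records flag region B sites four times as often, regardless of ground truth.
    return random.random() < TRUE_THREAT_RATE * SURVEILLANCE_BIAS[region]

training = [(region, historical_label(region))
            for region in ("A", "B") for _ in range(10_000)]

# A naive "model": the learned probability that a site in each region is a threat.
learned_rate = {
    r: sum(flag for region, flag in training if region == r)
       / sum(1 for region, _ in training if region == r)
    for r in ("A", "B")
}
print(learned_rate)   # roughly {'A': 0.05, 'B': 0.20}

# Deployment: every future operation inherits the same skew, so region B sites are
# flagged about four times as often even though the true threat rate never differed.
flags_per_1000_sites = {r: round(learned_rate[r] * 1000) for r in ("A", "B")}
print(flags_per_1000_sites)
```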
All these issues taken together point to a fundamental tension at the heart of AI warfare. While these systems promise greater efficiency, speed, and precision, they simultaneously blur the lines of responsibility, obscuring the true impact of military actions.
Decision-Support Systems and The Humans in The Loop
By: Lucie Kapustynski
The Maven Smart System is a U.S. Department of Defense (DoD) automated system that functions as decision support, analyzing data and generating recommendations to help operational planners and commanders make decisions. The U.S. commander directing the war on Iran, Brad Cooper, has confirmed the use of AI tools and cites the fact that humans make the final decision as evidence of ethical accountability when using those systems (U.S. Central Command, 2026). However, recent studies exploring the effects of AI on decision-making complicate this claim, as reliance on generated intelligence accelerates and influences human judgment, redistributing control in decision-critical environments.
Cooper emphasizes the operational advantages of AI systems in warfare, stating that they accelerate the speed of commanders’ decisions (U.S. Central Command, 2026). This acceleration of decision-making, often referred to as decision compression, condenses processes that once unfolded over weeks, giving human operators time to consider, monitor, and complete them, into a matter of seconds, forcing rapid judgments (Booth & Milmo, 2026). Decision compression in warfare transforms how military operations are planned and executed, and ultimately widens the psychological distance between decision-makers and those experiencing the lived reality of their decisions.
Psychologists observe that humans working alongside automated systems exhibit automation bias, leading individuals to accept flawed or incomplete information produced by the system without critical evaluation (Courier, 2025). This tendency grows in fast-paced environments as individuals assume that the system generates information they lack and become reluctant to challenge what appears to be “sophisticated technology” (Courier, 2026). In effect, the use of AI in military command encourages a shift toward dependence on machine-generated recommendations rather than human-centred understanding and thinking.
Traditionally, militaries use the OODA loop (Observe-Orient-Decide-Act): a decision-making framework that focuses on navigating uncertainty. This concept emphasizes the orientation phase (analyzing and understanding information in context) as central to increasing awareness and reducing errors, which can lead to more informed judgment (Crowley, 2021); however, this structured approach becomes constrained by the use of automated systems.
Automated systems limit operators’ critical interpretation of information in the “orient” phase and, consequently, affect the assessment of risks and the weighing of alternatives during the “decide” stage (Oktenli, 2026). When automated systems are used to guide attacks, these constraints undermine the human decision-maker that those in power present as proof of ethical oversight, because decision compression limits critical evaluation of the system’s outputs.
Like large-scale facial recognition systems, these automated systems use AI for image processing to suggest and prioritize “targets” (Jones, 2026). Yet on February 28, 2026, a U.S. missile strike on a school in Iran took the lives of around 170 people, mostly children (Amnesty International, 2026). A U.S. investigation attributes this devastation to outdated intelligence, while AI’s role remains unclear (Lane, 2026). If AI provided decision support, labeling schools as “targets” exposes its unreliability; if it did not, the U.S. response implies that automated machines are trained on the same outdated intelligence. These outcomes call for stronger safety measures on automated decision-support systems that risk outsourcing life-and-death judgments.
High-Value Contract Failures and The Chain of Accountability
The main software platform behind Maven is Palantir’s, and Palantir relied on Anthropic’s tools and code to build Maven until recent contract negotiations (Clark, 2026). Citing the ethical concerns surrounding AI use for mass surveillance and lethal autonomous weapons, the CEO of Anthropic refused a demand to remove safety restrictions prohibiting such uses, stating that the company “cannot in good conscience” comply with the request, despite pressure from the DoD and a $200 million contract to adapt its models for military purposes (Anthropic, 2026). A few days later, OpenAI released its own agreement; while the statement reads as a refusal of mass surveillance, direct autonomous weapons, and high-stakes automated decisions, it also states that “The Department of War may use the AI System for all lawful purposes” (OpenAI, 2026), the very terms Anthropic rejected as threatening “serious, novel risks to our fundamental liberties” (Anthropic, 2026). These developments have sparked a boycott movement, “quitGPT”, encouraging users to look into alternatives such as Claude, Gemini, and Perplexity AI that are not helping to “build killer robots” and that seek to maintain a society that protects “life—automatically and at massive scale” (Bregman, 2026; Anthropic, 2026).
Conclusion
By: Isabella Pliska
The erosion of human judgment in military decision-making does not end at the command level. As AI systems reshape how commanders process intelligence and authorize strikes, a similar issue is unfolding in the information environment surrounding the conflict, one that targets not soldiers or commanders, but civilian populations scrolling through their phones. Take the most recent example in our current climate, the war in Iran. AI-generated imagery has become its own theatre of operations, and the realism of this content and the speed with which governments have learned to exploit it should alarm anyone who believes that an informed public is a prerequisite for democratic accountability over war. Within the first two weeks of the conflict, the New York Times identified over 110 unique AI-generated images and videos on X, TikTok, and Facebook, collectively viewed millions of times (NYT, 2026). More specifically, the breakdown is as follows: 37 images falsely depicting active combat, 8 falsely depicting destruction, 5 falsely depicting war preparation, and 43 overt propaganda pieces (NYT, 2026). Importantly, this is not mere background noise; it is a structured information campaign.
Many scholars have been researching these events, as they are becoming historically distinct not only for the volume of synthetic content but for its quality. As Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar, stated: “Even compared to when the Ukraine war broke out, things now are very different. We’re probably seeing far more A.I.-related content now than we ever have before” (NYT, 2026). Unfortunately, for the average viewer, detection itself has become nearly impossible. Hany Farid, professor of digital forensics at UC Berkeley, argues that tips for spotting AI fakery from even months ago are now obsolete, as the telltale errors that once betrayed generated images have largely disappeared from current tools (CNN, 2026). When this occurs, the public loses its most basic tool for forming political opinion: the ability to trust what it sees.
This is precisely why synthetic imagery is so dangerous as a weapon. The Carnegie Endowment for International Peace has observed that “wartime contexts are defined by competition for narrative control among conflict participants, and fragile peace processes can be easily disrupted by rapidly spreading rumours, with consequential decisions taken based on whatever information is available” (WITNESS, 2025). The key phrase here is whatever information is available, and in the current conflict, much of what is available is ultimately fabricated. Iran’s near-total internet blackout created an information void that AI content rushed to fill, with the majority of AI videos pushing pro-Iranian narratives: burning Gulf cities, besieged American aircraft carriers, and fabricated footage of US soldiers in apparent surrender. This imagery then allows Iran to manufacture “a sense that this war is more destructive and maybe more costly for America’s allies than it might actually be” (NYT, 2026). It is becoming increasingly evident that the underlying goal is not to report the war, but rather to make the war feel unwinnable to those watching from abroad, and to harden resolve among those watching from within Iran.
Another aspect of this issue is social media platforms’ responses, specifically how the measures taken are inadequate by design. Though AI tools can embed watermarks labelling content as synthetic, the Times found that “those are easy to remove or obscure,” and only a few videos identified in their review contained them (NYT, 2026). X announced that accounts posting unlabelled AI imagery of armed conflict would lose revenue-sharing eligibility for ninety days – a policy that sounds responsive but is structurally toothless, given that the most influential Iranian-linked accounts “appeared far more focused on spreading their messages than making money” (NYT, 2026). What’s more, financial disincentives do not deter actors whose currency is influence, not income. Researchers studying cognitive warfare have warned that “as generative AI tools become more sophisticated and accessible, they will contribute to the rapid and uncontrollable creation of convincing false narratives, making it increasingly difficult for the public to discern fact from falsehood” (Lahmann et al., 2025). Platforms are treating a structural problem as a conduct violation, and the gap between those two framings is where the real damage occurs.
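To illustrate one reason such labels are fragile, the sketch below assumes a provenance label stored as image metadata, which is one common labelling approach; the Times does not specify which mechanism the removed watermarks used, so this is an assumption for illustration. Using the Pillow library, it shows that a simple re-encode silently drops the label.

```python
# Hypothetical example: a provenance label stored as PNG metadata does not survive
# a simple re-encode. (Visible watermarks can likewise be cropped or blurred away.)
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image and attach a disclosure label as metadata.
img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("labelled.png", pnginfo=meta)
print(Image.open("labelled.png").text)          # {'ai_generated': 'true'}

# "Laundering" the file: re-encoding the pixels as a JPEG drops the metadata entirely.
Image.open("labelled.png").convert("RGB").save("laundered.jpg")
print(getattr(Image.open("laundered.jpg"), "text", {}))   # {} -- the label is gone
```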
Lastly, the consequences extend beyond opinion formation. The Modern War Institute at West Point has documented how a single AI-generated image of a false Pentagon explosion “caused a rapid and dramatic drop in the US stock market,” demonstrating that synthetic content can destabilize critical systems well beyond the battlefield (Coombs, 2024). We are not facing two separate problems but two expressions of the same shift: AI narrows the window for human judgment at the command level, and at the public level it pollutes the information through which citizens exercise democratic oversight of those commands. Until governments and platforms treat this as the structural crisis it is, rather than a moderation problem or a conduct violation, the synthetic battlefield will continue to shape the real one.
References
Amnesty International. (2026, March 18). USA/Iran: Those responsible for deadly and unlawful US strike on school that killed over 100 children must be held accountable. https://www.amnesty.org/en/latest/news/2026/03/usa-iran-those-responsible-for-deadly-and-unlawful-us-strike-on-school-that-killed-over-100-children-must-be-held-accountable/
Booth, R., & Milmo, D. (2026, March 3). Iran war heralds era of AI-powered bombing quicker than “speed of thought”. The Guardian. https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought
Bregman, R. (2026, March 4). Quit ChatGPT: Right now! Your subscription is bankrolling authoritarianism. The Guardian. https://www.theguardian.com/commentisfree/2026/mar/04/quit-chatgpt-subscription-boycott-silicon-valley
Oktenli, B. (2026, February 21). Decision compression and escalation risk in AI-enabled military command and control: An operational analysis of the ERAM framework.
CNN. (2026, March 11). Fake, AI-generated images and videos of the Iran war are spreading on social media. CNN Politics.
Crowley, B. (2021). The OODA Loop. The Decision Lab. Retrieved March 27, 2026, from https://thedecisionlab.com/reference-guide/computer-science/the-ooda-loop
Jones, N. (2026, March 5). How AI is shaping the war in Iran – and what’s next for future conflicts. Nature News. https://www.nature.com/articles/d41586-026-00710-w
Lane, P. (2026, March 26). Iran school strike result of weakened civilian protections: Opinion. The Tennessean. https://www.tennessean.com/story/opinion/contributors/2026/03/26/hegseth-iran-school-strike-civilian-harm-mitigation/89227859007/
Linvill, D., Magelinski, T., Doroshenko, L., & Warren, P. (2025). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign.
Madras Courier. (2026, March 24). The illusion of human control in AI-Accelerated Warfare. https://madrascourier.com/opinion/the-illusion-of-human-control-in-ai-accelerated-warfare/
OpenAI. (2026, February 28). Our agreement with the Department of War. https://openai.com/index/our-agreement-with-the-department-of-war/
Lahmann et al. (2025). The fundamental rights risks of countering cognitive warfare with artificial intelligence. European Journal of Law and Technology.
Schafer, B., & Ziegler, S. (2026, March 29). Cascade of A.I. fakes about war with Iran causes chaos online. The New York Times. https://www.nytimes.com/interactive/2026/03/14/business/media/iran-disinfo-artificial-intelligence.html
Anthropic. (2026, February 26). Statement from Dario Amodei on our discussions with the Department of War. https://www.anthropic.com/news/statement-department-of-war
The Decision Lab. (2025). Automation bias. Retrieved March 27, 2026, from https://thedecisionlab.com/biases/automation-bias
U.S. Central Command. (2026, March 11). Update from CENTCOM Commander on Operation Epic Fury [Video]. YouTube. https://www.youtube.com/watch?v=xlTyju2XC3E
WITNESS. (2025). Iran-Israel AI war propaganda is a warning to the world. Carnegie Endowment for International Peace. https://carnegieendowment.org