Written By: Sarah Naveed
November 9, 2025. The LexAI Journal
Military institutions have historically operated within a framework of international politics centered on sovereignty, state actors, and treaties. Modernization has introduced a new element into military thinking and operations: technology. “Colossus was built in 1944 to crack Nazi codes. By the 1950s computers were organising America’s air defences. In the decades that followed, machine intelligence played a small part in warfare” (The Economist, 2024). That part has only grown with the rise of military artificial intelligence led by big tech companies like Palantir, marking a fundamental shift in how power is built and exercised. The enlistment of “tech bros” into the US military has sparked considerable public controversy. Meta’s chief technology officer called it “the great honor of my life” (Booth, 2025) to join a new US army corps that defence chiefs set up to better integrate military and tech industry expertise, alongside senior figures from firms including Palantir and OpenAI. These developments call into question the international norms governing warfare and its ethics, and the new kind of governance needed to moderate them.
Palantir and AI Reshaping the Standard of Warfare
Questions arise as to how states can deal with the inevitable digitalization of human affairs and practices while maintaining ethics on the international stage. This essay considers the United States and Silicon Valley’s Palantir, challenging the company’s claims of AI responsibility; Lavender, a vital example of an AI tool deployed in a real-time war setting; and the transformation of global politics through the erosion of traditional state-based control over warfare and the redefinition of the boundaries of military power.
Named one of Silicon Valley’s most valuable companies, Palantir focuses on the use and integration of AI in governance and the military. Its core mission is avowedly Americanist: to maintain and secure high-level intelligence and defence systems for the US and its allies, grounded in the “belief that the United States, its allies, and partners should harness the most advanced technical capabilities for their defense and prosperity” (Palantir, 2025). Priding itself on battlefield data accessibility and connectivity that let soldiers operate efficiently and minimize expended energy through “agile decision-making… to out-think and out-pace the adversary” (Palantir, 2025), the company serves as the connective tissue between an army’s personnel, data, and resources. The pro-combative nature of these operations raises two concerns: who defines the adversary, and what data informs the system’s outputs? Overreliance on AI as a substitute for fast thinking and reflexive judgment may also degrade the quality of conventional intelligence officers and strategists. Even now, data bias rooted in political affiliation or prejudice against racial and religious groups, together with states’ reliance on generative AI outputs such as images and videos, could mislead or provoke opposing state leaders, posing the challenge of cognitive warfare. Russia, for example, “used AI to mislead and deceive Americans during the 2020 US presidential election, and [is] reportedly attempting to do so again during the 2024 election” (Lushenko and Carter, 2024).
The use of AI in this context also steers public discourse toward purposes that serve its makers. With its Americanist perspective, Palantir actively shapes public and governmental perceptions of military AI, framing the technology in ways that favor its interests and resist oversight. The power of tech platforms to influence the information users engage with parallels the way big tech companies in the military sector frame their operations as serving the state when, in reality, the benefits are privatized.
With this in mind, the mass production of misinformation for deception and confusion becomes easily accessible to states, which will ultimately complicate international order and inter-state relations. One may ask whether the quality of military intelligence stands to suffer at the hands of an AI-reliant military.
The militarization of artificial intelligence, driven by private tech companies like Palantir and legitimized by the cultural and political influence of “tech bro” fantasy, is transforming global politics by eroding traditional state-based control over warfare and redefining the boundaries of military power. These entrepreneurs “stand to profit immensely from militaries’ uptake of AI-enabled capabilities” (Lushenko and Carter, 2024), creating a divide between the upholding of state ideals and private self-interest. The puzzle then becomes how to address the necessity of AI governance on an international scale. If militarized AI becomes the global standard, a disordered and automated world would become the new norm. Responsibility should therefore not rest solely with state actors or developers; but as long as this budding sector goes unmonitored, any AI-powered conflict will push international law to the margins (The Economist, 2024), creating international discord.
Lavender as a Contemporary Example of AI Militarization
Lavender, an Israeli artificial intelligence-based program, is relied upon like a commanding officer: it mass-generates targets whose outputs are accepted seamlessly, without the necessary contemplation or any “requirement to thoroughly check why the machine made those choices” (Abraham, 2024). What AI militarization reveals is the gradual erosion of military standards, as soldiers grow accustomed to letting AI generate their targets and execute high-stakes decisions “as if it were a human decision” (Abraham, 2024) for their own ease. The moment a technology is created that encourages fast-paced data collection, decision-making, and execution without consulting experts on the local area or making a conscious effort to form a second opinion, it becomes a prominent example of the “slowing down” of military personnel who lack an understanding of the tools they use.
In cases where “AI is designed to perform higher-order cognitive tasks, like reasoning, that are normally reserved for humans” (Abraham, 2024), one may question how well it handles other considerations, such as dealing with classified information or determining how high in the decision chain AI should be used. A gap exists in our understanding of active states relying on AI and technology in their armed forces. Objectively, the global stage has yet to see how AI militarization could be a good thing. While observing various states at war is the only way to assess the influence of bad actors on the technology, the logical step is to establish international regulation before that happens. Current affairs, however, point only toward the erosion of international military politics and expectations, where “scholars claim that AI is the sine qua non, absolutely required for victory” (Abraham, 2024), and boundaries are redefined at the hands of artificial intelligence.
Where to Go from Here: Two Proposed Solutions
There are two attitudes toward resolving the militarization of AI on an international scale. One encourages the preservation of state legitimacy and adherence to an international order, with effective AI governance defined as “international cooperation” in which countries collaborate “to establish global AI usage standards and norms” (Marwala, 2023). The other encourages the standardization of AI to set a baseline of fairness and consistency. Cole et al. explain why an international order is necessary by pointing to “democratized access to destructive power,” which empowers “state and non-state actors alike to access and wield technologies—such as drone swarms and suicide drones—that were once reserved for only the most advanced militaries” (Cole et al., 2025), blurring the lines of who can or should have access to these technologies.
Marwala illustrates the opposing perspective: standardize AI as it becomes more integrated into society, whether or not that integration is a good thing. With “standardization ensuring AI systems’ consistency, dependability, and fairness while fostering innovation and competition” (Marwala, 2023), society can strip AI of its mystical taboo and turn it from an autonomous tool into one we can govern and manage. In the context of the armed forces, however, broad accessibility may matter less than establishing an international code. If standardization means handing AI tools to states to defend themselves against bad actors, is the global stage supposed simply to hope that those states will not engage in similar practices? This perspective seems too idealistic for the catastrophic reality of warfare. One may argue that, on the precipice of major destruction and the corruption of military and state relations, state actors cannot afford to forgo building a regulatory code for the safe and effective interpretation and use of AI.
With states like the US and Israel at the forefront of this innovation, and companies like Palantir pledging protection to their allies, a narrative of complacency emerges among those allies. This shift introduces new forms of digital colonialism, inefficiency in military personnel, the weakening of international norms, and threats to the overall stability of global power relations. Since this paper mainly addresses ethics and some policy reforms, establishing an international order seems a given; the harder questions concern how that standard would function and how states might ethically use AI in warfare to shape the general attitude toward AI as a tool in human affairs. Examples could include on-demand counselling to safeguard the mental and physical capacities of military personnel, or an in-ear supervising body ready to assist each member.
Conclusion
Palantir’s history and self-portrayal as a practitioner of ethical AI highlight the gaps between corporate claims of responsibility and the realities of privatized profit. Lavender, and state use of AI technologies in conflict more broadly, demonstrates the overreliance on these systems and the decline of active human thinkers. The result is a transformation from traditional state-centered control to private actors gaining command of military power through the newfound accessibility that AI technologies provide.
References
Abraham, Yuval. 2024. “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” +972 Magazine. https://www.972mag.com/lavender-ai-israeli-army-gaza/.
Booth, Robert. 2025. “Meta boss praises new US army division enlisting tech execs as lieutenant colonels.” The Guardian. https://www.theguardian.com/technology/2025/jun/25/meta-exec-us-army-enlistment.
Cole, Leah, Sam Weiss, Dauren Kabiyev, and Sarah Lober. 2025. “AI and Geopolitics: Global Governance for Militarized Bargaining and Crisis Diplomacy.” Belfer Center. https://www.belfercenter.org/research-analysis/ai-and-geopolitics-global-governance-militarized-bargaining-and-crisis-diplomacy.
Lushenko, Paul, and Keith Carter. 2024. “A new military-industrial complex: How tech bros are hyping AI’s role in war.” Bulletin of the Atomic Scientists. https://thebulletin.org/2024/10/a-new-military-industrial-complex-how-tech-bros-are-hyping-ais-role-in-war/.
Marwala, Tshilidzi. 2023. “Militarization of AI Has Severe Implications for Global Security and Warfare.” United Nations University. https://unu.edu/article/militarization-ai-has-severe-implications-global-security-and-warfare.
Palantir. 2025. “Palantir AI Ethics.” Palantir. https://www.palantir.com/pcl/palantir-ai-ethics/.
The Economist. 2024. “AI will transform the character of warfare.” The Economist. https://www.economist.com/leaders/2024/06/20/war-and-ai.