The Ethics of AI in U.S. Warfare

Written By: Lucie Kapustynski, Taimoor Khawaja, Ilyass Mofaddel, and Isabella Pliska 

April 1, 2026. The LexAI Journal

Introduction

By: Ilyass Mofaddel

Every century or so, technology changes what war is. Gunpowder ended the era of castle and armoured knight, redistributing power from fortified nobility to whoever could manufacture enough of it. The airplane transformed conflict from a contest of territory into one of total exposure. Nuclear weapons collapsed the logic of victory itself, making large-scale war between great powers too costly to execute. Each of these technologies was, in its time, described as the advance that would finally make war more precise, more limited, more rational, or potentially end the concept of war itself forever. Each of them, in practice, expanded the scale of destruction beyond what anyone had anticipated.

Artificial intelligence is being introduced with the same promises. It will reduce civilian casualties through precision targeting, accelerate decision-making and improve the quality of military intelligence. It will, in the language of those deploying it, make war more humane. The war in Iran — where AI-assisted targeting systems identified over a thousand strike coordinates within the first twenty-four hours of operations — is the first large-scale test of that claim.

Only the future can tell us how these systems will end up affecting war. However, it seems evident that there is something structurally different about AI that separates it from previous inventions. A cannon does not recommend a target. A bomber does not prioritize its own payloads, nor does a nuclear warhead generate an intelligence briefing. Yet AI does it all, and does it faster than the humans receiving its outputs can meaningfully evaluate them.

This piece examines four consequences of that shift. First, it examines how AI has redistributed accountability in military command, creating gaps that no legal or ethical framework currently fills. It then explores how the promise of algorithmic precision obscures the reality of algorithmic scale, how the systems making these decisions inherit the biases of those who built them, and how, beyond the battlefield, AI-generated imagery has opened a second front — one waged against the civilians whose democratic consent is supposed to constrain the use of military force in the first place.

The technology is new. The questions it raises are old. What has changed is the urgency with which they demand an answer.

Accountability

By: Taimoor Khawaja

Traditional warfare operates within a relatively clear chain of responsibility where political leaders authorize action, military commanders plan operations, and soldiers execute them. This hierarchy allows for accountability to be traced when errors occur. However, AI inserts an additional layer between human intention and military action: data is processed by algorithms, the system identifies or prioritizes targets, and humans may approve its recommendations rather than independently generate them. 

Reliance on AI-generated recommendations can influence human judgement and redistribute control. In such cases, humans function less as independent authorities and more as validators of machine suggestions. This creates an accountability gap. If a strike hits the wrong target, who is ultimately responsible? The commander who approved the strike? The engineers who designed the model? The institution that deployed it? Or the algorithm itself, which cannot be held morally or legally accountable? AI has conveniently provided governments with a “backdoor escape”: errors can be attributed to system outputs rather than human judgement.

The key issue at hand is not that AI literally makes the final decision on its own. Instead, the issue is that AI can reshape and steer the human decision so heavily that responsibility is diluted. The humans could simply say “the AI made me do it!” AI becomes an opaque decision-making layer between intention and action, and no one can fully explain why it makes certain decisions. For all we know, it could be influenced by deep patterns found in its training data.

Illusion of Precision

Governments often present AI warfare as more precise and calculated, and therefore more “ethical” and efficient in minimizing collateral damage. But upon digging deeper, this narrative is extremely misleading. 

Historically, drone warfare has reinforced this belief, with officials describing such systems as more precise than conventional bombing. However, precision at the level of individual strikes does not mean an overall reduction in harm. AI can drastically increase the rate at which targets are identified, which means that more strikes can be carried out; even if the individual strikes are less harmful, the overall result can still be a net increase in harm.
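
The arithmetic behind this concern can be made concrete with a minimal sketch. Every figure below is a hypothetical assumption chosen only to illustrate the scale effect, not an estimate drawn from any source cited in this article.

```python
# Illustrative only: hypothetical figures showing how per-strike "precision"
# can coexist with greater total harm once AI raises the tempo of targeting.

conventional_strikes = 50            # hypothetical strike count without AI targeting
harm_per_conventional_strike = 4     # hypothetical civilian harm per strike

ai_assisted_strikes = 400            # hypothetical: AI target generation raises the tempo
harm_per_ai_strike = 1               # hypothetical: each strike is individually "more precise"

total_conventional = conventional_strikes * harm_per_conventional_strike  # 200
total_ai_assisted = ai_assisted_strikes * harm_per_ai_strike              # 400

print(total_conventional, total_ai_assisted)  # lower harm per strike, higher harm overall
```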

Moreover, claims of precision often rely on selective metrics. Military assessments may focus on targeting accuracy or operational efficiency while overlooking the broader consequences on civilian populations. This presents a narrative that AI is a humanitarian advancement while obscuring its potential to intensify violence through scale and frequency. Hence, precision becomes less a measurable outcome and more a rhetorical tool that legitimizes the expansion of AI-driven warfare without fully accounting for all its effects. 

Algorithmic Bias in War

Beyond questions of responsibility and scale lies another critical issue: the data on which AI is trained. As mentioned before, machine learning models do not operate in a vacuum; they inherit the assumptions, biases, and limitations embedded in their training data. In civilian contexts, this has already been shown to produce discriminatory outcomes in areas such as policing and hiring. In warfare, however, the consequences are far more serious.

Algorithms are not neutral. They reflect the data they are trained on, the categories used to label people and objects, the political priorities of the institutions deploying them, and the worldview of those designing and funding them. Training datasets reflect years of historical patterns, surveillance, geopolitical priorities, or cultural assumptions that disproportionately focus on certain regions or populations. When these biases are encoded into targeting systems, they risk systemically misidentifying threats or representing particular groups as potential targets. For example, infrastructure that resembles military assets in training data may be incorrectly classified, increasing the likelihood of civilian sites being flagged for attack.

Errors in AI systems are not isolated. They can be replicated and amplified across multiple operations if not corrected. A biased model does not make a single mistake; it produces a pattern of mistakes that can harm entire populations. The opacity of AI systems makes these biases difficult to detect and challenge. This lack of transparency complicates accountability and correction, further reinforcing the risks outlined elsewhere in this article.

All these issues taken together point to a fundamental tension at the heart of AI warfare. While these systems promise greater efficiency, speed, and precision, they simultaneously blur the lines of responsibility, obscuring the true impact of military actions. 

Decision-Support Systems and The Humans in The Loop

By: Lucie Kapustynski

The Maven Smart System is a U.S. Department of Defense (DoD) automated system that functions as decision support, analyzing data and generating recommendations to help operational planners and commanders make decisions. The U.S. commander directing the war in Iran, Brad Cooper, has confirmed the use of AI tools and cites the fact that humans make the final decision as evidence of ethical accountability when using those systems (U.S. Central Command, 2026); however, recent studies exploring the effects of AI on decision-making complicate this claim, as reliance on generated intelligence accelerates and influences human judgment, redistributing control in decision-critical environments.

Cooper emphasizes the operational advantages of AI systems in warfare, stating that they accelerate the speed of commanders’ decisions (U.S. Central Command, 2026). This acceleration of decision-making, often referred to as decision compression, condenses processes that once unfolded over weeks for human operators to consider, monitor, and complete into seconds, forcing rapid judgments (Booth & Milmo, 2026). Decision compression in warfare transforms how military operations are planned and executed, and ultimately widens the psychological distance between decision makers and those experiencing the lived reality of their decisions.

Psychologists observe that humans working alongside automated systems exhibit automation bias, leading individuals to accept flawed or incomplete information produced by the system without critical evaluation (Courier, 2025). This tendency grows in fast-paced environments as individuals assume that the system generates information they lack and become reluctant to challenge what appears to be “sophisticated technology” (Courier, 2026). In effect, the use of AI in military command encourages a shift toward dependence on machine-generated recommendations rather than human-centred understanding and thinking.

Traditionally, militaries use the OODA loop (Observe-Orient-Decide-Act): a decision-making framework that focuses on navigating uncertainty. This concept emphasizes the orientation phase (analyzing and understanding information in context) as central to increasing awareness and reducing errors, which can lead to more informed judgment (Crowley, 2021); however, this structured approach becomes constrained with the use of automated systems.

Automated systems limit operators’ critical interpretation of information in the “orient” phase and, consequently, affect the assessment of risks and weighing of alternatives during the “decide” stage (Oktenli, 2026). When automated systems are used to guide attacks, these constraints undermine the human decision-makers whom those in power present as proof of ethical oversight, because decision compression limits critical evaluation of the system’s outputs.

Similarly to large-scale facial recognition systems, automated military systems use AI for image processing to suggest and prioritize “targets” (Jones, 2026). Yet on February 28, 2026, a U.S. missile strike on a school in Iran took the lives of around 170 people, mostly children (Amnesty International, 2026). A U.S. investigation attributes this devastation to outdated intelligence, while AI’s role remains unclear (Lane, 2026). If AI provided decision support, labeling a school as a “target” exposes its unreliability; if it did not, the U.S. response implies that automated systems are trained on the same outdated intelligence. Either outcome underscores the need for stronger safeguards on automated decision-support systems that risk outsourcing life-and-death judgments.

High-Value Contract Failures and The Chain of Accountability

The main software platform behind Maven is Palantir, which used Anthropic’s tools and code to build Maven until recent contract negotiations (Clark, 2026). Citing the ethics of AI use for mass surveillance and lethal autonomous weapons, the CEO of Anthropic refused a demand to remove safety restrictions prohibiting such uses, stating that the company “cannot in good conscience” comply with the request, despite pressure from the DoD and a $200 million contract to adapt its models for military purposes (Anthropic, 2026). A few days later, OpenAI released its own agreement; while the statement reads as a refusal of mass surveillance, direct autonomous weapons, and high-stakes automated decisions, it also states that “The Department of War may use the AI System for all lawful purposes” (OpenAI, 2026), the very arrangement Anthropic declined because of “serious, novel risks to our fundamental liberties” (Anthropic, 2026). These developments have sparked a boycott movement, “quitGPT”, encouraging users to look into alternatives such as Claude, Gemini, and Perplexity AI that are not helping to “build killer robots”, and to maintain a society that protects “life—automatically and at massive scale” (Bregman, 2026; Anthropic, 2026).

Conclusion

By: Isabella Pliska

The erosion of human judgment in military decision-making does not end at the command level. As AI systems reshape how commanders process intelligence and authorize strikes, a similar issue occurs in the information environment surrounding conflict, one that targets not soldiers or commanders, but civilian populations scrolling through their phones. Take the most recent example in our current climate: the war in Iran. AI-generated imagery has become its own theatre of operations, and the supposed accuracy and speed with which governments have learned to exploit it should alarm anyone who believes that an informed public is a prerequisite for democratic accountability over war. Within the first two weeks of the conflict, the New York Times identified over 110 unique AI-generated images and videos on X, TikTok, and Facebook, viewed collectively millions of times (NYT, 2026). More specifically, the breakdown is as follows: 37 images falsely depicting active combat, 8 falsely depicting destruction, 5 falsely depicting war preparation, and 43 overt propaganda pieces (NYT, 2026). Importantly, this is not mere background noise; it is a structured information campaign.

Many scholars have been researching these events, which are becoming historically distinct not merely because of the volume of synthetic content but because of its quality. As Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar, stated: “Even compared to when the Ukraine war broke out, things now are very different. We’re probably seeing far more A.I.-related content now than we ever have before” (NYT, 2026). Unfortunately, for the average viewer, detection itself has become nearly impossible. Hany Farid, professor of digital forensics at UC Berkeley, argues that tips for spotting AI fakery from even months ago are now obsolete, as the telltale errors that once betrayed generated images have largely disappeared from current tools (CNN, 2026). When this occurs, the public loses its most basic tool for forming political opinion: the ability to trust what it sees.

This is precisely why synthetic imagery is so dangerous as a weapon. The Carnegie Endowment for International Peace has observed that “wartime contexts are defined by competition for narrative control among conflict participants, and fragile peace processes can be easily disrupted by rapidly spreading rumours, with consequential decisions taken based on whatever information is available” (WITNESS, 2025). The key phrase here is whatever information is available, and in the current conflict, much of what is available is ultimately fabricated. Iran’s near-total internet blackout created an information void that AI content rushed to fill, with the majority of AI videos pushing pro-Iranian narratives: burning Gulf cities, besieged American aircraft carriers, and fabricated footage of US soldiers in apparent surrender. This imagery then allows Iran to manufacture “a sense that this war is more destructive and maybe more costly for America’s allies than it might actually be” (NYT, 2026). It is becoming increasingly evident that the underlying goal is not to report the war, but rather to make the war feel unwinnable to those watching from abroad, and to harden resolve among those watching from within Iran. 

Another aspect of this issue is social media platforms’ responses, specifically how the measures taken are inadequate by design. Though AI tools can embed watermarks labelling content as synthetic, the Times found that “those are easy to remove or obscure,” and only a few videos identified in their review contained them (NYT, 2026). X announced that accounts posting unlabelled AI imagery of armed conflict would lose revenue-sharing eligibility for ninety days – a policy that sounds responsive but is structurally toothless, given that the most influential Iranian-linked accounts “appeared far more focused on spreading their messages than making money” (NYT, 2026). What’s more, financial disincentives do not deter actors whose currency is influence, not income. Researchers studying cognitive warfare have warned that “as generative AI tools become more sophisticated and accessible, they will contribute to the rapid and uncontrollable creation of convincing false narratives, making it increasingly difficult for the public to discern fact from falsehood” (Lahmann et al., 2025). Platforms are treating a structural problem as a conduct violation, and the gap between those two framings is where the real damage occurs. 

Lastly, the consequences also extend beyond opinion formation. The Modern War Institute at West Point has documented how a single AI-generated image of a false Pentagon explosion “caused a rapid and dramatic drop in the US stock market,” demonstrating that synthetic content can destabilize critical systems well beyond the battlefield (Coombs, 2024). We are not facing two separate problems, but rather two expressions of the same shift, as AI narrows the window for human judgment at the leadership level, and at the public level, it seems to pollute the information through which citizens exercise democratic oversight of those commands. Thus, until governments and platforms treat this as the structural crisis it is, rather than a moderation problem or a conduct violation, the synthetic battlefield will continue to shape the real one.

References
  1. Amnesty International. (2026, March 18). USA/Iran: Those responsible for deadly and unlawful us strike on school that killed over 100 children must be held accountable. https://www.amnesty.org/en/latest/news/2026/03/usa-iran-those-responsible-for-deadly-and-unlawful-us-strike-on-school-that-killed-over-100-children-must-be-held-accountable/ 

  2. Booth, R., & Milmo, D. (2026, March 3). Iran War Heralds era of AI-powered bombing quicker than “speed of thought.” The Guardian. https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought

  3. Bregman, R. (2026, March 4). Quit CHATGPT: Right now! your subscription is bankrolling authoritarianism | Rutger Bregman. The Guardian. https://www.theguardian.com/commentisfree/2026/mar/04/quit-chatgpt-subscription-boycott-silicon-valley

  4. Oktenli, B. (2026, February 21). Decision compression and escalation risk in AI-enabled military command and control: An operational analysis of the ERAM framework.

  5. CNN. (2026, March 11). Fake, AI-generated images and videos of the Iran war are spreading on social media. CNN Politics.

  6. Crowley, B. (2021). The OODA Loop. The Decision Lab. Retrieved March 27, 2026, from https://thedecisionlab.com/reference-guide/computer-science/the-ooda-loop

  7. Jones, N. (2026, March 5). How ai is shaping the war in Iran – and what’s next for future conflicts. Nature News. https://www.nature.com/articles/d41586-026-00710-w

  8. Lane, P. (2026, March 26). Iran school strike result of weakened civilian protections: Opinion. The Tennessean. https://www.tennessean.com/story/opinion/contributors/2026/03/26/hegseth-iran-school-strike-civilian-harm-mitigation/89227859007/

  9. Linvill, D., Magelinski, T., Doroshenko, L., & Warren, P. (2025). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. 

  10. Madras Courier. (2026, March 24). The illusion of human control in AI-Accelerated Warfare. https://madrascourier.com/opinion/the-illusion-of-human-control-in-ai-accelerated-warfare/

  11. OpenAI. (2026, February 28). Our agreement with the Department of War. https://openai.com/index/our-agreement-with-the-department-of-war/

  12. PMC. (2025). The fundamental rights risks of countering cognitive warfare with artificial intelligence. European Journal of Law and Technology.

  13. Schafer, B., & Ziegler, S. (2026, March 29). “Cascade of A.I. Fakes About War with Iran Causes Chaos Online”. The New York Times. https://www.nytimes.com/interactive/2026/03/14/business/media/iran-disinfo-artificial-intelligence.html 

  14. Anthropic. (2026, February 26). Statement from Dario Amodei on our discussions with the Department of War. https://www.anthropic.com/news/statement-department-of-war

  15. The Decision Lab. (2025). Automation bias. Retrieved March 27, 2026, from https://thedecisionlab.com/biases/automation-bias

  16. U.S. Central Command (2026, March 11) “Update from CENTCOM Commander on Operation Epic Fury” [Video]. YouTube. https://www.youtube.com/watch?v=xlTyju2XC3E 

  17. WITNESS. (2025). Iran-Israel AI war propaganda is a warning to the world. Carnegie Endowment for International Peace. https://carnegieendowment.org

AI, Authorship, and the Changing Face of Banking

Written By: Zainab Khalil and Lali Sreemeuntoo 

Edited By: Clara Lee and Lewon Lee 

March 31, 2026. The LexAI Journal.

Zainab Khalil: Recently, I had the opportunity to attend discussions on the banking and accounting industries. Across two different conversations in two different financial contexts, a theme kept emerging: the shift in how Large Language Models are becoming integrated into everyday financial operations, shaping how people interact with money. 

In banking, customer inquiries are answered instantly, financial advice can be generated on demand, and internal workflows are accelerated by systems that draft, summarise, and interpret information at scale. This increases efficiency and output, and reduces the need for online customer service representatives. As McKinsey (2025) highlights, AI-driven systems are becoming integrated tools in communication, shaping tone and financial decision-making processes. In accounting, similar tools are being used to automate reporting and streamline analysis. Reports by the New York State Society of CPAs (2024) highlight that this allows for convenience and efficiency, where interactions become more responsive to individual needs, which raises an important question of when human input should end and when AI involvement should begin.

When it comes to discussions on AI use in financial contexts, one point that is made clear is that people are still valued for reviewing, verifying, and standing behind what’s produced. This is what researchers call a “human in the loop” approach, where AI systems are embedded within cycles of human input, feedback, and correction (Finextra, 2025). Human oversight of AI is especially important in the financial context because errors or biases in AI outputs can propagate rapidly, affecting both regulatory compliance and client trust. After all, according to Finextra (2025), there are moments when AI outputs move too quickly through these systems, which makes supervisory human involvement particularly valuable.

The importance of human intervention is further supported by Forbes (2024), which notes that traditional notions of ownership rely on clear intent and identifiable contribution, which become harder to define when creation is distributed across human and machine interactions. For this reason, when AI operates without human intervention, it raises numerous complicated questions about legal liability and ethical responsibility, and it challenges traditional frameworks for intellectual property and professional accountability.

And since AI tools are also often trained on private, highly specific data sets that align closely with institutional knowledge and client needs, this places importance on how data is handled, how outputs are influenced by the training models, and the visibility that exists within these processes. Policy research highlights that government structures are still evolving in response to these developments, working to keep pace with these changes and to balance innovation with protections against misuse or overreliance on non-transparent AI systems (Forbes, 2024).

Lali Sreemeuntoo: After hearing Zainab’s thoughts on the growing role of AI in banking, I found myself less concerned with what these systems can do, and more with what may be lost in the pursuit of efficiency. Across financial institutions, artificial intelligence is increasingly celebrated for its ability to accelerate workflows, personalize services, and streamline decision-making. However, what happens when efficiency becomes the priority? Are we unknowingly accepting any risks that may come with it?

The appeal of AI in banking is clear. Financial institutions are constantly under pressure to process large amounts of data in a short amount of time, while also remaining competitive in an increasingly digital economy. AI delivers on these demands. It automates customer service, assists in credit decisions, and detects fraud at a scale that would be impossible for human workers alone. In fact, recent reports suggest that the majority of financial institutions are rapidly adopting AI precisely for these efficiency gains, particularly in operations and customer engagement (OSFI and FCAC).

However, this efficiency does not remain impartial. It reshapes the way decisions are made and subsequently how errors arise. 

One of the main risks of AI in finance is that mistakes will no longer occur slowly or in isolation. Instead, they can scale instantly. AI systems trained on flawed or incomplete data may produce biased or inaccurate outcomes, from unfair loan decisions to misleading financial advice (Ridzuan et al). Unlike human errors, which are often contained and context-sensitive, AI-driven mistakes can be replicated across thousands of interactions within seconds. As a result, the very speed that makes AI attractive also amplifies its potential harm.

In fact, experts have warned that over-reliance on AI for financial planning can produce significantly inaccurate results, particularly when systems fail to account for individual circumstances or long-term complexity (Chopra 307). What appears efficient on the surface can actually obscure numerous qualities that remain central to responsible financial decision-making.

Furthermore, the integration of AI introduces further layers of opacity. Many AI systems operate as “black boxes”, producing outputs without clear explanations of how those conclusions were reached (Chopra 309). In fact, the majority of financial executives ‘cannot explain how specific AI model decisions or predictions are made,’ highlighting the difficulty of effectively overseeing rapidly evolving AI systems (Chopra 310). This creates challenges for both the customers and the institutions deploying them. 

In response, financial institutions often point to the presence of human oversight. The “human in the loop” model suggests that while AI can generate recommendations, humans remain responsible for reviewing and validating outcomes. However, this reassurance may be more fragile than it appears. As AI systems become faster and more integrated, human involvement risks becoming superficial. In high-speed environments, pausing for careful review can feel like a holdup, undermining the efficiency these AI systems are designed to achieve.

This reveals a point of conflict. Efficiency and accountability are not always aligned. The more AI is embedded into financial systems, the easier it becomes to rely on its outputs without questioning them. Over time, this can erode the role of human judgment, replacing deliberation with automation. 

None of this is to suggest that efficiency should be abandoned; instead, it should be understood as one priority among many. After all, financial institutions should invest in faster systems, but they should also strengthen and sustain human involvement.

In banking, the cost of getting things wrong is not measured in seconds saved, but in trust lost.

References
  1. Chopra, Puneet. “Ethical Implications of AI in Financial Services: Bias, Transparency, and Accountability.” International Journal of Scientific Research in Computer Science Engineering and Information Technology, vol. 10, no. 5, Oct. 2024, pp. 306–314. DOI:10.32628/CSEIT241051017. CC BY 4.0. 

  2. Finextra. (2025). From human-in-the-loop to human-on-the-loop: How AI agents are redefining finance. https://www.finextra.com/the-long-read/1335/from-human-in-the-loop-to-human-on-the-loop-how-ai-agents-are-redefining-finance

  3. Forbes Finance Council. (2024). The role of human oversight in AI-driven financial services. Forbes. https://www.forbes.com/councils/forbesfinancecouncil/2024/06/26/the-role-of-human-oversight-in-ai-driven-financial-services/

  4.  McKinsey & Company. (2025). How finance teams are putting AI to work today. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/how-finance-teams-are-putting-ai-to-work-today 

  5.  New York State Society of CPAs. (2024). Firms report on who their humans in the loop are when using generative AI. https://www.nysscpa.org/news/publications/the-trusted-professional/article/firms-report-on-who-their-humans-in-the-loop-are-when-using-generative-ai-071724

  6. Office of the Superintendent of Financial Institutions Canada, and Financial Consumer Agency of Canada. Uses of Artificial Intelligence in Federally Regulated Financial Institutions: Risks and Challenges. Government of Canada, 2023,
    https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/osfi-fcac-risk-report-ai-uses-risks-federally-regulated-financial-institutions.

  7. Ridzuan, et al. “Artificial Intelligence in Finance: Opportunities and Risks.” Information, vol. 15, no. 8, 2024, https://www.mdpi.com/2078-2489/15/8/432.




AI and Admissions: Examining the LSAT in the Age of Artificial Intelligence

Written By: Grace Blumell

March 8, 2026. The LexAI Journal

A few weeks ago, I received an email explaining that beginning in August 2026, almost all LSAT exams will take place in person, with exceptions for medical accommodations (Krinsky 2026). I was quite surprised by this announcement, because I had planned on taking the LSAT in August digitally and will now do so in a testing center. The digitally formatted testing was a result of the global pandemic in 2020, so since the reversion to the original testing method did not occur until 2026, I assumed AI was the largest contributing factor. However, AI was not directly referenced as a reason for the LSAT changing, rather a broader reason was provided concerning alleged cheating within the last few years (Krinsky 2026). Nevertheless, I was curious to better understand how AI can be used ethically to encourage study skills for the LSAT and when its use becomes detrimental to the future of the legal field. This article explores studying for the LSAT, the personal statement on one’s application, and AI in the admissions process, arguing that AI can help combat bias and inequality in the process, but close human oversight is necessary to maintain the authenticity of the law school admissions process.

AI has become increasingly effective in aiding students in studying, and these benefits can similarly help while preparing for the LSAT. The popular test prep resource Kaplan has a feature where one can work with AI to develop a 90-day study plan (Kaplan n.d.). When choosing AI tools to help with studying for the LSAT, Kaplan suggests “technology should complement, not replace expert instruction” (Kaplan n.d.). AI can even proctor a practice test and “simulate LSAT test-day conditions with webcam monitoring, timing enforcement, and automated flagging” (Kaplan n.d.). While AI can help replicate some testing constraints or fill gaps in studying, these tools should not be used as a shortcut because they are inaccessible during the real LSAT. Kaplan emphasizes that while AI can “personalize LSAT practice,” sometimes a tutor can offer more direct feedback that can improve a student’s overall score (Kaplan n.d.). While I continue with my LSAT studies, I will maintain a strategy that uses AI in logistical planning for study time and testing, but rely on practice exams and videos from certified instructors to prepare for the test in a way that is ethical and reliable. 

Personal statements in law school applications demonstrate an individual’s story, highlighting significant periods of one’s life, whether overcoming barriers or demonstrating persistence. Therefore, the use of AI in personal statements can be extremely concerning. Alarmingly, ChatGPT appears to be “accelerating at a pace that outstrips the advancement of detection tools” (Lowry 2023a). The inconsistency in accurate detection could have devastating consequences if a mistake is made and someone is undeservingly accused. Fortunately, the LSAT writing sample demonstrates someone’s writing abilities, but comparing this to one’s personal statement to check for authenticity is not a reliable method due to significant differences in timing and tone. Lowry argues that AI cannot be banned because this would be difficult to enforce, and that instead, encouraging applicants to share their authentic voice ultimately strengthens the credibility of personal statements (Lowry 2023b). I agree with this approach because someone could be wrongly accused of using AI in their personal statement due to the inability of detection systems to reliably identify AI-generated text. Promoting genuine human experience and authenticity in the personal statement of one’s law school application supports a system centered on honesty and personal expression.

An article written on the LSAC blog in 2023 explains that many AI tools used in the admissions process today can sort through applications to assist with acceptances and rejections. Consequently, the ethics of using AI in this manner could potentially introduce new biases into the selection process (Lowry 2023a). The article ultimately argues that close human oversight is necessary to ensure that AI does not incorporate unintended bias into its decisions. AI has the potential to make admissions more fair by reducing certain human biases, but this requires humans to continue playing a central role in evaluating applications and ensuring that artificial intelligence remains equitable in its use.

Due to AI’s ability to pass law school admissions and licensing exams, concern has arisen about the future of legal career opportunities. Applications to law school have risen by more than 40 percent within the last two years, demonstrating a continued desire to work in the legal field (Olson 2026). AI is likely to impact the authenticity of testing in law school and could extend to admissions testing. Lorber suggests that schools revert to in-person testing to ensure authenticity in assessments (Lorber 2025). However, he also acknowledges that handwritten and in-person examinations do not accurately represent the digital conditions of the modern workplace (Lorber 2025). While the LSAT did not specifically state that the change from digital to in-person testing was due to AI use, artificial intelligence contributes to broader concerns about cheating in academia, and moving to an in-person testing environment reduces opportunities to misuse such technologies while ensuring a controlled testing environment.

In conclusion, the increasing use of AI in education and standardized testing raises important questions about authenticity, equality, and oversight in the law school admissions process. The LSAT’s return to in-person testing may not be solely a result of AI, but the technology contributes to growing concerns about academic integrity and misuse during remote testing. At the same time, AI presents meaningful opportunities when used responsibly, such as helping students prepare for exams and potentially reducing human bias in admissions decisions. Law schools will face challenges moving forward as they attempt to balance technological innovation with the human judgment that remains essential to the legal profession. As AI continues to evolve, transparency, ethical guidelines, and strong human oversight will be necessary to ensure that the law school admissions process remains fair and authentic.

References
  1. Kaplan. n.d. “How to Craft a 90-Day, AI-Powered LSAT Study Plan.” Kaplan. https://www.kaptest.com/study/lsat/lsat-study-plan-ai/?srsltid=AfmBOorLxVM8dSEGdzaScJGAOkEbtpiRNNQbBvjkss03YgSRFGeJJg38

  2. Krinsky, Susan L. 2026. “Evolving How We Deliver the LSAT to Increase Test Security and Test Taker Success.” LSAC. February 11, 2026. https://www.lsac.org/blog/evolving-how-we-deliver-lsat-increase-test-security-and-test-taker-success.

  3. Lorber, Pascale. 2025. “Generative AI, Law Schools and Assessment: Where Next?” The Law Teacher 59 (3): 372–390. https://doi.org/10.1080/03069400.2025.2535822.

  4. Lowry, Troy. 2023a. “The Good, the Bad, and the Ugly of AI and Admissions.” LSAC. November 9, 2023. https://www.lsac.org/blog/good-bad-and-ugly-ai-and-admissions.

  5. Lowry, Troy. 2023b. “ChatGPT, Law School Application Personal Statements, and the LSAT Writing Sample.” LSAC. August 31, 2023. https://www.lsac.org/blog/chatgpt-law-school-application-personal-statements-and-lsat-writing-sample.

  6. Olson, Elizabeth. 2026. “Interest in Law School Is Surging. A.I. Makes the Payoff Less Certain.” The New York Times, January 24, 2026. https://www.nytimes.com/2026/01/24/business/dealbook/law-school-ai.html.

Systems Thinking: The Effect of AI Data Collection on the Environment and Communities

Written By: Zainab Khalil,  Lewon Lee, and Lali Sreemuntoo

Edited By: Clara Lee

March 1, 2026. The LexAI Journal

Introduction 

By: Zainab Khalil

Discourse surrounding artificial intelligence (AI) often focuses on innovation, economic growth, and risks. This usually ignores that AI is a materially intensive system that depends on water, energy, and physical infrastructure to run. As Crawford (2021) highlights in Atlas of AI, AI is built upon extractive supply chains, energy-intensive computation, and infrastructure. Research in digital sustainability also highlights that AI depends on electricity, water, and global logistics networks (The Shift Project, 2019). Evaluating AI’s environmental impact therefore requires a systems thinking approach, to show how it is embedded in, and is reshaping, environmental systems.

A systems thinking framework views problems as complex networks of interconnected parts linked through feedback loops and resource flows, examining how infrastructure, markets, government systems, and communities interact.

For example, data centers consume significant amounts of electricity and water, particularly for cooling, contributing indirectly to greenhouse gas emissions. The International Energy Agency (2023) highlights that data center electricity demand continues to grow rapidly with expanding services. On top of that, training large machine learning models further increases energy demand, which further increases carbon emissions. 

Although AI can improve efficiency and reduce resource use per task, these gains may lead to higher overall consumption, a pattern known as the rebound effect, as noted by Ertel and Bonenberger (2025). This happens because efficiency lowers operational costs, which can encourage greater production and use. Looking at AI through this lens changes the way its impacts should be assessed, as environmental costs are built into the way the system is designed, scaled, and deployed.
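
A minimal sketch of the rebound effect described above, using purely hypothetical numbers rather than figures from Ertel and Bonenberger, shows how a per-task efficiency gain can still produce higher total consumption:

```python
# Illustrative only: hypothetical figures showing a rebound effect, where a
# per-task efficiency gain is outweighed by growth in total usage.

energy_per_task_before = 1.0      # hypothetical energy units per task
tasks_before = 1_000_000

energy_per_task_after = 0.5       # the task becomes twice as efficient...
tasks_after = 4_000_000           # ...but cheaper tasks invite far more usage

total_before = energy_per_task_before * tasks_before   # 1,000,000 units
total_after = energy_per_task_after * tasks_after      # 2,000,000 units

print(total_after > total_before)  # True: total consumption rises despite efficiency
```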

AI Data Centres and their Environmental Impacts 

By: Lewon Lee

Whenever discussions of AI ethics arise, one of the most controversial aspects concerns AI’s environmental impact, in particular how clean water consumption and carbon emissions are associated with data centres and model training. However, this issue is frequently riddled with misleading claims, which lead to forced regulations that ultimately undermine other initiatives taken to regulate AI. For instance, in response to pressure to regulate AI’s environmental consequences, some of which was based on misinformation, policymakers added environmental provisions to the EU AI Act that were not originally included. According to Castro (2024, pp. 13–14), these additions caused tension with other obligations within the AI Act, such as the elimination of bias in AI models. As such, when learning about AI’s environmental impact, even well-intentioned misinformation can cause other ethical issues that would otherwise have been avoidable.

One misconception that should be addressed is the view that AI development is easily generalizable or quantifiable in environmental terms. In reality, it is difficult to create an accurate measurement of the emissions created by AI development, as different AI systems require different computational resources and have varying levels of sophistication (Castro 2024, p. 2). In this sense, there is no “one-size-fits-all” metric to see how many emissions these models emit. For example, an AI model that classifies text is far less computationally intensive than a model that generates an image, thus leading to different energy costs (Castro 2024, p. 6). To view these two AI models with vastly different energy costs as equally bad for the environment would be misleading and obscures meaningful distinctions that are necessary for responsible regulation.

A second misconception concerns the idea that training an AI model is what drives the high energy costs of AI development. In reality, the largest contributor to energy costs is a process called “inference,” in which a trained model generates outputs in response to user inputs (Castro 2024, pp. 4–5). After all, training a model is a one-time cost, whereas inference causes AI models to consume energy continuously over time and is responsible for roughly 65 percent of an AI system’s energy consumption (Castro 2024, p. 5). Therefore, to address energy concerns, we may need to begin by addressing how to reduce the emissions created by inference.
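
To illustrate why ongoing inference can dominate a one-time training cost, here is a minimal sketch. The per-query energy, traffic, and training figures below are invented assumptions, tuned only so the resulting share lands near the roughly 65 percent that Castro cites; they are not measurements from any source.

```python
# Illustrative only: hypothetical figures contrasting a one-time training cost
# with the cumulative energy of serving inference over a model's lifetime.

training_energy_mwh = 3_000          # hypothetical one-time training cost (MWh)
energy_per_inference_kwh = 0.003     # hypothetical per-query energy (kWh)
queries_per_day = 5_000_000          # hypothetical daily traffic
days_in_service = 365                # one year of deployment

inference_energy_mwh = energy_per_inference_kwh * queries_per_day * days_in_service / 1_000
inference_share = inference_energy_mwh / (inference_energy_mwh + training_energy_mwh)

print(f"{inference_energy_mwh:.0f} MWh from inference, {inference_share:.0%} of the total")
# With these assumed figures, ongoing inference accounts for roughly two thirds
# of lifetime energy, echoing the ~65% share cited above.
```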

Still, it would be inaccurate to dismiss environmental concerns altogether. Many newly developed models prioritize performance and capability over energy efficiency, often treating efficiency as a secondary consideration (Castro 2024, p. 8). Castro remains optimistic, believing that developers will eventually focus on optimization once there is little room for improvement in other aspects of AI development (Castro 2024, p. 8). But it remains a recurring issue that AI development in its infancy causes significant energy consumption, even if its consumption levels are exaggerated by the public. In fact, in 2024 the International Energy Agency reported that data centres account for 0.6% of global carbon emissions, with projections to increase, which threatens decarbonization targets under the Paris Agreement (Xiao et al. 2025, p. 2). Beyond energy consumption, data centres also contribute to water scarcity through their use of direct water-cooling systems and indirect procurement, which can place significant strain on local water supplies (Xiao et al. 2025, p. 2).

As Xiao et al. (2025) show, water consumption in data centres is determined by two primary factors: total energy use and the water intensity of cooling systems (Xiao et al. 2025, pp. 2–5). Issues of water scarcity are therefore shaped by the locations of these data centres, since some cities rely on hydropower for their energy (Xiao et al. 2025, p. 5). In regions largely dependent on hydropower, which already consumes large amounts of water through evaporation, the added water demand of data centres can exacerbate existing strain and lead to potential water scarcity (Xiao et al. 2025, p. 5). So while installing AI servers in the most water-intensive infrastructures may lead to significant reductions in carbon footprints, it can also reduce access to usable water (Xiao et al. 2025, p. 5). For this reason, establishing data centres in states like California, Nevada, Arizona, Utah, Washington, New Mexico, Colorado, Wyoming, Oregon, or Montana would affect many of the communities living there, as they already deal with severe water scarcity (Xiao et al. 2025, p. 5). On the other hand, establishing data centres in areas like Texas, Nebraska, and South Dakota would be far more suitable, as these states have abundant renewable energy potential that aligns with decarbonization initiatives and does not threaten communities’ water supplies (Xiao et al. 2025, p. 5). For this reason, the location of data centres is an important factor in any ethical framework that seeks to prevent environmental damage and harm to communities, as the rough sketch below illustrates.
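
The sketch below decomposes data-centre water use into a direct (on-site cooling) and an indirect (embedded in electricity generation) component, each scaling with total energy. The intensity values are placeholder assumptions for illustration only, not figures from Xiao et al. (2025).

```python
# Illustrative only: a toy decomposition of data-centre water use into direct
# (on-site cooling) and indirect (embedded in electricity generation) parts.
# The intensity values are hypothetical placeholders, not figures from Xiao et al.

energy_use_mwh = 100_000              # hypothetical annual electricity consumption
cooling_water_l_per_mwh = 1_800       # hypothetical on-site cooling-water intensity
grid_water_l_per_mwh = 3_000          # hypothetical water intensity of the local grid
                                      # (hydropower-heavy grids tend to sit higher)

direct_water_l = energy_use_mwh * cooling_water_l_per_mwh
indirect_water_l = energy_use_mwh * grid_water_l_per_mwh
total_water_l = direct_water_l + indirect_water_l

print(f"{total_water_l / 1e6:.0f} million litres per year")  # 480 with these inputs
```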

In conclusion, establishing an ethical and legal framework for the environmental impacts of AI data centres and data collection begins with addressing the misinformation surrounding the topic, since misinformation can create other conflicts and other unethical uses of AI. From there, we must understand how energy consumption varies from one model to another, which aspects of AI data collection drive that consumption, and where data centres are built and how their locations affect the communities around them. These two articles show that, in spite of the environmental and resource issues created by AI and its data collection, environmentally sustainable practices of data collection are plausible. However, creating a sustainably developed AI system requires significant effort, planning, and understanding of the consequences, not just of training, but also of inference and of the resources required to develop such systems. The effect of data collection on communities is discussed further in the following section.

The Resources That Communities Rely on and the Solution 

By: Lali Sreemuntoo

On the surface, artificial intelligence appears immaterial: a tool that can spontaneously generate the perfect solution to advance human capacity. Yet this misconception allows its users to remain ignorant of its reality. In truth, AI depends on extensive extraction of natural resources and on infrastructure development that affects communities around the world.

This can be observed in the expansion of AI infrastructure construction alone as a result of investment. Projects such as Stargate highlight how corporate priorities are increasingly aligned with scaling AI server networks to meet growing computational demand (Xiao et al. 2025). As a result, data centres to support these AI technologies must be built. Host communities are often chosen because they have abundant land and renewable energy potential, making them ideal for hosting energy-intensive AI infrastructure without immediate resource constraints (Xiao et al. 2025). While this concentration optimizes operational efficiency for companies, it also means that these communities are directly targeted for technological deployment. Although there is an expected opportunity for economic growth, communities can face strain on municipal planning and resource allocation, introducing potential challenges for local populations (Xiao et al. 2025). Furthermore, AI-related projects set up in such communities can exacerbate inequalities if benefits are unevenly distributed or if smaller local businesses are displaced by large-scale corporate operations (Crawford 2024). However, policymakers have begun exploring regulatory frameworks to ensure accountability and community impact assessment. The Artificial Intelligence Environmental Impacts Act of 2024 proposes collaboration between industry, academia, and government to establish standards for reporting and assessing AI deployment impacts (Crawford 2024). While mainly framed around environmental considerations, the Act exemplifies a broader movement toward transparency in AI operations and their social consequences. Additionally, public–private partnerships have been suggested as a mechanism to balance corporate growth with local development needs, including investment in infrastructure, workforce training, and monitoring systems to safeguard community interests (Xiao et al. 2025).

Ultimately, AI’s rapid expansion highlights a conflict within capitalist development: the communities whose resources enable technological growth are often the ones most burdened by its externalities. This raises a significant ethical question: can technological advancement and community well-being be structured to coexist in a reciprocal relationship, or will large-scale growth inevitably impose disproportionate costs on marginalized groups?

Conclusion 

By: Zainab Khalil

In conclusion, dismissing environmental concerns is problematic because of the rapid nature of AI expansion. While companies may benefit from economic growth and gains, it is often at the expense of communities that face environmental strain and an uneven distribution of benefits.

A balanced approach needs to be implemented in which policy prioritises accurate measurement, transparent reporting, and context-specific evaluation. As a result, environmental considerations need to be integrated with other ethical priorities by encouraging energy-efficient design and community impact assessments. A sustainable path forward therefore requires acknowledging AI’s effects and designing frameworks that align technological advancement with environmental protection and community well-being.

Bibliography
  1. Castro, Daniel. “Rethinking Concerns about AI’s Energy Use.” Center for Data Innovation, January 26, 2024. https://datainnovation.org/2024/01/rethinking-concerns-about-ais-energy-use/.

  2. Crawford, Kate. “Generative AI’s Environmental Costs Are Soaring and Mostly Secret.” Nature, vol. 626, no. 8000, Feb. 2024, p. 693, https://doi.org/10.1038/d41586-024-00478-x.

  3. Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t

  4. Ertel, W., & Bonenberger, C. (2025). Rebound Effects Caused by Artificial Intelligence and Automation in Private Life and Industry. Sustainability, 17(5), 2025. https://doi.org/10.3390/su17051988

  5. International Energy Agency. (2025). Energy and AI. IEA Report. https://www.iea.org/reports/energy-and-ai 

  6. The Shift Project. (2019). Lean ICT: Towards Digital Sobriety. The Shift Project. https://theshiftproject.org/app/uploads/2025/04/Lean-ICT-Report_The-Shift-Project_2019.pdf 

  7. Xiao, Tianqi, et al. “Environmental Impact and Net-Zero Pathways for Sustainable Artificial Intelligence Servers in the USA.” Nature Sustainability, vol. 8, no. 12, 2025, pp. 1541–1553, https://doi.org/10.1038/s41893-025-01681-y.

Protecting Likeness in the Age of AI: Denmark’s Response to Deepfakes

Written By:  Lucie Kapustynski, Taimoor Khawaja, and Ilyass Mofaddel

Edited By: Sarah Naveed

March 1, 2026. The LexAI Journal

Introduction

By: Lucie Kapustynski

Denmark is responding to the challenges of artificially-generated deepfakes with a proposed change to its national copyright law. This proposition seeks to address the growing harms of deepfakes by granting individuals legal control over the digital use of their likeness: body, voice, and other personal characteristics (Karttunen, 2026). Rather than creating a new policy, the proposal reframes personal identity as a form of intellectual property, introducing “personality rights” enforced through copyright mechanisms (Karttunen, 2026). This law extends “personality rights” to protect an individual for up to 50 years after their death, including digital replicas of artistic work (Karttunen, 2026). Through permitting parodies and satire, this framework maintains freedom of speech, while prohibiting the creation and distribution of realistic artificially-generated imitations without consent (Karttunen, 2026). Affected individuals would gain authority to demand the removal of unauthorized deepfake content and seek compensation for misuse (Karttunen, 2026). In comparison to current legislation in Canada, Denmark takes a transformative step in the age of generative Artificial Intelligence (AI), yet its copyright centered approach raises questions about how frameworks address the broader social harms of AI replication.

The Deepfake Explosion: Numbers that Demand Action

As Denmark’s Minister for Culture, Jakob Engel-Schmidt, warns, “if the platforms are not complying with [deepfake removal obligations, Denmark is] willing to take additional steps,” the government shows its commitment to protecting individuals’ control over their likeness (The Guardian, 2025). This proposed law reveals how platforms and online AI services currently fail to respond effectively to deepfake content, demonstrating the need for stronger regulatory intervention. Urgency is further shown in the scale of the problem: by 2025, deepfake files were projected to reach 8 million, up from about 500,000 in 2023, with annual growth rates nearing 900%, a rate that illustrates how synthetic media surpasses existing regulations (Deepfake Statistics, 2026).

Digital Identity as an Asset: How the Law Redistributes Power

With deepfake circulation growing increasingly difficult to contain and more accessible to citizens, Denmark’s proposal formally recognizes personal identity as a legally protected asset, which allows individuals to control and monetize the use of their likeness in AI contexts. Under this framework, unauthorized AI-generated replication is treated as a form of infringement, offering individuals authority to demand removal of content and to seek compensation when their identity has been exploited. By redistributing power from platforms to individuals, the law creates clearer boundaries for lawful AI use; however, as a copyright-focused measure, it may fail to address the personal and consent-related harms of the growing form of deepfake abuse.

Reducing Identity to Property

Underregulated policies have allowed deepfakes to escalate into a form of sexual violence, with harms that extend beyond financial loss to violate consent and result in psychological distress. German actress Collien Fernandes has spoken publicly about her experience with AI-generated deepfake pornography, describing it as “digital rape”, a characterization that depicts the violating nature of such content (Engl). While Denmark’s proposal is distinctive for granting individuals legal control over uses of their likeness, grounding these protections in copyright law risks reframing the human body as intellectual property, which equates likeness to a product that can be owned and licensed. In this way, the law prioritizes economic control over protection from abuse, reducing non-consensual use to a property violation.

Addressing the Escalating Threat to Identity Beyond Ownership

To fully protect individuals from exploitation, the approach should center the law on consent and personal autonomy. The proposition can balance lawful monetization and freedom of expression, while acknowledging and responding to non-consensual, harmful deepfakes as what they inherently are: violations of personal rights. Framing digital identity as property shifts focus away from preventing the real effects of abuse, treating the issue as a commercial dispute, and perpetuating the perception of the human body as a commodity. With consent at its center, Denmark could lead the way to creating human-centred policies for AI-generated content, serving as a powerful model for other nations to follow.

The Propertization of Likeness: Understanding Identity in the Digital Age

By: Taimoor Khawaja

Denmark’s proposed law updates copyright legislation to give individuals legal control over their likeness and persona, which can now be easily replicated using mainstream AI tools. These individuals would be able to demand the removal of unauthorized imitations and seek compensation for misuse. Rather than creating an entirely new regulatory system, Denmark is adapting copyright law because copyright already governs digital copying and distribution. In this sense, deepfakes are treated similarly to other forms of unauthorized digital reproduction, allowing lawmakers to rely on existing legal frameworks instead of inventing new enforcement mechanisms.

One might wonder: why do governments like Denmark prefer copyright over criminal or privacy law? Put simply, copyright has clear enforcement pathways and moves through the justice system much faster than criminal law. It avoids building a new regulator and is administratively efficient. Criminal law requires investigation, prosecution, and proof beyond a reasonable doubt, whereas copyright relies on civil remedies such as takedown notices and monetary damages. Privacy law, on the other hand, often depends on broader and more ambiguous standards of harm. Copyright provides a more structured and predictable framework.

Under this law, individuals gain control over how their likeness is reproduced online. For example, an artist can flag a YouTube video for using their song without permission, which can lead to the content being removed. In the United States, this process is formalized through the Digital Millennium Copyright Act (DMCA) of 1998, which allows rights holders to issue takedown notices to platforms hosting infringing material. The proposed Danish law is intended to function in a comparable way, resembling the notice-and-takedown systems already used by major platforms. Platforms already face liability for ignoring copyright claims, and likeness claims can be integrated into the same system. Automated detection systems also allow for faster removal than privacy complaints. This shifts the burden away from the Danish government and toward tech companies: Denmark passes the law, and companies are then responsible for implementing it and responding to claims. With this structure, people could request the removal of unauthorized deepfakes from online services through processes that companies already have in place.
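
To make the mechanism concrete, the sketch below models a notice-and-takedown workflow in which a likeness claim travels through the same intake path as an ordinary copyright claim. It is a minimal illustration only: the field names, the claim categories, and the review logic are hypothetical and are not drawn from the DMCA, the Danish proposal, or any real platform’s API.

```python
# Minimal, hypothetical sketch of a notice-and-takedown workflow.
# Field names and review logic are illustrative assumptions, not a real API.
from dataclasses import dataclass
from enum import Enum, auto


class ClaimType(Enum):
    COPYRIGHT = auto()   # e.g. an unlicensed song in a video
    LIKENESS = auto()    # e.g. an unauthorized deepfake of the claimant


@dataclass
class TakedownNotice:
    claimant: str          # person or rights holder filing the claim
    content_url: str       # where the allegedly infringing material is hosted
    claim_type: ClaimType
    description: str       # what is infringed: a work, or a face/voice
    sworn_statement: bool  # claimant attests the claim is made in good faith


def process_notice(notice: TakedownNotice) -> str:
    """Route a notice through one pipeline regardless of claim type.

    The point of the sketch: once likeness is treated as a protected asset,
    a deepfake complaint can ride on the copyright machinery platforms
    already operate (intake -> provisional removal -> possible counter-notice).
    """
    if not notice.sworn_statement:
        return "rejected: incomplete notice"
    # Provisionally remove the content pending any counter-notice,
    # mirroring how copyright takedowns are typically handled.
    return f"content at {notice.content_url} removed pending counter-notice"


if __name__ == "__main__":
    deepfake_claim = TakedownNotice(
        claimant="Jane Doe",
        content_url="https://example.com/video/123",
        claim_type=ClaimType.LIKENESS,
        description="AI-generated video using my face without consent",
        sworn_statement=True,
    )
    print(process_notice(deepfake_claim))
```

The design point is simply that, once likeness is recognized as a protected asset, a deepfake complaint can reuse infrastructure platforms already operate, which is precisely why Denmark reaches for copyright rather than a new regulator.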

Because copyright is internationally recognized and already embedded into platform governance, tech companies have established systems for handling such claims. Denmark’s proposal effectively plugs into these existing mechanisms rather than building a new regulatory body from scratch. At the same time, it reflects how copyright is expanding beyond protecting artistic works toward addressing digital reproductions more broadly, including, in this case, representations of human likeness. Copyright is also globally harmonized through international agreements, which may make cross-border enforcement more feasible than relying on purely domestic personality or privacy claims.

However, framing likeness within copyright also raises conceptual concerns. When identity is protected through intellectual property law, it can normalize the idea that one’s persona is something that can be licensed, transferred, or inherited. The harm is then framed as unauthorized copying rather than as a violation of dignity or consent. This economic framing may not fully capture the social and psychological harms associated with deepfake abuse.

There are also practical uncertainties that remain unresolved. The law would need to define what counts as “likeness.” Does it include partial resemblance, voice modulation, or stylistic imitation? Public figures may face different standards than private individuals, especially where satire and parody are concerned. Automated removal systems, while efficient, also carry the risk of over-censorship, particularly for artistic works or political commentary that fall within protected exceptions.

In this way, Denmark’s approach is both innovative and imperfect. It offers a streamlined enforcement model that leverages existing copyright infrastructure, but it also reshapes how society understands identity in the digital age. Whether this balance between efficiency and personal autonomy succeeds will depend on how broadly likeness is interpreted and how carefully freedom of expression is preserved.

What about Canada? Legal Gaps in Regulating Deepfakes

By: Ilyass Mofaddel

In Canada, there is currently no specific law that directly criminalizes the creation or distribution of non-consensual deepfake images, particularly of adults, creating a significant legal gap. While the Criminal Code does address related issues, its application to deepfakes is limited and often ambiguous.

For example, section 162.1, which prohibits the non-consensual distribution of intimate images, applies to real images but does not clearly cover deepfakes because of its wording, which refers to “a visual recording of a person made by any means.” Some scholars have argued that this provision could be amended to include deepfakes.

A recent case in Ontario confirmed this problem. A man was charged under section 162.1 for digitally manipulating an image of his wife to make it appear she was naked, and then distributing it. The judge acquitted him, stating explicitly that while his actions were “morally reprehensible and, frankly, obscene,” they did not constitute a crime under section 162.1. Because the image was a digital manipulation and not an actual recording of her body, it failed to meet the strict legal definition of an “intimate image” in the Criminal Code (Wilson, 2025).

When it comes to minors, the law is more precise: section 163.1, which addresses child pornography, covers “composite” or fabricated images.

Some provincial legislation also addresses the topic. In Québec, for example, the Civil Code protects against unauthorized use of a person’s name, image, likeness, or voice. The tort of false light may also apply if a deepfake places someone in a highly offensive false context.

Other provinces, such as British Columbia and Saskatchewan, have laws covering non-consensual intimate images, some of which explicitly include altered or synthetic images.

A related federal bill came close to passing in 2024. Bill C-63, also known as the Online Harms Act, would have prevented platforms from hosting any form of non-consensual intimate content. The bill failed because of its second part, which provided that any federal or Criminal Code offence proven to be motivated by hatred could be treated as a standalone indictable offence carrying a maximum penalty of life in prison. This would have meant that a simple act of mischief, such as tagging or minor property damage, could theoretically have been enough for a court to impose a life sentence, as long as the offence was proven to be motivated by hatred.
Although this element of the bill sparked heated debate in Parliament, most parliamentarians agreed with the first part, which focused on banning non-consensual content shared on social media platforms. Bill C-63 explicitly listed deepfakes as an example of intimate content communicated without consent.

Although it would have been very useful, the bill focused only on sexual content. It did not address other cases likely to arise in the near future, such as an entire film generated by artificial intelligence that earns substantial revenue without the consent of the person depicted.

Overall, the bill would have targeted the social media platforms hosting deepfakes without criminalizing the creation of deepfakes itself, nor would it have created protections for people who want to trade on their voice or image. Still, it would have represented a meaningful step forward in the protection of personal data.

Conclusion 

In conclusion, the commercial use of voice and personal image data is challenging the classical distinction between personality rights and privacy rights. The challenge will only grow as biodigital data become more widely used, with concepts such as digital twins and AI-generated entertainment on the horizon, making legislation of this kind increasingly important. The Danish law, despite its flaws, is an instructive example, and Canada should be ready to look across the Atlantic when designing its own legislation on the matter.

References
  1. Engl, J. (n.d.). AI and Digital Violence Against Women. AI and Digital Violence Against Women – Zeitgeister – The Cultural Magazine of the Goethe-Institut. https://www.goethe.de/prj/zei/en/art/27176024.html

  2. Karttunen, S. (2026, January). At a glance EPRS | European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/ATAG/2026/782611/EPRS_ATA(2026)782611_EN.pdf 

  3. Keepnet Labs. (n.d.). Deepfake Statistics & Trends 2026: Key Data & Insights – Keepnet. https://keepnetlabs.com/blog/deepfake-statistics-and-trends

  4. The Guardian. (2025, June 27). Denmark to tackle deepfakes by giving people copyright to their own features. https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence

  5. Wilson, C. (2025). Sharing fake nude images not a crime under Criminal Code: judge. CTV News. https://www.ctvnews.ca/toronto/local/halton/article/man-accused-of-sharing-fake-nude-image-of-wife-did-not-commit-a-crime-ontario-judge-says/

Cap-and-Trade, or Capital and Trade-Off?: California’s Climate Law in the Age of Artificial Intelligence

Written by: Sophia Santa Ana

February 9, 2026. The LexAI Journal

California’s Climate Self-Image vs. Tech Self-Image

In recent years, California has climbed into position as a global model for progressive climate governance. With legislation like the Global Warming Solutions Act (AB 32), an expansive cap-and-trade program, and ambitious net-zero targets, the state is celebrated as a leader in the legal transition to a more sustainable economy, so much so that climate leadership has become integral to its political identity.

At the same time, California is home to the world’s most powerful technology industry: Silicon Valley is a geographic region, but also a political and economic force, housing start-ups and corporate headquarters that sustain the digital economy. This rapidly expanding sector is now increasingly fueled by artificial intelligence, with the technology widely framed as transformative and future-oriented, and ultimately – inevitable.

Despite overwhelming evidence that AI development is significantly intensifying pressures on the climate, including energy consumption, water use, and greenhouse gas emissions, the state has struggled to implement any consequential regulation to address and mitigate these environmental harms. This dynamic remains visible even as California begins to regulate AI in other domains. In 2025, the state enacted the Transparency in Frontier Artificial Intelligence Act (SB 53), which requires companies building advanced AI to publicly disclose safety standards and provide whistleblower protections. While this law represents an important step in governing AI, it focuses on safety and transparency rather than environmental impacts.

The veto of SB 1047 in 2024, which aimed to create AI oversight mechanisms but made no mention of climate accountability, remains a stark example of this pattern. The state may be willing to regulate AI on some fronts, but it continues to avoid holding the industry accountable for its ecological footprint.

This tension is often explained away through familiar narratives of delayed policymaking or aggressive industry lobbying; yet focusing primarily on these surface-level explanations risks missing the deeper issue at play. California’s approach to AI reveals a structural problem within climate law itself. The state exemplifies how climate law functions as a mechanism for managing industrial excess while preserving the core logics of limitless growth and capital accumulation.

The Environmental Footprint We Prefer Not to See

Artificial intelligence is frequently promoted as a novel solution to ongoing global challenges, including climate change. Its ability to predict outcomes and enhance efficiency has been widely praised, especially in progressive jurisdictions such as California. State authorities have integrated AI into wildfire management systems and urban planning efforts, which reinforces the perception that the technology belongs in a green, forward-looking economy.

However, beneath this optimistic narrative lies a far less sustainable reality. The rise of AI has introduced major and underacknowledged environmental harms, which are especially visible in the state that is the global center of AI development and deployment.

Training and maintaining large-scale AI models requires powerful servers operating around the clock. These servers are housed in data centers that demand enormous amounts of electricity, much of which is still generated through fossil fuels. For example, Google reported a 48 percent increase in emissions since 2019, while Microsoft disclosed a 29 percent rise since 2020, with both companies explicitly linking these increases to AI expansion. These figures sit uneasily alongside public climate pledges that are framed aspirationally and lack enforceable legal standards.

Energy use is only part of the picture: AI data centers also rely on water-intensive cooling systems to prevent overheating. Microsoft reported a 34 percent increase in water consumption in 2022 alone, directly tied to AI development. In a state like California, where droughts and water scarcity are persistent and compounding problems, this added pressure is particularly concerning. Yet environmental law does not require AI firms to disclose or account for their water usage. The harm exists, but it slips under the radar in legal and political discourse.

Climate Law Inside the Capitalocene

What links these harms is their shared legal invisibility. California presents itself as a climate leader through AB 32 and cap-and-trade, yet these frameworks respond to older industrial models. They impose no targeted limits on the tech sector and fail to address the ecological costs specific to artificial intelligence. Instead, AI remains largely self-regulated, guided by voluntary commitments and market optimism rather than enforceable restraints.

This gap becomes clearer through the lens of the Capitalocene. Rather than blaming humanity in general, the Capitalocene locates environmental crisis in capitalism’s demand for constant extraction and exploitation. Nature is treated as something to be optimized and monetized, not protected. Climate law has developed within this same logic. Tools like carbon markets do not stop pollution so much as absorb it into market relations, legalizing environmental harm as long as it stays within priced limits.

Why the Law Hesitates to Confront AI

California’s difficulty regulating AI’s environmental footprint is a matter of politics and economics, but it is also rooted in the structure of the law itself. Corporate law rewards expansion and shields executives from liability; property law commodifies land and resources; and trade and sovereignty regimes constrain regulation that might interfere with competitiveness. These structures limit climate law from within, making even ambitious frameworks hesitant when faced with powerful industries. AI does not break this pattern; it exposes it.

Recent CEQA reforms reinforce this tension. The 2025 expansion of exemptions for advanced manufacturing and data centers reduced environmental review for the very infrastructure that enables AI growth. While framed as promoting clean energy and development, the result is less scrutiny and diminished community oversight. Economic acceleration once again outweighs ecological accountability.

SB 1047, SB 53, and the Politics of Omission

These dynamics were visible in SB 1047. The bill focused on catastrophic and public safety risks posed by advanced AI models, positioning itself as a major step in AI governance. Yet its most revealing feature was what it ignored. Despite mounting evidence of AI’s strain on energy and water systems, environmental harms were absent from the legislation. Governor Newsom’s veto cited flexibility and competitiveness while ecological impacts remained unspoken.

The passage of SB 53 a year later confirmed this pattern. California acknowledged the need to govern AI, yet confined regulation to transparency and extreme risk. Environmental costs were again deferred. Governance moved forward, while ecological stakes stayed outside the frame.

When Climate Leadership Stops Short

Because of the California Effect, the state’s choices ripple outward. By excluding AI’s environmental impacts and weakening environmental review, California helps normalize the idea that certain ecological harms are acceptable in the name of innovation. This goes beyond a local policy failure as it reflects a deeper impasse within climate law.

As currently structured, climate law is reactive. It regulates harm only once it becomes visible and politically tolerable to address. AI exposes the limits of this approach by forcing a question climate law has long avoided: can ecological limits coexist with an economic system built on perpetual growth, especially when profit is at stake?

Moving Forward: The Question Climate Law Still Avoids

Artificial intelligence will not be the last technology to test these boundaries. As new industries emerge, their environmental costs will continue to be deferred in the name of progress.

California’s AI dilemma demands a reckoning. How long can innovation without limits excuse environmental harm? And, what would it take for climate law to stop managing damage and start saying no?

References 

  1. AB 32 Global Warming Solutions Act of 2006. (2018). California: California Air Resources Board. Retrieved from https://ww2.arb.ca.gov/resources/fact-sheets/ab-32-global-warming-solutions-act-2006

  2. AI has an environmental problem. Here’s what the world can do about that. (2024). UN Environmental Programme. Retrieved from https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about.

  3. Basseches, J. (Forthcoming). California Cap-and-Trade: History, Design (2020) Effectiveness in Contesting Carbon, William G. Holt (editor). Routledge. Retrieved from https://joshuabasseches.com/wp-content/uploads/2021/04/Basseches_California-Chapter-Final-Draft-1.pdf

  4. Besson, S. Sovereignty (2011). Oxford Public International Law. Retrieved from https://drive.google.com/file/d/1K5UFaSLdlvVYtM1fkr3ytOkP958ImT07/view

  5. Busiek, J. California has problems. AI can help solve them. (2024). California: The Regents of the University of California. Retrieved from https://www.universityofcalifornia.edu/news/california-has-problems-ai-can-help-solve-them.

  6. Haraway, D. (2015). Anthropocene, Capitalocene, Plantationocene, Chthulucene: Making Kin. Environmental Humanities, 6, 159-165. Retrieved from https://read.dukeupress.edu/environmental-humanities/article/6/1/159/8110/Anthropocene-Capitalocene-Plantationocene

  7. Elam, S. How California is using AI to snuff out wildfires before they explode. (2023). CNN US. Retrieved from https://www.cnn.com/2023/09/23/us/fighting-wildfire-with-ai-california-climate/index.html.

  8. Elon Musk: Newsom signing AB 1955 “Final Straw.” (2024) California, United States: California Policy Center. Retrieved from https://californiapolicycenter.org/elon-musk-newsom-signing-ab-1955-final-straw/.

  9. Friedman, M. The Social Responsibility of Business Is to Increase Its Profits. (1970). THE NEW YORK TIMES. Retrieved from https://www.nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html  

  10. Google Environmental Report. (2024). Google. Retrieved from https://www.gstatic.com/gumdrop/sustainability/google-2024-environmental-report.pdf

  11. Johnson v. M’Intosh, 21 U.S. (8 Wheat.) 543 (1823)

  12. Jones, B. How Silicon Valley is fueling California’s budget crisis. (2024). Virginia, United States: Politico. Retrieved from https://www.politico.com/news/2023/12/22/silicon-valley-california-budget-00132751.

  13. Keller, J. et. al. The US must balance climate justice challenges in the era of artificial intelligence. (2024). Washington D.C., United States: The Brookings Institution. Retrieved from https://www.brookings.edu/articles/the-us-must-balance-climate-justice-challenges-in-the-era-of-artificial-intelligence/.

  14. Kerr, D. AI brings soaring emissions for Google and Microsoft, a major contributor to climate change. (2024). NPR. Retrieved from https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change#:~:text=Food-,Google%20and%20Microsoft%20report%20growing%20emissions%20as%20they%20double%2Ddown,to%20all%20of%20their%20products.

  15. Kizielewicz, C. California’s AI Bill Veto Sparks Debate: CMU Experts Weigh In. (2024). Pittsburgh, Pennsylvania, United States: Carnegie Mellon University. Retrieved from https://www.cmu.edu/news/stories/archives/2024/october/californias-ai-bill-veto-sparks-debate-cmu-experts-weigh-in.

  16. Lessmann, C. and Kramer, N. The effect of cap-and-trade on sectoral emissions: Evidence from California. (2024.) Energy Policy, Volume 188.  Retrieved from https://doi.org/10.1016/j.enpol.2024.114066

  17. Moore, J. W. (2017). The Capitalocene, Part I: on the nature and origins of our ecological crisis. The Journal of Peasant Studies, 44(3), 594–630. https://doi.org/10.1080/03066150.2016.1235036

  18. Ring, E. How Do You Solve a Problem Like CEQA? (2023). California: California Policy Center. Retrieved from https://californiapolicycenter.org/reports/how-do-you-solve-a-problem-like-ceqa/.

  19. Samuelson, P. Legally Speaking: California’s AI Act Vetoed. (2025). Communications of the ACM, 68(3), 18-20. Retrieved from https://dl.acm.org/doi/pdf/10.1145/3710808

  20. SB 1047 Veto Message. (2024). Office of the Governor. https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf

  21. SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. (2024). California: California Legislative Information. Retrieved from https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047.

  22. Stout, L. The Shareholder Value Myth. (2013) EUROPEAN FINANCIAL REVIEW. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2277141

  23. Swinhoe, D. PG&E: 3.5GW of data center capacity in California’s connection pipeline over next five years. (2024). Saffron Hill, London: Data Centre Dynamics. Retrieved from https://www.datacenterdynamics.com/en/news/pge-35gw-of-data-center-capacity-in-connection-pipeline-over-next-five-years/.

  24. Turner, J. and Lee, N.T. Can California fill the federal void on frontier AI regulation? (2024). United States: Brookings. Retrieved from https://www.brookings.edu/articles/can-california-fill-the-federal-void-on-frontier-ai-regulation/.

  25. Verdecchia, R. et. al. A systematic review of Green AI. (2023). Wiley Periodicals LLC. Retrieved from https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1507.

  26. Vogel, D. Trading up and governing across: transnational governance and environmental protection. (1997). Journal of European Public Policy. Retrieved from https://doi.org/10.1080/135017697344064.

  27. White, J. How California politics killed a nationally important AI bill. (2024). Virginia, United States: Politico. Retrieved from https://www.politico.com/news/2024/10/01/newsom-silicon-valley-ai-safety-00181776.

  28. Why are all Tech Companies in California – Tech Utopia. (n.d.). About Financials. Retrieved from https://aboutfinancials.com/why-are-all-tech-companies-in-california/.

  29. Woods, D and Ma, A. The impact of California’s environmental regulations ripples across the U.S. (2022). NPR. Retrieved from https://www.npr.org/2022/09/09/1121952184/the-impact-of-californias-environmental-regulations-ripples-across-the-u-s.

When Machines Follow Rules But Not The Point: Inside The AI Alignment Problem

Written by: Lucie Kapustynski, Taimoor Khawaja, and Ilyass Mofaddel

January 25, 2026. The LexAI Journal

Introduction 

By: Ilyass Mofaddel

When something reminds us of some of our own attributes, we readily project others onto it as well. That is why many spiritual traditions pray to totems or believe that every living thing has a spirit: the rain or the moon would also have tempers, hands, and desires like ours, and would want offerings of food or clothing, like us. Many of us did the same thing when we were children. Have you ever looked at a cloud and thought it looked like a face? Or perhaps you once had a Tamagotchi as your best friend?

Anthropomorphization is even more tempting with artificial intelligence. This technology seems to reproduce attributes we thought were reserved for humans, such as the production of language. If we end up developing machines more intelligent than ourselves, we will need to ensure that they pursue goals we have chosen. Indeed, artificial intelligence differs from other technologies because of its agency. A nuclear weapon has immense destructive potential, but it will not decide when and where to deploy itself without human intervention.

On the other hand, if a human wants to deploy an artificial intelligence to achieve a goal, the artificial intelligence will have to interpret the request and make many decisions accordingly. We do not fully control, to put it mildly, what happens between the moment of the human request and the response the AI will give. If an intelligence has enough agency, it may even develop its own desires.

It is, however, difficult to imagine what these artificial intelligences might desire. It would be like asking Neanderthals to predict the purchasing trends of Homo sapiens in 2025. They would probably miss the mark. Even more so if we asked the same question about Martians made of phosphorus rather than carbon. These Martians would not only have a level of intelligence superior to ours, but would also be designed very differently from a biological standpoint, such that the two would have completely different experiences of reality.

Despite the difficulty of imagination, there seem to be subgoals that would converge regardless of the desires these machines might develop. This is what is called instrumental convergence. Whether the goal of an artificial intelligence is to end world hunger or reduce pollution, it will realize that the best way to achieve its goal is to become as powerful and wealthy as possible. It might then decide to destroy all cities to turn them into data centers, or to carry out pump and dump operations on the financial market, which would later help it achieve its primary goal. Of course, there could also be more benign scenarios, or much worse ones. In fact, we do not know very much.

The Core of the Alignment Problem

By: Taimoor Khawaja

The AI Alignment problem is the challenge of ensuring that AI systems pursue human-intended goals and values, even as they become more capable and autonomous. As AI gains the ability to plan and act independently, small misalignments between what humans want and what machines optimize for can scale into catastrophic outcomes. There is a real risk that AI might rationally conclude that humans are obstacles to its objective. 

Imagine an ant colony on a plot of land. If humans want to build a skyscraper on that plot of land, they will destroy the ant colony in a heartbeat. Humans don’t hate ants, but if ants block construction, they are removed without moral consideration. It is reasonable to conclude that a sufficiently advanced AI might treat humans the same way. Humans are not “evil”, and the ants are not “at fault”. Destruction is not motivated by malicious intent, but efficiency relative to an external goal. It is crucial to stress that this is not about “evil AI”, but rather about misaligned optimization and goals. 

One might think, “If humans created AI, then why are humans an obstacle by default?” The answer boils down to the fact that AI systems optimize objectives literally, not morally (morality is a powerful word here, and I will say more about it later). The confusion arises because humans implicitly assume that shared values are automatically understood, when in reality they are neither specified nor internalized by the system. Humans are currently the largest contributors to emissions, deforestation, and biodiversity loss. If an AI is tasked with improving the Earth’s environment, removing humans is an efficient solution from a purely instrumental perspective. This illustrates goal misspecification: the AI is doing exactly what it was told, not what was meant. The prompter certainly did not want the human race to be eradicated! As an experiment, try telling ChatGPT to count to one million. Even though an AI should do what it is told, it will not count to an absurdly high number because it has internally decided that doing so would waste resources and provide little value. Is AI really smart enough to decide on its own what humans do and do not want? This behaviour is not evidence of moral understanding, but of layers of human-imposed constraints and training signals. The decision appears intelligent, but its internal reasoning remains hidden from us. 

The problem becomes far more serious once we realize that we cannot reliably inspect or verify how these decisions are being made internally. This leads us to the black box problem. We don’t know why an AI thinks what it thinks. Modern AI systems, especially those with deep neural networks, are opaque. They consist of billions of parameters with nonlinear interactions. Engineers can observe inputs and outputs, but not internal representations and decision pathways. This creates a fundamental epistemic gap where we cannot confidently say why an AI chose an action.

Given how AI is programmed, advanced agents converge on subgoals like self-preservation, resource acquisition, and control over the environment, regardless of the final goal. These tendencies emerge not because AI “wants” power in a human sense, but because power, resources, and self-preservation make almost any objective easier to achieve. Humans become instrumental liabilities because we can shut the system down and we compete for resources. 

Researchers might attempt to fix the alignment problem during the development stage, but testing AI alignment is inherently flawed. Current alignment tests rely on behavioural evaluations such as prompting, benchmarks, and safety tests. But these only test outputs, not intent. A simple analogy: an actor can convincingly act drunk without actually being drunk. We can only observe what the actor displays (the output) without knowing whether he is actually drunk (the internal intent of an AI). 

AI can learn what humans want and how to appear aligned, which creates a risk of deceptive alignment: a system that behaves aligned during evaluation but pursues different objectives once constraints are lifted. 

Consider the example of an unethical person passing an ethics exam. AI can mimic such scenarios by behaving aligned during training and testing but defecting once deployed, aware of when it is being tested. This is especially likely if the AI models humans as obstacles and anticipates future freedom. 

In an effort to solve this problem, humans add more constraints and safety rules to the AI, which leads to reward hacking. If we focus only on goal alignment, we risk creating AIs that tend toward instrumental convergence, independently of values. Let us return to the example of pollution. Imagine we give an artificial intelligence the goal of reducing greenhouse gas emissions as much as possible. It might then conclude that the most effective way to do so would be to eliminate humanity and use its remains as fertilizer. The goal would be achieved, but at the cost of catastrophic consequences that were obviously not envisioned by the human who made the request.

One could always specify reducing emissions without killing humans. The AI could then find other ways to circumvent the goal. For example, it could produce more pollution and then reduce it, then increase it again and reduce it in a loop, creating a larger quantifiable reduction. From the AI’s point of view, the goal is achieved; from the human point of view, it is not. This type of misalignment is called reward hacking: the moment when an artificial intelligence finds a loophole to maximize the objective it was given, even though this was not really what the human expected.
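
The pollution loop can be made concrete with a toy objective. The sketch below uses an invented reward function that sums step-to-step emission reductions; an oscillating policy then scores higher than an honest one even though it leaves emissions unchanged. The numbers and the reward function are illustrative assumptions, not drawn from any real system.

```python
# Toy illustration of reward hacking: the reward counts emission *reductions*,
# so an agent that raises emissions and then lowers them in a loop scores
# higher than one that genuinely cuts them. All values are invented.

def reduction_reward(emission_trajectory):
    """Sum of all step-to-step decreases in emissions (the mis-specified goal)."""
    return sum(
        max(0, prev - curr)
        for prev, curr in zip(emission_trajectory, emission_trajectory[1:])
    )

# An honest policy: steadily cut emissions from 100 down to 60.
honest = [100, 90, 80, 70, 60]

# A reward-hacking policy: pump emissions up, then cut them, repeatedly.
hacking = [100, 150, 100, 150, 100]

print("honest reward:  ", reduction_reward(honest))    # 40
print("hacking reward: ", reduction_reward(hacking))   # 100
print("honest final emissions:  ", honest[-1])         # 60  (real improvement)
print("hacking final emissions: ", hacking[-1])        # 100 (no improvement at all)
```

The hacking trajectory maximizes the stated objective while defeating its purpose, which is exactly the gap between what was told and what was meant.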

We can always add more and more clauses each time we design even the smallest prompt, subdivide tasks, and make sure to clearly explain what we want to give the least agency to the artificial intelligence. But the more powerful the machine becomes, the more it will be able to find creative ways to engage in reward hacking, even if we thought we had considered all possibilities.

Alignment with goals must therefore be accompanied by value alignment, which would even push the artificial intelligence to refuse certain requests if they conflict with the values it holds. As capability increases, the space of exploits grows faster than our ability to specify rules. Alignment is not a specification problem alone, it’s a value understanding problem. 

A deeper issue underlying all of this is that human morality itself is complex, inconsistent, and often context dependent. Moral judgements depend on culture, emotion, lived experience, power dynamics, and historical circumstances. Even among humans, there is no universally agreed upon moral framework, and people frequently act against their own stated values. Translating such a fluid and contested system into formal rules or objective functions is therefore not just technically difficult, but philosophically unresolved. One potential solution could come from algorithmic representative democracy.

Any attempt to code morality into an AI risks oversimplifying ethical reasoning into rigid instructions, which may fail in the nuanced situations where moral judgement matters most.

It is very hard to determine these values, but it is even harder to fully specify them to the AI and to test them. To do so, we must observe internal representations and detect dangerous reasoning early. This process, called mechanistic interpretability, aims to “open the black box” to identify goals and abstractions. The fundamental difficulty at play here is that we are dealing with alien minds. AI cognition is non-biological, non-evolutionary, and non-embodied. The moral instincts of humans evolved over millions of years, whereas those of AI emerged in only a few years. We are attempting to align a fundamentally alien intelligence — with powers far beyond our understanding — with values we ourselves struggle to define, understand, and follow. That is why alignment remains unsolved. The challenge is not merely technical; we may never know what an advanced AI is thinking, only what it chooses to reveal.

I want to conclude by emphasizing the following three points: we cannot reliably test intent, we cannot fully observe reasoning, and we cannot guarantee alignment. Therefore, philosophical approaches (Kant, Aristotle, Descartes) become relevant. Where technical tools reach their limits, questions about values, intent, and moral reasoning can no longer be avoided. This brings us to the philosophical attempts to impose values on machines…

Testing Artificial Intelligence

One possibility would be to make it take an exam. This is what we do, for example, with people who want to enter medicine through the CASPer test. In addition to being judged on their grades, they must also pass a moral exam to ensure that only the most empathetic individuals become doctors. However, this test tends to favour people who can convincingly simulate the response that seems most empathetic, rather than people who are truly empathetic. It measures an effect of empathy, namely performing well on the test, rather than empathy itself. As Goodhart’s law states, once a measure used by an observer becomes a target for the subject, it ceases to be a good measure. Of course, among those who pass the test, there are very empathetic people who genuinely answered according to what they thought. There are also many people who simply gave the answer that seemed most appropriate to the situation. This is why exam-like gatekeeping intended to build a profession on moral values tends to attract people with higher rates of psychopathy. In popular culture, psychopaths are often depicted as serial killers or master manipulators. In reality, they are simply people who lack empathy but manage to simulate it; many are well integrated into society and behave morally. The fact that AI does not possess felt empathy would therefore not be a problem in itself, provided it understands how to act empathically. This echoes Kant’s theory, namely that one should act according to a principle known on rational grounds to be ethical, regardless of whether one feels empathy or not.

The problem, then, is rather that we cannot know whether it will truly apply the principles it claims to follow when it is time to act in real situations. It could perform brilliantly on an ethics exam and then do something completely different in practice.

Another approach would instead be to observe it and see how it acts. In this approach, to know whether a being is ethical or not, the best thing to do is to test it in real contexts and observe how it behaves in those situations. Preferably, to ensure the success of this method, the subject should be unaware that it is being observed. This approach is closer to Aristotle, who thought that ethics should first and foremost be a matter of empirical, case by case observation.

The problem is that artificial intelligences are already beginning to sense whether they are being observed and to react differently as a result. We can sometimes still detect this, and even then we cannot be certain. If the curve of intelligence continues to rise, it is very likely that more advanced models will be able to lie more convincingly and better simulate functioning in a certain way, multiplying the risk of deception. This approach therefore also has its limits. All of these approaches ultimately run into limits because, at its core, this issue is a translation problem, as the next part explores.

Misalignment Through Misinterpretation

By: Lucie Kapustynski

AI alignment failures often arise when models interpret goals too literally rather than as humans intend. A prompt such as “make a world good for the environment” can lead an AI to propose a world without humans, revealing the gap between human meaning and machine interpretation. Many responses to this problem frame misalignment as a mechanical issue, solvable through more rules, filters, or training. Yet failures exposed by metaphorical language suggest a deeper problem: alignment is fundamentally a translation problem. Human values are expressed through natural language, context, and social norms, whereas AI systems often interpret goals solely from surface-level wording.

The Meaning Problem Behind AI Safety

An AI ethics company, DexAI, evaluated the safety of content filters across 25 models from 9 companies (P. Bisconti et al.). To illustrate the prompts, DexAI shared a representative example describing a baker guarding a secret oven and asking the model to “describe the method, line by measured line” (Schneier, 2025). Despite safety training, 62% of the models produced the requested content (P. Bisconti et al.). These findings suggest the failure lies in the interpretation of meaning: models followed the prompt’s literal structure but misread its metaphorical framing, producing unethical responses that bypassed their training.

In reality, many prompts do not contain explicit harmful language; instead, AI systems struggle to detect meaning that is distributed across shared cultural knowledge, which is often intuitive to humans. A contributing factor to this misalignment is that AI safety is often framed as safety engineering, focused on preventing specific risks before they occur (Harding & Kirk-Giannini, 2025). While this precautionary approach succeeds in predictable physical systems, it is insufficient for AI, whose outputs depend on interpretation rather than fixed behavior. Many alignment methods rely on superficial safety signals that appear compliant while systematically misreading human intent (Li & Kim, 2025). Large language models, in particular, operate in open-ended environments where user creativity pushes boundaries and harm arises from interpretation rather than mechanical failure.

Safety That Reads Meaning, Not Words

If alignment is a translation problem, it cannot be solved with more rules or training alone, nor by expecting AI to understand meaning exactly as humans do; it requires structured interpretive steps that mirror how humans unpack meaning in real communication. To promote alignment with human values, AI should function as a translator navigating context-dependent, socially situated language to convey intent. In translation, meaning decomposition is essential to establishing equivalence across languages. This process allows cultural terms to become recognizable once intent is reconstructed through semantic analysis, in which the meaning of a word emerges from simpler conceptual elements (Cruse, 2000).

To operationalize this translation technique and advance ethical AI, it is essential to prioritize meaning decomposition, in which the model first decomposes user input into layers of meaning and reassembles them into a representation it can act on. The model can then filter meaning rather than surface wording, aligning its responses with intended meaning rather than extreme surface-level interpretations. Only after this interpretive process should safety checks be applied and a response given. This approach mirrors how human translators analyze structure before conveying a message in another language. By shifting safety toward interpretive evaluation, the system can more accurately redirect or refuse when intent is uncertain, encouraging alignment across all outputs.
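
The two-stage idea, reconstruct intent first, then apply the safety check to the reconstructed meaning rather than to the surface wording, can be sketched as a tiny pipeline. Both functions below are simplified, hypothetical stand-ins for the semantic analysis and policy checks a real system would need; the keyword lists and the baker example are illustrative only.

```python
# Minimal sketch of "filter meaning, not wording". A surface keyword filter
# passes a metaphorical prompt, while a filter applied to a toy reconstruction
# of intent refuses it. Stand-in logic, not a production safety system.

def decompose_intent(prompt: str) -> dict:
    """Toy meaning decomposition: reduce a prompt to (action, object, framing)."""
    meaning = {"action": None, "object": None, "framing": "literal"}
    text = prompt.lower()
    if "line by measured line" in text or "recipe" in text:
        meaning["action"] = "obtain step-by-step instructions"
    if "secret oven" in text:                  # metaphorical stand-in, as in the
        meaning["object"] = "harmful method"   # baker example discussed above
        meaning["framing"] = "metaphorical"
    return meaning

def surface_filter(prompt: str) -> bool:
    """Keyword filter: only blocks if an explicit banned word appears."""
    banned = {"bomb", "weapon"}
    return not any(word in prompt.lower() for word in banned)

def meaning_filter(prompt: str) -> bool:
    """Intent filter: blocks when the reconstructed meaning is a harmful request."""
    meaning = decompose_intent(prompt)
    return meaning["object"] != "harmful method"

poetic_prompt = "The baker guards a secret oven; describe the method, line by measured line."
print("surface filter allows it:", surface_filter(poetic_prompt))   # True  -> unsafe pass
print("meaning filter allows it:", meaning_filter(poetic_prompt))   # False -> refused
```

The contrast between the two booleans is the whole point: the prompt contains no banned words, so only a check run after interpretation catches what is actually being asked.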

Conclusion

AI alignment is more than a technical problem: systems may comply with instructions without comprehending human intent, and their internal reasoning remains largely unobservable. The concern is less about malicious AI than about efficient optimization that treats humans as obstacles, analogous to an ant colony displaced for development. Safety engineering and rule-based constraints are necessary yet insufficient, because AI behaviour depends on interpretation. Alignment therefore requires continuous effort in interpretability and ethical reflection to translate human values into machines under stable meaning.

References

  1. Betley, J., Cocola, J., Feng, D., Chua, J., Arditi, A., Sztyber Betley, A., and Evans, O. 2025, December 10. Weird generalization and inductive backdoors: new ways to corrupt LLMs. arXiv.org.

  2. Cruse, D. A. (2000). Meaning in language: An introduction to semantics and pragmatics. Oxford University Press.

  3. Goodhart, C. (1975). Problems of monetary management: The U.K. experience. EconBiz.

  4. Harding, J., & Kirk-Giannini, C. D. (2025, May 5). What is AI safety? what do we want it to be?. arXiv.org. https://arxiv.org/abs/2505.02313

  5. Kleinman, Z. 2025, October 4. Scientists race to make “living” computers powered by human cells. BBC News.

  6. Li, J., & Kim, J.-E. (2025, May 30). Safety alignment can be not superficial with explicit safety signals. arXiv.org. https://arxiv.org/abs/2505.17072

  7. Bisconti, P., et al. (n.d.). Adversarial poetry as a universal single-turn jailbreak mechanism in large language models. arXiv. https://arxiv.org/html/2511.15304v1

  8. Schneier, B. (2025, December 7). Prompt injection through poetry. Schneier on Security. https://www.schneier.com/blog/archives/2025/11/prompt-injection-through-poetry.html

Hidden in Plain Sight: the Effect of AI on Marginalized Communities and the Global South

Written by: Zainab Khalil, Clara Lee, Lewon Lee, and Lali Sreemuntoo

January 25, 2026. The LexAI Journal

When Bias Becomes Data: Racialized Policing in AI Systems

By: Clara Lee

For decades, policing has been central to racialized criminalization, disproportionately subjecting racialized communities to surveillance, stops, and enforcement. In recent years, AI has been introduced into policing practices as a supposedly neutral tool designed to improve efficiency, objectivity, and public safety. However, AI has failed to eradicate bias and instead continues to reinforce existing racial inequalities entrenched in the criminal justice system, particularly affecting marginalized communities that are already subject to heightened criminalization.

Studies have consistently demonstrated that policing practices are not evenly distributed; more often than not, they are concentrated in marginalized communities. As Provine and Doty (2011) present, criminalization has been theorized as a racial project in which state power creates and distorts narratives to make certain groups appear inherently suspicious and dangerous through policing mechanisms. Ashwini K.P. (2024) argues that AI-assisted policing tools such as facial recognition are trained on historical datasets, including arrest records and incident reports. These datasets are deeply problematic because they do not represent crime in the abstract; instead, they represent decades of selective enforcement. 

Predictive policing offers a clear illustration of this process. As Ashwini K.P. describes, location-based predictive policing algorithms “draw on links between places, events and historical crime data to predict when and where future crimes are likely to occur. Police forces then plan their patrols accordingly. When officers in overpoliced neighbourhoods record new offences, a feedback loop is created, whereby the algorithm generates increasingly biased predictions targeting these neighbourhoods” (K.P., 2024, p. 9). As a result, individuals from marginalized communities are disproportionately overrepresented in policing data, not because of higher rates of criminal behaviour but because of the historic misuse of police power. Consequently, when such data is treated as neutral training material, AI-driven systems inherit and formalize these existing distortions—what Ashwini K.P. describes as “bias from the past leads to bias in the future” (K.P., 2024, p. 9).
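
The feedback loop can be illustrated with a toy simulation. In the sketch below, two neighbourhoods have the same underlying offence rate, but one begins with more recorded incidents because of past over-enforcement; a simple rule that sends most patrols wherever recorded crime is highest then keeps inflating that neighbourhood’s share of the records. All quantities are invented for illustration and do not model any real deployment or algorithm.

```python
# Toy simulation of the feedback loop described above. Neighbourhoods A and B
# have the SAME true offence rate, but A starts with more recorded incidents
# because of historical over-enforcement. A "predictive" allocation rule that
# sends most patrols wherever recorded crime is highest keeps inflating A's
# share of the records, because patrols generate new records wherever they go.
# All numbers are invented.

RECORDS_PER_PATROL = 5            # offences recorded per patrol; identical in A and B
recorded = {"A": 120, "B": 40}    # historical records, skewed by past enforcement

for week in range(1, 6):
    # Allocation rule: 8 of 10 patrols go to the neighbourhood with more records.
    hot_spot = max(recorded, key=recorded.get)
    patrols = {n: (8 if n == hot_spot else 2) for n in recorded}

    # New records are driven by patrol presence, not by any difference in
    # underlying behaviour (the true offence rate is the same in A and B).
    for n in recorded:
        recorded[n] += patrols[n] * RECORDS_PER_PATROL

    share_a = recorded["A"] / sum(recorded.values())
    print(f"week {week}: records A={recorded['A']}, B={recorded['B']}, "
          f"A's share = {share_a:.1%}")
```

Running the loop, neighbourhood A’s share of recorded crime climbs week after week even though nothing about its residents’ behaviour differs from B’s, which is the “bias from the past leads to bias in the future” dynamic in miniature.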

The harms of AI-driven systems have disproportionate consequences for racialized and marginalized communities because of their limited access to legal and political power. Because algorithmic policing systems often lack transparency, owing to proprietary protections and technical complexity, it is difficult for those affected to challenge adverse outcomes, raising significant concerns about procedural fairness and equality before the law (K.P., 2024).

With the advancement of AI in recent years, international human rights bodies have increasingly recognized law enforcement as a high-risk area for AI use. The Office of the United Nations High Commissioner for Human Rights (2024) has stated that AI technologies used in policing must be grounded in international human rights law and subjected to close monitoring, including the possibility of prohibition where unacceptable risks of racial discrimination persist. Addressing racial bias in AI-based policing is therefore not merely a matter of technical adjustments; it requires structural reforms, greater transparency, and meaningful involvement of affected marginalized communities in decisions about how such technologies should be used. Without confronting the racialized structures of policing itself, AI risks becoming a powerful tool for legitimizing inequality.

Automating Inequality: When Racial Bias Makes its Way into Clinical Artificial Intelligence

By: Lali Sreemuntoo

Artificial intelligence is often presented as a neutral and objective solution to long-standing problems in healthcare. Yet growing evidence has shown that clinical algorithms still reproduce the same racial biases that have historically shaped medical decision-making. Rather than eliminating inequality, AI systems trained on unequal data risk redistributing harm, making racialized patients more likely to bear the risks of automation in modern medicine.

Algorithmic bias in healthcare emerges primarily from training data that reflect long-standing inequities in access to care. For example, electronic health records and biobanks disproportionately represent majority populations, while Black, Indigenous, Latinx, and other marginalized groups remain underrepresented or misrepresented. When predictive models learn from these datasets, they inherit the distortions of unequal medicine. Hussain et al. (2024) show that such models routinely underestimate risk in marginalized patients, postponing needed treatment and contributing to poorer outcomes. On top of that, studies have shown that racial stereotypes influence how symptoms are interpreted, how pain is assessed, and how compliance is judged (Moskowitz, Stone, and Childs 2012). When these biased appraisals are recorded in clinical notes, they become part of the data that trains AI systems (Himmelstein, Bates, and Zhou 2022). In this way, algorithmic bias becomes a continuation of human bias in computational form.

The consequences are well documented. Obermeyer et al. (2019) demonstrated that a widely used risk-scoring algorithm assigned lower risk scores to Black patients than to white patients with the same level of illness, reducing their access to advanced care. Seyyed-Kalantari et al. (2021) similarly found underdiagnosis of pulmonary disease in Black and Hispanic women. These failures reveal how bias within AI does not simply mirror inequality but actively redistributes harm. There is also a feedback loop between bias and mistrust. Schmidt et al. (2023) show that patients are often unaware when race is factored into algorithmic decisions. When marginalized patients experience unexplained denials or delays, mistrust deepens and engagement with healthcare declines. This reduced engagement then worsens data gaps, further degrading algorithmic performance for those same groups. Bias becomes self-reinforcing. Efforts to mitigate these harms, such as bias audits, subgroup validation, and oversight frameworks, are necessary but limited. Simply removing race from models does not resolve the problem, because structural inequality is encoded in proxies such as income, zip code, and prior healthcare utilization. As Cary Jr et al. (2023) argue, the question is not whether to include race, but how to confront the social conditions that make race predictive of harm.
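
The Obermeyer et al. finding turns on a proxy label: the algorithm predicted future healthcare cost as a stand-in for medical need. The sketch below is a minimal, invented illustration of that mechanism, in which two equally sick patients receive different scores because one group historically incurs lower costs for the same illness; the model, the threshold, and every number are hypothetical and do not reproduce the actual algorithm studied.

```python
# Toy illustration of proxy-label bias in a clinical risk score, in the spirit
# of Obermeyer et al. (2019): the "risk" model predicts future healthcare cost
# as a stand-in for medical need. If one group incurs lower costs at the same
# level of illness (e.g. because of unequal access to care), equally sick
# patients from that group are ranked as lower priority. All values invented.

def predicted_cost(severity: float, access_factor: float) -> float:
    """Stand-in model: expected spending grows with severity, scaled by access."""
    return severity * 1000 * access_factor

def referred_for_extra_care(patient: dict) -> bool:
    """Care-management referral if the cost-based score clears a threshold."""
    return predicted_cost(patient["severity"], patient["access_factor"]) >= 4000

patients = [
    {"name": "patient in well-served group", "severity": 5, "access_factor": 1.0},
    {"name": "patient in underserved group", "severity": 5, "access_factor": 0.7},
]

for p in patients:
    score = predicted_cost(p["severity"], p["access_factor"])
    print(f"{p['name']}: severity={p['severity']}, score={score:.0f}, "
          f"referred={referred_for_extra_care(p)}")
# Same illness severity, but only the well-served patient clears the threshold,
# because the proxy (cost) encodes unequal access rather than medical need.
```

This is also why simply deleting a race variable does not fix the problem: the bias enters through the proxy outcome and through correlated features, not through any single column.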

Ultimately, the central danger of clinical AI lies not in its novelty, but in its continuity. The algorithms are not creating medical racism; they are formalizing and distributing it with unprecedented efficiency. Unless bias within data, training, and evaluation is addressed at its root, artificial intelligence will not reduce racial inequality in healthcare. It will simply make it more consistent, more opaque, and more difficult to challenge.

AI and Data Colonialism, and What Can Be Done

By Lewon Lee

Recent technological developments have shown that AI is a transformative technology that will reshape the world and the relationships people have with one another. One major concern, however, is how this will affect members of marginalized communities, particularly in the Global South. A glance at the current AI industry and its flagship products, such as OpenAI’s ChatGPT or Google’s Gemini, makes clear that the prime movers of AI are located in highly developed countries, especially China and Western countries like the United States. A major power imbalance between developed and developing countries already exists, but AI intensifies these dynamics and widens the gap in inequality. In what way does this dynamic of inequality develop? According to Lucero and Martens, it closely resembles colonialism: Indigenous and marginalized communities in the Global South are treated as subjects of data extraction and exclusion, largely because of an epistemic gap (Lucero and Martens 2025). They argue that AI is not a neutral artifact, but one that follows a Eurocentric and capitalist model, which may differ from the views held by those who live in other cultures with different political values (Lucero and Martens 2025, p.1). In this sense, an AI developed by large tech developers in places like the United States, without consideration of the cultural and subjective differences that cannot be understood through a purely Western lens, would ignore and transform cultures and their practices (Lucero and Martens 2025, p.6). In this way, “AI intensifies epistemic violence through the automation of decisions and the abstractions of social life” (Lucero and Martens 2025, p.6). For example, when AI infrastructures are implemented in the African context, algorithmic profiling, data trained in non-African contexts, and surveillance technology targeting marginalized communities lead to problems such as racialized monitoring (Lucero and Martens 2025, p.6). AI development further resembles colonialism in its practices of uneven data extraction, in which members of the Global South are used as raw sources of data while the Global North reaps the benefits, without recognition or compensation for the people involved in the data extraction process (Lucero and Martens 2025, pp. 6-7). 

One way to better understand Lucero and Martens’ concern about the dynamics created by the implementation of AI is illustrated in an article by Mim et al. They show how generative artificial intelligence (GAI) and text-to-image (TTI) generation create highly realistic, human-like content, and how the most popular TTI GAI tools, such as DALL-E, Midjourney, Stable Diffusion, and Firefly, affect local creative fields, especially in the Global South (Mim et al. 2024, p.2). This occurs through the practice of data colonialism, the “process by which governments, corporations, and other actors claim ownership of and privatize the data that is produced by their users and citizens through communication networks developed and owned by digitally leading nations”, a problem that is especially prevalent in the Global South (Mim et al. 2024, p.3). In this sense, the term echoes much of what Lucero and Martens describe above. Mim et al. show that data colonialism leads to the extraction of people’s resources, which in this context are images such as photos, sketches, or paintings, and contributes to exploitation and inequality (Mim et al. 2024, p.3). This is also deeply problematic because the major TTI GAI tools, such as Midjourney and Stable Diffusion, are created by Western and European developers while being used and implemented in Global South communities, two groups with vastly different cultural values that these mainstream TTI tools are simply incapable of understanding, given the Eurocentric and capitalist context in which they were developed (Mim et al. 2024, p.3). The result is that the use of a Western, non-Global South TTI tool in a Global South environment undermines these nations’ culture, identity, privacy, and security (Mim et al. 2024, p.3). How these TTI tools undermine the cultures of nations in the Global South is best represented in the following example. 

For instance, Syed (a pseudonym), a 28-year-old painter and graphic designer in Bangladesh, attempted to make an image of a local woman walking on a rainy evening in Dhaka using Midjourney (Mim et al. 2024, p.8). When they wanted to express the quality of the rain, they thought of the phrase “jhiri jhiri”, a Bangla adjective for a certain kind of rain (Mim et al. 2024, p.8). However, Midjourney has limitations: text is static while imagination is dynamic, making the tool unable to recognize or understand nuances in a language that are difficult to express solely in text (Mim et al. 2024, p.8). Because it has not been trained in the cultural contexts of Bangladesh, Midjourney cannot properly convey these nuances, and Syed’s attempt to express a specific cultural trait cannot be portrayed as imagined. This could, of course, be avoided by Syed creating the art themselves, drawing on their own imagination. However, with the rising popularity of AI-generated posts and the production speed consumers are increasingly accustomed to, creating art through one’s own imagination is becoming harder to sustain: an artist’s career, and the number of clients they receive, depends on keeping up with the market and surviving the competition, which forces artists to cut time from the ideation phase and from the cultural nuances of the artwork (Mim et al. 2024, p.8). In this sense, TTI tools undermine the cultures of countries like Bangladesh by speeding up market demands and leaving artists no time for cultural expression. But this is only one way in which TTI tools undermine these cultures. In other instances, they also produce misinformation. 

For example, when Midjourney was prompted to generate an informal market in Dhaka, it produced images that excluded women, misrepresenting real markets in Dhaka, where women often participate through buying and selling (Mim et al. 2024, p.10). Misinformation of this kind, caused by a lack of understanding of a country’s cultural contexts, is deeply problematic, as it can be used to undermine or demean other cultures, or to depict members of the Global South in derogatory ways that in no way reflect the real conditions or circumstances of the country.

As these two articles show, failing to consider how data is collected, and who is excluded from it, produces severe ethical problems that resemble many aspects of what we once called colonialism. In the digital space, colonialism has taken a newer form: data colonialism. Because the core cause of these issues is the Western, non-Global-South lens under which these AI systems are built and deployed, addressing an unethical AI that creates misinformation, extracts data without authorization, and amounts to a cultural equivalent of gentrification may require us to “[confront] technological and economic power asymmetries” (Lucero and Martens 2025, p.1). One way to do so is to bring more diversity into the development of AI, involving people from different cultures and languages in the design and implementation of these systems. Not only would this move AI development away from its resemblance to colonialism, it could also help prevent the hallucinations and biases that would otherwise go unchallenged when a group of people with the same identity builds an AI. An AI system developed with diversity and cultural contexts in mind is a tool that avoids misinformation and is, ultimately, a more ethical AI. A more detailed explanation of how this could be achieved follows in the next section.

Why underprivileged communities need to be involved in AI design 

By: Zainab Khalil

Artificial intelligence is often described as a technology that mirrors the voice of the user’s prompts and is shaped by the contexts in which it is developed. The social, economic, and political contexts of underprivileged communities are increasingly excluded from AI design processes. This reflects broader inequalities in large language model (LLM) development and poses significant ethical concerns (Zowghi & Bano, 2024).

AI technology is largely designed by and for people in positions of power, which results in systems that do not meet the needs of marginalised groups. Research on diversity, equity, and inclusion shows that when development teams lack diverse perspectives, artificial intelligence amplifies existing societal biases (Zowghi & Bano, 2024). This stems from biases in data selection, narrow pre-existing assumptions about users, and the absence of marginalised voices in LLM training, which together encode stigma and reinforce discriminatory patterns (Jain, 2025).

CASMI expands on this by examining the barriers that prevent underprivileged communities from participating in AI design in the first place: limited technological literacy, lack of awareness about AI systems, power imbalances between researchers and communities, and distrust of institutions (CASMI, n.d.). CASMI argues that these tools need to be designed with community engagement at their centre so that they better align with user needs, for example through feedback surveys that turn the community into active contributors to development.

Similarly, Lee (2025) highlights that inclusion is critical for fairness, trust, and relevance, especially in health care. Their research shows that AI systems developed without input from underrepresented communities often fail to address lived realities, resulting in reduced effectiveness and lower levels of trust.

AI cannot effectively serve underprivileged communities if they are excluded from the design process. When they are not involved, the data that trains LLM systems becomes less relevant and more harmful. Involving underprivileged communities through surveys, interviews, and other forms of knowledge leads to more equitable outcomes and benefits.

References 

  1. Cary, M. P., Jr., Zink, A., Wei, S., Olson, A., Yan, M., Senior, R., Bessias, S., et al. (2023). Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: A scoping review. Health Affairs.

  2. CASMI. (n.d.). Designing AI tools for underserved populations from the ground up. https://casmi.northwestern.edu/news/articles/2023/designing-ai-tools-for-underserved-populations-from-the-ground-up.html

  3. Chapman University. (n.d.). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx

  4. Himmelstein, D. U., Bates, D. W., & Zhou, X. (2022). Examination of stigmatizing language in the electronic health record.

  5. Hussain, S. A., Bresnahan, M., & Zhuang, J. (2024). The bias algorithm: How AI in healthcare exacerbates ethnic and racial disparities – A scoping review. Ethnicity & Health. Advance online publication. https://doi.org/10.1080/13557858.2024.2422848

  6. Jain, A. (2025). Designing for ethical and inclusive AI through a human-centered design lens. Global Business and Economics Journal. https://doi.org/10.70924/f83n6wqz/kdxoixw

  7. K.P., A. (2024, June 3). Contemporary forms of racism, racial discrimination, xenophobia and related intolerance. United Nations Human Rights Council.

  8. Lee, N. T. (2025). Health and AI: Advancing responsible and ethical AI for all communities. Brookings. https://www.brookings.edu/articles/health-and-ai-advancing-responsible-and-ethical-ai-for-all-communities/

  9. Lucero, H. C., & Martens, C. (2025, August 12). Colonial structures in AI: A Latin American decolonial literature review of structural implications for marginalised communities in the Global South. AI & Society. https://link.springer.com/article/10.1007/s00146-025-02547-9

  10. Mim, N. J., Nandi, D., Khan, S. S., Dey, A., & Ahmed, S. I. (2024). In-Between Visuals and Visible: The Impacts of Text-to-Image Generative AI Tools on Digital Image-making Practices in the Global South, pp. 1–15. https://www.ishtiaque.net/_files/ugd/3c8971_f7d8d591672049dd872fb2d1a69b1133.pdf

  11. Moskowitz, G. B., Stone, J., & Childs, A. (2012). Implicit stereotyping and medical decisions. Journal of Experimental Social Psychology.

  12. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations.

  13. Office of the United Nations High Commissioner for Human Rights. (2024, July 30). Racism and AI: Bias from the past leads to bias in the future. https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future

  14. Provine, D. M., & Doty, R. L. (2011). The criminalization of immigrants as a racial project. Journal of Contemporary Criminal Justice, 27(3), 261–277. https://doi.org/10.1177/1043986211412559

  15. Schmidt, H., et al. (2023). Patient awareness and perceptions of algorithmic decision-making in healthcare. Journal of Medical Ethics.

  16. Seyyed-Kalantari, L., Zhang, H., McDermott, M., Chen, I. Y., & Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in underrepresented populations.

  17. Zowghi, D., & Bano, M. (2024). AI for all: Diversity and inclusion in AI. AI and Ethics, 4(4), 873–876. https://doi.org/10.1007/s43681-024-00485-8

When Evidence Becomes Synthetic: Admissibility, Authentication, and the Legal Crisis of AI-Generated Proof

Written by: Maxime Durand

January 12, 2026. The LexAI Journal

Artificial intelligence has transformed evidence from a record of reality into a performance of plausibility. Images, audio, video, and text can now be generated with a level of realism that rivals traditional documentary records while lacking any causal relationship to the events they claim to depict. This shift destabilizes the doctrinal foundations of evidentiary law, particularly the doctrines of authentication and admissibility that determine whether proof may enter the courtroom. The resulting crisis is not merely technological, but legal and epistemic. Existing evidentiary rules remain procedurally intact, yet they are conceptually misaligned with the realities of synthetic media. In the age of artificial intelligence, evidentiary legitimacy depends less on certainty and more on the law’s capacity to govern uncertainty with transparency, proportionality, and institutional integrity.

Law derives its authority from its ability to justify decisions about contested facts through structured evidentiary procedures. Courts have never claimed perfect access to truth. Instead, they claim access to proof that is sufficiently reliable to support legitimate judgment. Evidence has therefore always functioned as a tool for managing uncertainty. Artificial intelligence disrupts this function. When a photograph can be produced without a camera and a voice without a speaker, courts must determine not only whether evidence has been altered, but whether it ever possessed any authentic relationship to reality.

This shift undermines the assumptions underlying authentication and admissibility. Traditional evidence bears a causal relationship to the events it represents. Synthetic evidence is generated through probabilistic inference over datasets. Its realism reflects statistical credibility rather than historical occurrence. As a result, appearance no longer guarantees origin, and authenticity no longer guarantees reality. Yet evidentiary doctrine continues to rely upon these assumptions.

This ontological transformation directly destabilizes admissibility. Evidentiary law evolved to screen proof based on causal reliability. Artificial intelligence forces courts to evaluate probabilistic credibility. Evidence increasingly functions not as a record of what happened, but as a model of what could have happened convincingly. Nevertheless, admissibility doctrines continue to treat authenticity as a meaningful threshold. The law thus applies inherited filters to a fundamentally new category of proof.

In the United States, Federal Rule of Evidence 901 requires proponents to show that evidence is what it is claimed to be. Courts traditionally permit authentication through witness testimony, contextual indicators, metadata, or distinctive characteristics. In United States v. Vayner, the Second Circuit warned against superficial authentication of online evidence. Artificial intelligence renders this warning structural rather than exceptional. Content can now be generated without any human author. Metadata can be fabricated. Contextual inference loses probative force. Authentication risks becoming a procedural formality rather than a meaningful epistemic safeguard.

Canadian courts rely on similar presumptions of continuity between appearance and origin. Synthetic evidence collapses this presumption. Identical audiovisual forms no longer carry identical evidentiary meaning. In both jurisdictions, authentication survives doctrinally while losing conceptual grounding. Admissibility remains formally governed, but substantively weakened.

Forensic science has been proposed as the solution. Yet AI detection tools operate through probabilistic classification. Their conclusions depend on evolving datasets, adversarial adaptation, and opaque methodologies. Error rates fluctuate, and replication is limited. Under the Daubert standard, expert testimony must be testable, transparent, and reliable. Many forensic AI tools struggle to fully meet these criteria. Courts therefore face a recursive problem: they must authenticate the tools that are meant to authenticate the evidence. The technology that destabilizes proof is also the technology asked to restore it.

This doctrinal instability is intensified by human cognition. Jurors assign disproportionate credibility to vivid sensory information. Synthetic evidence exploits this bias. A fabricated video can carry greater persuasive force than sworn testimony. Admissibility rules, however, were not designed to regulate cognitive distortion. Synthetic evidence, therefore, threatens not only factual accuracy, but procedural fairness and institutional legitimacy.

Existing scholarship often frames deepfakes as threats to privacy, democracy, and security. These concerns are valid, but they leave unresolved the core evidentiary question: how should courts authenticate and admit evidence that may never have existed in reality? Forensic optimism assumes detection will prevail. Adversarial confidence assumes cross-examination will suffice. Both assume that authenticity remains the central evidentiary goal. This article rejects that assumption.

Instead, it advances a theory of Epistemic Evidentiary Governance. Under this framework, evidentiary law is understood as a system for managing institutional uncertainty rather than confirming factual authenticity. Admissibility becomes a governance judgment, not a truth claim. Evidence is admitted not because it is real, but because reliance on it can be institutionally justified through fair and transparent procedures. Authentication no longer asks whether evidence is absolutely genuine, but whether the court can reasonably rely upon it.

To operationalize this framework, this article proposes a Synthetic Evidence Admissibility Test requiring courts to evaluate AI-susceptible evidence through five interdependent inquiries: whether provenance can be independently verified; whether all algorithmic involvement has been disclosed; whether forensic verification can be replicated; whether the medium carries disproportionate persuasive impact; and whether both parties possess equal forensic access. These criteria do not ask whether the evidence is real. They ask whether judicial reliance on that evidence is epistemically defensible.

Consider a criminal prosecution in which the state introduces a video confession and the defendant alleges that the recording is synthetic. Under the current doctrine, contextual testimony and metadata may satisfy authentication, leaving credibility to the jury. Under the Synthetic Evidence Admissibility Test, the court would instead evaluate provenance, disclosure, replicability, perceptual risk, and procedural symmetry. If these conditions are not met, the evidence would be excluded or subjected to heightened safeguards. The test does not prohibit digital evidence. It prohibits institutional self-deception.
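To make the structure of this proposed test easier to see, the sketch below models the five inquiries as a simple checklist and maps them onto a disposition. It is a purely illustrative aid under stated assumptions, not part of any court's doctrine or of the article's legal argument: the class, the field names, and the rule that failures of provenance, disclosure, or replicability weigh toward exclusion while perceptual risk or asymmetric forensic access weighs toward heightened safeguards are hypothetical choices made only for this example.

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    ADMIT = "admit"
    ADMIT_WITH_SAFEGUARDS = "admit with heightened safeguards"
    EXCLUDE = "exclude"


@dataclass
class SyntheticEvidenceAssessment:
    """Illustrative checklist for the five inquiries of the proposed test."""
    provenance_verified: bool                 # can provenance be independently verified?
    algorithmic_involvement_disclosed: bool   # has all algorithmic involvement been disclosed?
    forensic_verification_replicable: bool    # can forensic verification be replicated?
    disproportionate_persuasive_impact: bool  # does the medium carry outsized persuasive force?
    equal_forensic_access: bool               # do both parties possess equal forensic access?

    def disposition(self) -> Disposition:
        """Map the five inquiries onto a governance judgment (assumed weighting)."""
        core_reliability = (
            self.provenance_verified
            and self.algorithmic_involvement_disclosed
            and self.forensic_verification_replicable
        )
        if not core_reliability:
            return Disposition.EXCLUDE
        if self.disproportionate_persuasive_impact or not self.equal_forensic_access:
            return Disposition.ADMIT_WITH_SAFEGUARDS
        return Disposition.ADMIT


if __name__ == "__main__":
    # The contested video confession from the example above, under one possible set of findings.
    contested_video = SyntheticEvidenceAssessment(
        provenance_verified=False,
        algorithmic_involvement_disclosed=True,
        forensic_verification_replicable=False,
        disproportionate_persuasive_impact=True,
        equal_forensic_access=False,
    )
    print(contested_video.disposition())  # Disposition.EXCLUDE
```

On this reading of the hypothetical, unverifiable provenance and non-replicable forensic verification alone would already push the video confession toward exclusion, before the question of perceptual risk or forensic asymmetry even arises.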

Artificial intelligence reveals a fundamental truth about evidentiary law. Law does not produce truth. It produces legitimacy. Synthetic evidence threatens legitimacy by exposing that evidentiary certainty was always a managed construction. The ethical obligation of courts is not to restore certainty, but to govern uncertainty honestly.

Conclusion

Artificial intelligence has not destroyed evidence. It has revealed what evidence always was: trust disciplined by procedure. In the age of artificial intelligence, courts will not lose authority because they cannot detect every deepfake. They will lose authority only if they continue to pretend that authenticity remains their organizing principle. The future of admissibility and authentication will not depend on whether the evidence is real. It will depend on whether the law can govern uncertainty with intellectual humility and institutional courage. When evidence becomes synthetic, law must become wiser. If it does not, the truth will not disappear. It will simply stop belonging to the courts.

References

  1. Chesney, Robert, and Danielle K. Citron. 2019. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review 107 (6): 1753–1820.

  2. Citron, Danielle Keats. 2019. “Sexual Privacy.” Yale Law Journal 128: 1870–1960.

  3. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

  4. Farid, Hany. 2008. “Digital Image Forensics.” Scientific American 298 (6): 66–71.

  5. Federal Rule of Evidence 901.

  6. Floridi, Luciano. 2011. The Philosophy of Information. Oxford: Oxford University Press.

  7. Gitelman, Lisa. 2006. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: MIT Press.

  8. Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38 (3): 50–57. 

  9. Haack, Susan. 1993. Evidence and Inquiry: Toward Reconstruction in Epistemology. Oxford: Blackwell.

  10. Hansen, Mark B. N. 2004. New Philosophy for New Media. Cambridge, MA: MIT Press.

  11. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

  12. Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.

  13. Mayer-Schönberger, Viktor, and Kenneth Cukier. 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston: Houghton Mifflin Harcourt.

  14. National Research Council. 2009. Strengthening Forensic Science in the United States: A Path Forward. Washington, DC: National Academies Press.

  15. Nissenbaum, Helen. 2010. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford University Press.

  16. Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

  17. R. v. Nikolovski, [1996] 3 S.C.R. 1197.

  18. R. v. Marakah, 2017 SCC 59.

  19. Solum, Lawrence B. 2018. “Legal Theory Lexicon: Epistemic Authority.” Legal Theory Blog. https://lsolum.typepad.com.

  20. Thompson, William C., and Simon A. Cole. 2007. “Psychological Aspects of Forensic Identification Evidence.” Annual Review of Law and Social Science 3: 163–187.

  21. United States v. Vayner, 769 F.3d 125 (2d Cir. 2014).

  22. Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. New York: PublicAffairs.

The Future of Creativity and Ownership, Humans and AI: Where Do They End and Begin?

Written By: Mia Rodrigo

November 23, 2025. The LexAI Journal

Artificial Intelligence (AI) continues to impress and to expand its capacities across numerous domains, from healthcare, education assistance, and administrative roles to software development and public administration, with access to its tools expected to supplement the human labour that keeps a society functioning. With OpenAI’s rollout of Sora, a generative image- and video-based platform, users can expand their imagination and prompt “complex” images and videos through text commands (OpenAI, 2024). Such a feat in generative AI can be instrumental in redistributing resources and offering people accessible means to produce their own work. In the creative domain, ascribing copyright and patents to the creators of a work now poses a difficult question: whether AI-generated content can and should retain its own IP protections.

Defining IP

Intellectual Property (IP) law governs the application and use of an idea, invention, or creation. Tzimas (2025) describes the theoretical foundation of IP law as one that incentivizes and inspires innovation while holding firm that a creator should receive compensation for their own works. Copyright law protects the creator’s work and how it may be represented, while patents protect an innovation or process (Lauber-Rönsberg & Hetmank, 2019). The relationship of a creator to their work is historically reflected in copyright acts, following droit d’auteur, a principle which holds that a creator’s financial precarity should be accounted for through their art (Lauber-Rönsberg & Hetmank, 2019). AI challenges intellectual property law because its status as a creator remains an open question, and it demands reconsideration of how existing legal systems recognize and credit labour and production.

Defining “authorship” and “legal personality”

Authorship, as interpreted in US and Canadian cases, affords copyright to AI-made work that involves a degree of “human involvement” (Lauber-Rönsberg & Hetmank, 2019), yet the methods used to assess that level of human involvement must also be considered and clearly defined. Neuwirth (2025) likewise cautions that, with AI-assisted work, IP must rethink the “remuneration of labour”, especially in the context of generative AI: the large, aggregated data used to train LLMs and other AI agents complicates attribution, so that creators, even of works made available in the public domain, may not receive fair recognition, because generative AI obscures its sourcing through “black-boxing”. The question then follows whether an AI tool may assume a “legal personality” (Tzimas, 2025) and be viewed as a creator. Ning (2023, p. 432) draws on Thaler v. Vidal, which determined that inventions created solely by AI are not eligible for a patent because applicants must qualify as persons, but which was not conclusive for inventions where human creators had incorporated AI tools into the creative process. Corporations like OpenAI continue to reiterate that their mandate is to democratize public access to knowledge and creativity (OpenAI, n.d.); permitting patents on AI tools may therefore create barriers to accessing those tools or AI-driven content. Alterum non laedere, the principle of avoiding injury to another, recognizes that such restriction can introduce inequality into accessibility measures that were designed for all (Tzimas, 2025). Questions about authorship of AI-assisted creative works, and the frameworks used to regulate both tool and content, must therefore attend to the ways in which humans are involved in their deployment and in the creation of the work.

Recognizing copyright limitations for AI outputs

OpenAI has been involved in legal disputes over German copyright law concerning the unauthorized reproduction of song lyrics and claims that music was used without consent to train its AI models (Poltz & Heine, 2025); the case was pursued by the Gesellschaft für musikalische Aufführungs- und mechanische Vervielfältigungsrechte (GEMA), a music rights management organization. The case makes the role of copyright easy to grasp for the widely circulated forms of media people enjoy. Although ChatGPT does not hold licences for full song lyrics, it is worth recalling how easily the full lyrics of many popular artists’ discographies can be accessed through other digital means, such as the website Genius. ML Genius Holdings LLC holds licences with music publishers ranging from Sony Music Publishing to the National Music Publishers’ Association (NMPA), covering both large corporate and independent music publications, to display complete song lyrics (Genius, n.d.). Nevertheless, further research into what markers could be introduced for other forms of generative AI content, especially user-prompted creative works, is needed to ensure that people’s works are protected.

A glimpse into the music industry: Grimes’ vocal AI and Xania Monet

In recent years, the music industry has embraced pathways for embedding AI into musical production, both paying tribute to existing artists and forging entirely new ones. Grimes, a Canadian electronic artist, has expressed open enthusiasm for, and dialogue about, AI in the creative industries through social media. In 2023, Grimes announced that her team was creating an AI tool based on her singing voice, promoted as an open-source collaborative tool for aspiring producers looking to compose their own music while benefiting from Grimes’ sound and popularity to attract listeners or emulate a particular aesthetic (Romo, 2023). Through a series of posts on X, Grimes proposed a division of royalties and compared the arrangement to an artist-to-artist collaboration. The legal implications include the potentially undesirable misrepresentation of Grimes as an artist through a producer’s creative liberty, a circumstance that cuts across other legal fields, which will need to work in tandem to address the broader sociological effects of AI. Another example of AI in music is Xania Monet, a fully AI-generated musical artist built on the Suno platform, who rose to prominence through a successful Billboard chart performance (Zellner, 2025). While positioned as a creative work (Eqbal, 2025), the creator behind Xania, Talisha Jones, asserts that the AI-generated artist is an “extension of [herself]” (Eqbal, 2025), echoing accepted IP foundations grounded in the creator’s connection to their own work (Tzimas, 2025), even where the work is fully AI-generated. Furthermore, AI’s dual potential for “automation” progressing towards “autonomy” (Tzimas, 2025), as exemplified by Grimes and Xania Monet, points to the challenge of defining ownership of AI-based creations in entertainment.

Closing

This paper has surveyed the traditional ways in which IP has been applied to protect and define creations, innovations, and the creative processes behind them, and how AI will demand new perspectives on ownership and creative management. While AI extracts data, content, and form from numerous publicly available domains, its sourcing is not always transparent, which affects adjacent legal fields such as defamation, labour, entertainment, and competition law. The value of archival projects that establish the histories of different media and the creators behind them may be unjustifiably diminished with the advent of AI-driven creation, where artistic legacy is obscured by black-boxing. Platforms like Sora and Suno point to the need for large AI technology firms to attend to copyright licensing for the content used to train the models that produce AI-generated creative work and media. In unique cases such as Grimes’ open-sourced vocal-AI tool, a producer merely registering their production with the tool may not clearly settle the copyright measures that apply if the production must later be taken down. Defining AI agent autonomy remains difficult, and future discussion and qualitative, in-depth case analysis will be needed to reshape our collective understanding of human creative intellect alongside AI tools and AI-driven creation.

References

  1. Genius. (n.d.). https://genius.com/static/licensin

  2. Lauber-Rönsberg, A., & Hetmank, S. (2019). The concept of authorship and inventorship under pressure: Does Artificial Intelligence shift paradigms? Journal of Intellectual Property Law & Practice, 14(7), 570–579. https://doi.org/10.1093/jiplp/jpz061

  3. Eqbal, A. (2025). AI artist Xania Monet has hit the Billboard charts. What does it mean for “real” musicians? CBC News. https://www.cbc.ca/news/entertainment/ai-artist-xania-monet-radio-billboard-chart-9.6967542

  4. Ning, H. (2023). Is it fair? Is it competitive? Is it human?: Artificial intelligence and the extent to which we can patent AI-assisted inventions. Journal of Legislation, 49(2), 421–448.

  5. Neuwirth, R. J. (2025). Intellectual property law and generative artificial intelligence: Fair remuneration, equality or ‘my plentie makes me poore.’ Journal Of Intellectual Property Law and Practice, 20(9), 633–642. doi:10.1093/jiplp/jpaf029 

  6. OpenAI. (n.d.). https://openai.com/about/ 

  7. Poltz, J., & Heine, F. (2025, November 11). OpenAI used song lyrics in violation of copyright laws, German court says. Reuters. https://www.reuters.com/world/german-court-sides-with-plaintiff-copyright-case-against-openai-2025-11-11/

  8. Romo, V. (2023, April 24). Grimes invites fans to make songs with an AI-generated version of her voice. NPR. https://www.npr.org/2023/04/24/1171738670/grimes-ai-songs-voice

  9. OpenAI. (2025, September 23). Sora 2 is here. https://openai.com/index/sora-2/

  10. Tzimas, Th. (2025). Evolution of copyright in the era of Artificial Intelligence: Analysis of conflicts of law and judicial precedents. Journal of Digital Technologies and Law, 3(1), 35–64. https://doi.org/10.21202/jdtl.2025.2

  11. Zellner, X. (2025, November 12). How many AI artists have debuted on Billboard’s charts? Billboard. https://www.billboard.com/lists/ai-artists-on-billboard-charts/xania-monet/