The AI Colleague: Rethinking the Legal Profession for the Next Generation

Written By: Lucie Kapustynski, Zainab Khalil, Lewon Lee, and Lali Sreemuntoo

Editing By: Clara Lee and Sarah Naveed 

November 16, 2025. The LexAI Journal

Generative Artificial Intelligence is poised to transform the legal field by addressing the gap between law school and real-world practice. To explore this idea, Sarah Naveed, the student-founder and president of LexAI, met with Dr. Megan Ma, Executive Director of Stanford Legal Innovation, who is investigating parallels between legal practice and other professional fields to design AI tools that enhance experiential learning. By conceptualizing Artificial Intelligence (AI) as a collaborative colleague, law students can explore complex simulated legal scenarios in a guided way. At the intersection of AI and legal education, these innovations invite reflection on their impact on justice, accessibility, and ethical practice.

AI for All or Only for Some? 

By: Zainab Khalil

AI is increasingly being used in different ways, especially in the legal world, generating both enthusiasm about narrowing gaps in the justice system and concern about widening them. As Dr. Megan Ma highlights, these tools can expand who receives legal help (Naveed, 2025). A public-interest lawyer with too many files, a small firm without a research team, or a self-represented individual trying to understand a contract could all benefit from AI through summaries and guides on common legal steps that would otherwise be too expensive or unavailable.

I think that AI should not replace lawyers; instead, it can help fill gaps by offering people support they didn’t previously have access to. Dr. Ma also highlights the other side: not everyone has access to the same version of AI and technology (Naveed, 2025). Wealthier firms might pay for more powerful models, allowing them to complete work more efficiently. Meanwhile, legal clinics, community organizations, small firms, and even low-income clients are often left with systems that are more prone to LLM errors. This creates a heightened risk which widens the gap (Naveed, 2025).

Dr. Ma suggests that, as a way forward, AI should be treated as a utility, similar to the internet (Naveed, 2025). If institutions and governments, especially in the legal field, choose to make it more readily available, it could help close some gaps instead of further widening them (Naveed, 2025). Her example of Stanford Law School buying its own computing power to ensure every student has equal access shows that institutions can help make AI access more equitable (Naveed, 2025). Ultimately, I think that it all depends on the choices we make around AI access through cost, responsibilities, and underlying bias.

Modernizing Workflows: How Will AI Shape Early Legal Development?

By: Lali Sreemuntoo

This is an opinion-based article written after reading S. Naveed’s interview with Dr. M. Ma.

As the legal sector becomes increasingly technologically dependent, generative AI tools are emerging as potential collaborators in legal work. These tools promise to address long-standing gaps in junior associate training, yet they also raise important questions about how articling students and fresh graduates will develop foundational skills.

One of the most widely discussed examples is Harvey AI, a generative AI platform designed specifically for legal workflows. Harvey assists lawyers with document drafting, contract analysis, legal research, and summarizing complex materials. By automating low-level or repetitive tasks, Harvey allows lawyers to focus on higher-level reasoning and strategy. Proponents argue that this can benefit new lawyers. Instead of spending countless hours on mechanical work, junior associates can directly engage with substantive issues earlier in their careers.

Yet there are important concerns about the long-term impact on skill development. Traditionally, the basic administrative tasks of producing a memo, drafting a contract, or conducting research were used to get juniors to engage deeply with the material. This would encourage them to verify sources, assess the strength of arguments, and anticipate potential challenges in order to complete the task. Each step carried a degree of ownership, as errors could have real consequences for the client or the supervising lawyer. However, when AI performs a substantial portion of these tasks, students are removed from this experiential feedback loop. They may begin to view legal work as a product to be obtained efficiently rather than a process that requires careful reasoning, critical judgment, and ethical awareness.

This shift has several nuanced implications. First, students may become over-reliant on AI outputs, assuming the results are authoritative without critically evaluating the reasoning behind them. Legal errors or oversights generated by AI could go unnoticed, reducing students’ ability to detect and correct mistakes independently, a skill that is vital for professional responsibility. Second, ethical reasoning may be dulled. When AI handles routine but ethically sensitive work, such as identifying conflicts, interpreting ambiguous contractual language, or framing arguments, students may not internalize how to weigh competing duties, prioritize client interests, or navigate the moral dimensions of practice. Over time, the subtle but crucial link between responsibility and professional identity could be weakened, leaving new lawyers less confident in their own judgment.

Furthermore, this dynamic may affect accountability within the firm itself. Supervising lawyers might assume that AI-assisted drafts are accurate and complete, potentially reducing direct engagement with juniors and shrinking opportunities for mentorship. The combination of technological delegation and diminished feedback loops risks producing a generation of lawyers who can produce outputs efficiently but may struggle with independent problem-solving, risk assessment, or ethical decision-making in situations where AI guidance is unavailable or inappropriate.

When Neutral Isn’t: Teaching Justice in the Age of Algorithms 

By: Lucie Kapustynski

The bias coded into AI poses a challenge for the future of law as legal education adopts AI agents to train students. AI bias arises when machine-learning models reinforce systemic discrimination against marginalized communities, presenting historical inequities in the guise of neutrality. These algorithmic distortions often stem from training data that reflect social prejudices, leading to the replication of societal biases related to race, gender, class, or sexuality (Holdsworth, 2025). Beyond its sociological dimensions, AI bias is also a technical inevitability: outputs mirror the biases of their data.

Most AI development pipelines involve five to eight core stages, with bias emerging at several critical points. From the earliest stages of data gathering through labeling and model training, AI systems absorb bias caused by unbalanced datasets, subjective annotations, and design choices (SAP, 2024). Models that appear unbiased in controlled settings may manifest discriminatory patterns once deployed, where untested inputs and limited monitoring reveal hidden flaws (Chapman University, 2025). These dynamics demonstrate that AI systems are not neutral by design; they inherit and then reflect existing structural inequities.
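
To make the data-stage risk concrete, the toy sketch below shows the kind of pre-training audit such a pipeline might include. It is illustrative only: the dataset, group and label names, and imbalance threshold are all hypothetical assumptions, not part of any real system.

```python
# Minimal sketch: auditing training labels for group imbalance before training.
# Dataset, group names, labels, and the 0.2 tolerance are hypothetical.
from collections import Counter

# Each record: (demographic_group, label), e.g., outcome annotations in a dataset
training_data = [
    ("group_a", "favorable"), ("group_a", "favorable"), ("group_a", "unfavorable"),
    ("group_b", "unfavorable"), ("group_b", "unfavorable"), ("group_b", "favorable"),
    # ... a real dataset would have thousands of rows
]

def favorable_rate_by_group(rows):
    """Share of 'favorable' labels within each demographic group."""
    totals, favorable = Counter(), Counter()
    for group, label in rows:
        totals[group] += 1
        favorable[group] += (label == "favorable")
    return {g: favorable[g] / totals[g] for g in totals}

rates = favorable_rate_by_group(training_data)
print(rates)  # -> group_a ≈ 0.67, group_b ≈ 0.33

# A large gap signals the model will likely learn the skew in the annotations,
# not a neutral picture of the world.
if max(rates.values()) - min(rates.values()) > 0.2:  # hypothetical tolerance
    print("Warning: label distribution is imbalanced across groups.")
```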

Within legal education, the bias embedded in AI makes responsible integration essential, as generative tools aimed at narrowing experiential gaps can reassert disparities. While AI platforms like Harvey AI offer promising simulations that allow students to experiment with complex legal scenarios in a low-risk environment, their effectiveness depends on equitable access, critical oversight, and literacy in AI usage (Naveed, 2025). Without these precautions, AI-trained students may mirror the algorithm’s biases in their legal reasoning, unintentionally reinforcing patterns of discrimination in practice. 

The consequences of biased AI are not theoretical and surface severely in legal contexts. The COMPAS risk-assessment tool, used in U.S. courts to assess defendants’ risk of reoffending, misclassified Black defendants as high-risk at almost twice the rate of white defendants (Chawla, 2022). These were not simple algorithmic mistakes; they were the reproduction of structural inequities encoded in data presumed to be objective. If future lawyers are educated through AI systems shaped by these same biases, those distortions will extend beyond the screen, flowing directly into courtrooms, case strategies, and the administration of justice itself.
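
The disparity Chawla describes is, at its core, a gap in false positive rates. The following sketch shows how such an audit is computed; the records are invented solely to mirror the “almost twice the rate” pattern, not actual case data.

```python
# Minimal sketch of a false-positive-rate audit, the kind of check that
# surfaced the COMPAS disparity. All records below are hypothetical.

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    # ... real audits examine thousands of cases
]

def false_positive_rate(rows, group):
    """Among defendants in `group` who did NOT reoffend, share flagged high-risk."""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("black", "white"):
    print(g, round(false_positive_rate(records, g), 2))
# -> black 0.67, white 0.33: twice the false positive rate, the pattern cited above
```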

Depending on the method of engagement, AI agents possess a dual capacity: they can illuminate or misrepresent. Through a disciplined approach of critical inquiry, AI evolves past convenience to become a space for dissecting structural inequities, exploring ethical challenges, and turning flawed outputs toward corrective action. When treated as artifacts to analyze, these systems can be turned back upon themselves, revealing the real-world biases they replicate. In this reflexive mode, where the AI agent is treated less as an imagined colleague and more as an object of critique, it offers students the chance to confront patterns of injustice, rethink the logics that produce them, and develop the discernment necessary for principled legal practice.

Addressing the Concern of Affordability and Inequality in the Legal Field 

By: Lewon Lee

The average Common Law degree in Canada costs $18,436.39 annually (Helenchilde, 2022). With this in mind, pursuing law as a career may be seen as a “pathway to the rich,” since individuals from lower-income households may not have the financial means to pay for a law degree (Helenchilde, 2022). Given that the legal field already requires a substantial monetary investment, could the implementation of AI—which may be both costly and reduce billable hours—create another barrier for individuals from lower-income households to pursue the profession? Additionally, could this create a larger gap in representation between lower-income individuals and those from higher-income households in the legal field? These are the questions this section aims to address.

To answer these questions, it is important to provide some background on AI use in the legal field. According to an article by Marjorie Richter, AI has been present in the legal field since 2000, demonstrating that it is far from a new concept (Richter, 2025). However, this does not mean that its current developments will not drastically reshape the profession. AI usage in the 2000s typically involved searching for documents, improving results by identifying concepts as opposed to mere word matches, and other basic functions. In contrast, the usage of AI in the contemporary legal field is far more sophisticated. Citing Thomson Reuters’ Generative AI Report, Richter notes that the most common uses of AI include document review and analysis, legal research, memo writing, contract drafting, document summarization, and correspondence drafting (Richter, 2025). With these points in mind, it is evident that AI can and will drastically improve efficiency. However, how would this affect legal careers? Two major concerns arise from large-scale AI usage. First, individuals from lower-income households already view law school debt as a major risk, so the added pressure of purchasing expensive AI tools to stay competitive could further exacerbate the financial burden of pursuing a legal career. Second, higher efficiency from AI may lead to a reduction in billable hours, which, much like the first concern, would make law even more risky and financially difficult to pursue for lower-income individuals.

In addressing the first concern, the cost of AI software for the legal field is difficult to measure, as many companies do not publicly disclose pricing. However, if we consider the price of Clio—one of the more recognized AI-integrated legal software companies—their cheapest AI-related service costs $49 USD per month, with their most expensive service amounting to $149 USD per month (Clio, n.d.). While it is not free, it is far from unaffordable when compared to the high tuition fees of law school. Additionally, Clio is not the only AI legal service available; other platforms, such as Filevine, broaden the range of choices available to lawyers. Overall, while the normalization of subscribing to AI legal services to keep up with competition may contribute to economic inequality and increase the risks for lower-income individuals entering the field, current pricing suggests that fears of a drastic shift in financial accessibility may be overstated.
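
A rough back-of-the-envelope calculation puts these figures in proportion. The sketch below glosses over the currency mismatch (tuition is quoted in CAD, Clio’s pricing in USD) and treats the amounts as orders of magnitude rather than an exact conversion.

```python
# Back-of-the-envelope: annualized AI tooling cost vs. one year of tuition.
# Figures are taken from the sources cited above; CAD/USD mixing is deliberate
# and approximate, for scale only.
clio_monthly_usd = {"cheapest": 49, "most_expensive": 149}
tuition_annual_cad = 18_436.39  # average Canadian Common Law tuition (Helenchilde, 2022)

for tier, monthly in clio_monthly_usd.items():
    annual = monthly * 12
    share = annual / tuition_annual_cad
    print(f"{tier}: ${annual}/year ≈ {share:.0%} of one year of tuition")
# cheapest: $588/year ≈ 3% of one year of tuition
# most_expensive: $1788/year ≈ 10% of one year of tuition
```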

Whether the second concern, that higher efficiency would lead to fewer billable hours and cause further problems for lawyers from lower-income backgrounds, materializes depends on how AI is implemented, and it may well not occur. There is no doubt that AI will increase productivity, as shown by its many current uses in the legal field. In fact, an article by Robert J. Couture highlights that it may increase productivity by 100-fold and reduce associate time on some tasks from 16 hours to 4 minutes (Couture, 2025). However, increased productivity does not mean fewer billable hours. Instead, an increase in productivity may free up time so that lawyers can focus their energy on areas where AI assistance would be less helpful, such as analysis and strategy (Couture, 2025). While the exact mechanisms behind this are unclear, it is evident that billable hours are not necessarily reduced as a consequence of implementing AI services (Couture, 2025). As such, AI should be used as a tool rather than a replacement for all tasks. After all, there have been instances where AI has fabricated non-existent legal cases, which creates serious concerns about misinformation (Couture, 2025). Combatting this issue would require lawyers to oversee the AI to ensure that it does not make any major mistakes.

Conclusion

Although AI has shown signs of integration in law since the 2000s, it is evident that recent advancements have significantly expanded its potential role within the legal field. However, these possibilities bring forth ethical concerns regarding inequality in the legal field and the possible reproduction of racial biases by AI. While research suggests that AI may not exacerbate existing inequalities, precautions should nonetheless be taken so that companies developing AI legal assistants ensure their services remain affordable to all lawyers. Furthermore, ethical AI development in the legal field must prioritize preventing errors and biases, such as fabricating non-existent legal cases or introducing subjective beliefs that have no place in court, which could undermine the legal system. Without careful safeguards, firms risk prioritizing convenience over the foundational training that ensures long-term competence.

References

  1. Chapman University. (n.d.). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx

  2. Chawla, M. (2022, February 23). COMPAS case study: Investigating algorithmic fairness of predictive policing. Medium. https://mallika-chawla.medium.com/compas-case-study-investigating-algorithmic-fairness-of-predictive-policing-339fe6e5dd72

  3. Clio. (n.d.). Clio: Pricing. https://www.clio.com/pricing/ 

  4. Couture, R. J. (2025, February 24). The impact of artificial intelligence on law firms’ business models. Harvard Law School Center on the Legal Profession. https://clp.law.harvard.edu/knowledge-hub/insights/the-impact-of-artificial-intelligence-on-law-law-firms-business-models/

  5. Helenchilde, I. (2022, December 29). High cost of law school makes a legal career an exclusive pathway for the rich. Capital Current. https://capitalcurrent.ca/high-cost-of-law-school-makes-a-legal-career-an-exclusive-pathway-for-the-rich/

  6. Holdsworth, J. (2025, July 22). What is AI bias? IBM. https://www.ibm.com/think/topics/ai-bias

  7. Naveed, S. (2025, June 16). Meeting with Professor Megan Ma [Microsoft Teams meeting].

  8. Richter, M. (2025, August 28). Artificial intelligence and law: Guide for legal professions. Thomson Reuters Law Blog. https://legal.thomsonreuters.com/blog/artificial-intelligence-and-law-guide/

  9. SAP. (n.d.). What is AI bias? Causes, effects, and mitigation strategies. https://www.sap.com/resources/what-is-ai-bias



Silicon Valley’s ‘Tech Bros’ as Military Generals: The Implications of AI-Militarization on the Global Stage

Written By: Sarah Naveed

November 9, 2025. The LexAI Journal

Military institutions have historically operated within a framework of international politics that centers on sovereignty, state actors, and treaties. Modernization has introduced a new element into military thinking and operations: the implementation of technology. “Colossus was built in 1944 to crack Nazi codes. By the 1950s computers were organising America’s air defences. In the decades that followed, machine intelligence played a small part in warfare” (The Economist, 2024), a part that has only grown with the rise of military artificial intelligence led by big tech companies like Palantir, marking a fundamental shift in how power is built and exercised. The enlistment of “tech bros” into the US military has sparked great controversy amongst the public. Meta’s chief technology officer called it “the great honor of my life” (Booth, 2025) to be enlisted in a new US army corps that defence chiefs set up to better integrate military and tech industry expertise, one that includes senior figures from top tech firms such as Palantir and OpenAI. This calls into question the standards of the international code surrounding warfare activity and ethics, and raises the question of what new kind of governance is necessary for its moderation.

Palantir and AI Reshaping the Standard of Warfare

Questions arise as to how states can deal with the inevitable digitalization of human affairs and practices while also considering how to maintain ethics on the international stage. This discussion includes the United States and Silicon Valley’s Palantir, whose claims of AI responsibility and etiquette warrant challenge; Lavender, a vital example of an active AI tool used in a real-time war setting; and the transformation of global politics through the erosion of traditional state-based control over warfare and the redefinition of the boundaries of military power.

Ranked among Silicon Valley’s most valuable companies, Palantir focuses on AI usage and integration in governance and the military. Through an Americanist perspective, Palantir’s core mission is to maintain and secure high-level intelligence and defence systems for the US and its allies, grounded in the “belief that the United States, its allies, and partners should harness the most advanced technical capabilities for their defense and prosperity” (Palantir, 2025). Priding itself on battlefield data accessibility and connectivity that allow soldiers to operate efficiently and minimize expended energy, with “agile decision-making… to out-think and out-pace the adversary” (Palantir, 2025), the company’s systems act as the connective tissue between the army’s personnel, data, and resources. When looking at the pro-combative nature of these operations, two concerns arise: who defines the adversary, and what data informs the system’s outputs? An overreliance on AI substituting for fast thinking or reflex skills may also degrade the quality of conventional intelligence officers and strategists. Even now, data bias rooted in political affiliation or prejudice against racial or religious groups, together with countries relying on the generative aspects of AI, such as images or videos, could mislead or provoke opposing state leaders, posing the challenge of cognitive warfare. An example is Russia, which “used AI to mislead and deceive Americans during the 2020 US presidential election, and are reportedly attempting to do so again during the 2024 election” (Lushenko and Carter, 2024).

The use of AI in this context channels public discourse toward staying aligned with its purpose. With its Americanist perspective, Palantir actively shapes public and governmental perceptions of military AI, framing it in ways that favor its interests and resist oversight. The ability of tech platforms to influence the information users engage with parallels how big tech companies in the military sector frame their operations as being for the benefit of the state when, in reality, the benefits are privatized.

With this consideration, the mass production of misinformation for deception and confusion becomes easily accessible to states, which will ultimately complicate international order and inter-state relations. One may ask whether the quality of military intelligence has the potential to suffer at the hands of an AI-reliant military.

The militarization of artificial intelligence, driven by private tech companies like Palantir and legitimized by fantastical “tech bro” culture and its cultural and political influence, is transforming global politics by eroding traditional state-based control over warfare and redefining the boundaries of military power. These entrepreneurs “stand to profit immensely from militaries’ uptake of AI-enabled capabilities” (Lushenko and Carter, 2024), which creates a divide between the upholding of state ideals and private self-interest. The puzzle then becomes how to address the necessity of AI governance on an international scale. In the probable case that militarized AI becomes the global standard, a disordered and automated world would become the new norm. State actors or developers should not be solely responsible; rather, the lack of governance in this budding sector means that, if it goes unmonitored, any AI-powered conflict will push international law to the margins (The Economist, 2024), creating international discord.

Lavender as a Contemporary Example of AI Militarization 

Lavender, an Israeli artificial intelligence-based program, is relied upon like a commanding officer: it mass-generates targets, and its outputs are seamlessly accepted without the necessary contemplation or any “requirement to thoroughly check why the machine made those choices” (Abraham, 2024). What is being seen with AI militarization is the gradual erosion of military standards, where soldiers grow used to having AI generate their targets and execute high-stakes decisions “as if it were a human decision” (Abraham, 2024) for their ease. The moment a technology is created that encourages fast-paced data collection, decision-making, and execution without consulting experts in the local area or making a conscious effort to form a second opinion, it becomes a prominent example of the “slowing down” of military personnel, who lack an understanding of the tools they are using.

In cases where “AI is designed to perform higher-order cognitive tasks, like reasoning, that are normally reserved for humans” (Abraham, 2024), one may question how effective it would be for other considerations, such as handling classified information, monitoring how high in the decision chain AI should be used, and more. A gap exists in our understanding of states actively relying on AI and technology in their armed forces. Objectively, the global stage has yet to see how AI militarization could be a good thing. While the only definitive test would be to observe various states at war and the influence of bad actors on the technology, the logical next step is to pursue international regulation before that happens. However, current affairs only underscore the erosion of international military politics and expectations, where “scholars claim that AI is the sine qua non, absolutely required for victory” (Abraham, 2024), and boundaries are redefined at the hands of artificial intelligence.

Where to go from here: Two Proposed Solutions

There are two attitudes toward addressing the militarization of AI on an international scale. One encourages the preservation of state legitimacy and the logic of adhering to an international order, with the effective governance of AI defined as “international cooperation,” with countries collaborating “to establish global AI usage standards and norms” (Marwala, 2023). The other encourages the standardization of AI to set a baseline of fairness and consistency. Cole et al. offer reasoning for why an international order is necessary, pointing to a “democratized access to destructive power” that empowers “state and non-state actors alike to access and wield technologies—such as drone swarms and suicide drones—that were once reserved for only the most advanced militaries” (Cole et al., 2025), thereby blurring the lines of who can or should have access to these kinds of technologies.

Marwala illustrates the opposing perspective of standardizing AI as it becomes more integrated into society, regardless of whether that integration is a good thing. With “standardization ensuring AI systems’ consistency, dependability, and fairness while fostering innovation and competition” (Marwala, 2023), society can strip AI of its mystical, taboo quality and turn it from an autonomous tool into one we can govern and manage. However, in the context of the armed forces, broad accessibility may be less pertinent than establishing an international code. In the case of mass generalization, where AI tools are given to states to defend themselves against bad actors, is the global stage supposed to simply hope that those states will not adopt similar practices themselves? This perspective seems too idealistic for the catastrophic reality of warfare. One may argue that, on the precipice of major destruction and the corruption of military and state relations, state actors cannot afford to stray from building a regulatory code for the safe and effective interpretation and usage of AI.

With states like the US and Israel at the forefront of this innovation, and companies like Palantir pledging protection to their allies, a narrative of complacency emerges amongst those allies. This shift introduces new forms of digital colonialism, inefficiency in military personnel, the weakening of international norms, and a threat to the overall stability of global power relations. Since this paper mainly addresses ethics and some policy reforms, the establishment of an international order seems to be a given; however, one may question the functionality of that standard and the ways in which states may ethically use AI in warfare to shape the general attitude toward AI as a tool in human affairs. Examples could include on-demand counselling to safeguard the mental and physical capacities of military personnel, or an in-ear supervising body ready to assist each member.

Conclusion 

Palantir’s history and its self-portrayal of ethical AI highlight the gaps between corporate claims of responsibility and the realities of capitalizing on privatized profits. This is shown through Lavender and state use of AI technologies in conflict, which demonstrate overreliance on AI and the decline of active human thinkers. The result is a transformation from traditional state-centered control to private actors gaining control of military power through the newfound accessibility that AI technologies provide.

References 

  1. Abraham, Yuval. 2024. “’Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” +972 Magazine. https://www.972mag.com/lavender-ai-israeli-army-gaza/
  2. Booth, Robert. 2025. “Meta boss praises new US army division enlisting tech execs as lieutenant colonels.” The Guardian. https://www.theguardian.com/technology/2025/jun/25/meta-exec-us-army-enlistment.

  3. Cole, Leah, Sam Weiss, Dauren Kabiyev, and Sarah Lober. 2025. “AI and Geopolitics: Global Governance for Militarized Bargaining and Crisis Diplomacy.” Belfer Center. https://www.belfercenter.org/research-analysis/ai-and-geopolitics-global-governance-militarized-bargaining-and-crisis-diplomacy.

  4. Lushenko, Paul, and Keith Carter. 2024. “A new military-industrial complex: How tech bros are hyping AI’s role in war.” Bulletin of the Atomic Scientists. https://thebulletin.org/2024/10/a-new-military-industrial-complex-how-tech-bros-are-hyping-ais-role-in-war/.

  5. Marwala, Tshilidzi. 2023. “Militarization of AI Has Severe Implications for Global Security and Warfare.” United Nations University. https://unu.edu/article/militarization-ai-has-severe-implications-global-security-and-warfare.

  6. Palantir. 2025. “Palantir AI Ethics.” Palantir. https://www.palantir.com/pcl/palantir-ai-ethics/.

  7. The Economist. 2024. “AI will transform the character of warfare.” The Economist. https://www.economist.com/leaders/2024/06/20/war-and-ai

The EU AI Act and the New Era of Accountable Innovation

Written By: Chaewon Kang

October 26, 2025. The LexAI Journal

The European Union has created the world’s first comprehensive legal framework for artificial intelligence, one that will likely reshape how innovation is governed not only within Europe but globally. Having entered into force in 2024, the EU Artificial Intelligence Act represents an unprecedented attempt to regulate AI through a risk-based approach, embedding accountability, transparency, and human oversight into law.

This ambitious blueprint will do more than set rules; it will redefine how companies build, deploy, and safeguard innovation in the AI age (Kenton Jr. & Gorgin, 2025). The EU’s framework also emerges amid a global push toward responsible AI governance. Canada, for instance, has taken a proactive approach since launching its National AI Strategy in 2017. Although the proposed Artificial Intelligence and Data Act, also known as AIDA, was recently halted (Arai, 2025), its trajectory shows that the paths countries take toward AI regulation and law-making are far from uniform. As policymakers navigate these evolving landscapes, the EU AI Act stands out as both a legal precedent and an ethical experiment in the governance of innovation.

Origins and Context

The EU AI Act is rooted in Europe’s broader digital and ethical governance agenda, built on principles of human dignity, fundamental rights, and precautionary oversight. Proposed by the European Commission in 2021 as part of its European Strategy for AI, the Act sought to establish trust as the foundation of AI development and adoption (European Commission, 2025). Over three years of negotiation, the European Parliament and the Council of the EU worked to reconcile competing visions: some member states, such as France, emphasized flexibility to support innovation, while others, such as Germany, prioritized stronger ethical guardrails. The resulting compromise reflects the EU’s self-conception as a “normative power”, using regulation not only to structure its internal market but to export its values globally. Much like the General Data Protection Regulation, also known as the GDPR, before it, the AI Act exemplifies the so-called “Brussels Effect,” extending Europe’s influence far beyond its borders (Bradford, 2019).

A Risk-Based Framework

The core of the AI Act lies in a risk-based classification system that assesses the level of risk posed by AI systems to fundamental rights and safety, rather than targeting specific technologies. The first tier, unacceptable risk, includes social scoring, manipulative AI, and real-time biometric surveillance; these systems are banned outright as incompatible with EU values. The second tier, high-risk systems, covers applications in sectors such as healthcare, employment, education, and law enforcement, which are subject to strict requirements regarding data quality, documentation, human oversight, and post-deployment monitoring. Providers must complete conformity assessments before placing these systems on the market. The third tier, limited- or minimal-risk applications, such as chatbots and customer service tools, is primarily bound by transparency obligations, for example, informing users that they are interacting with an AI system (EU Artificial Intelligence Act, 2024). To support compliance and oversight, the EU has introduced several implementation tools; the EU AI Act Compliance Checker, for instance, helps small and medium enterprises identify their potential obligations. By focusing on risk levels rather than specific technologies, the Act aims to remain flexible and technologically neutral. However, this flexibility introduces an ongoing challenge: how risk is to be defined, interpreted, and enforced across all member states.
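
For illustration, the Act’s tiered logic as summarized above can be sketched as a simple lookup. The example use cases and one-line obligations below are simplified paraphrases for exposition, not the Act’s actual text or legal advice, and a real classification would require legal analysis rather than a dictionary.

```python
# Illustrative sketch of the risk-tier logic described above; categories follow
# the article's summary, and all mappings and obligation strings are paraphrases.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, data quality, human oversight, monitoring"
    LIMITED_OR_MINIMAL = "transparency obligations (e.g., disclose the AI)"

# Hypothetical mapping from use case to tier, following the tiers above
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric surveillance": RiskTier.UNACCEPTABLE,
    "employment screening": RiskTier.HIGH,
    "exam proctoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED_OR_MINIMAL,
}

def obligations_for(use_case: str) -> str:
    # Defaulting unknown uses to the lowest tier is a simplification;
    # in practice, an unlisted use case needs its own legal assessment.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LIMITED_OR_MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations_for("employment screening"))
# employment screening: HIGH -> conformity assessment, data quality, human oversight, monitoring
```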

Implications Across Sectors

The EU AI Act’s implications reach across industries and borders. For businesses, it introduces rigorous documentation and transparency requirements, encouraging cross-functional collaboration between technical, legal, and ethical teams. Compliance will require not only procedural adaptation but also cultural change, embedding AI governance within corporate accountability structures rather than treating it as an afterthought. For governments, the challenge lies in balancing innovation with oversight. The Act demands active regulatory engagement without discouraging technological growth. On a global scale, the EU’s approach is expected to influence trade relations, and AI systems entering the European market must comply with its provisions, extending Europe’s governance reach worldwide (Csernatoni, 2025).

Ethically, the Act repositions accountability from individuals to institutions. It reframes the debate from what AI can do to who is responsible when it does. In doing so, it transforms ethical expectations into legal obligations, illustrating how law can operationalize moral values. In this way, the AI Act demonstrates both the potential and the limits of translating ethical ideals into legal form. It codifies abstract principles, such as fairness, transparency, explainability, and human oversight, into binding legal requirements. Yet this process inevitably invites ambiguity. What constitutes fairness in algorithmic decision-making? How can “human oversight” be meaningfully exercised when systems operate autonomously?

Global Perspective & Looking Ahead

While the EU AI Act remains the most comprehensive legislative initiative to date, it is not alone in shaping the global conversation on AI governance. Canada, for example, has taken a proactive approach to AI governance, balancing innovation with ethical risk management. The country was among the first to implement a national AI strategy in 2017, followed by the Directive on Automated Decision-Making in 2019, its role as a founding member of the Global Partnership on AI in 2020, and the establishment of the Canadian Artificial Intelligence Safety Institute in 2024.

However, Canada’s federal Artificial Intelligence and Data Act, also known as the AIDA, was halted in early 2025, signalling a shift toward decentralized policymaking. Provinces such as Ontario have since advanced their own measures, including Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act (Government of Ontario, 2024), while federal and sectoral bodies, such as the Treasury Board, continue to shape standards. This evolving mosaic contrasts with the EU’s unified risk-based model, underscoring the diversity of democratic approaches to AI governance.

Ultimately, the EU AI Act is more than a regulatory milestone; it is a test of how law, ethics, and innovation can coexist. As AI becomes ever more embedded in decision-making, the critical question will not only be whether systems are lawful, but whether they are just. The path ahead will depend on the capacity of institutions, industries, and societies to translate principles of transparency and accountability into genuine public trust.

References 

  1. Arai, Maggie. (2025, February 11). What’s next for AIDA? Schwartz Reisman Institute for Technology and Society. University of Toronto. https://srinstitute.utoronto.ca/news/whats-next-for-aida

  2. Bradford, A. (2019, December 19). The Brussels Effect: How the European Union Rules the World. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

  3. Csernatoni, R. (2025, May 20). The EU’s AI Power Play: Between Deregulation and Innovation. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation?lang=en

  4. EU Artificial Intelligence Act. (2024, February 27). High-level summary of the AI Act. Future of Life Institute. https://artificialintelligenceact.eu/high-level-summary/

  5. European Commission. (2025, October 23). AI Act. European Commission. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  6. Government of Ontario. (2024). Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024, S.O. 2024, c. 24 – Bill 194. King’s Printer for Ontario. https://www.ontario.ca/laws/statute/s24024

  7. Kenton Jr. & Gorgin. (2025, October 23). EU AI Act Demands Informed, Disclosure-Aware Patent Strategies. Bloomberg Law. https://news.bloomberglaw.com/legal-exchange-insights-and-commentary/eu-ai-act-demands-informed-disclosure-aware-patent-strategies

If AI Broke Education, Can We Fix It?: Reconstructing Learning through AGI “Robin Hoodism”

Written By: Lucie Kapustynski and Lewon Lee

Research By: Zainab Khalil and Lali Sreemuntoo

Editing By: Clara Lee and Sarah Naveed

October 12, 2025. The LexAI Journal

A psychology professor at the University of Toronto Scarborough and Director of the Advanced Learning Technologies Lab, Steve Joordens, has been exploring the possibility of artificial intelligence (AI) office hour agents to enhance learning. To detail this innovative approach to education, Sarah Naveed, the student-founder and president of LexAI, met with Professor Joordens to discuss his ideas. Designed to complement rather than replace instructors, an AI office hour agent is a virtual system that acts as an interactive conversation partner, adding to traditional office hours. These agents’ capacity to expand availability and accessibility, their impact on student mental health, and the design choices that guide their behaviour reveal both the transformative possibilities and the ethical complexities of integrating technology into higher education.

Always On, Always Active: Learning Without Limits

Written By: Lucie Kapustynski

The availability of AI office hour agents offers a versatile resource for students and instructors. In the Greater Toronto Area, a review of existing research reveals that temporal and logistical constraints within conventional academic structures often restrict access to additional support (Allen and Farber, 2018). By providing on-demand assistance, AI agents offer continuous guidance, from small clarifications to conceptual explorations, extending the reach of office hours to whenever students need support. Importantly, these systems do not replace educators or their expertise; instead, they function as collaborative partners, addressing routine questions, identifying areas of difficulty, and providing adaptable feedback (Toner et al., 2024). Working alongside the distinctive pedagogical role of instructors, AI agents can empower students to engage with course material through self-directed exploration. 

Bridging Gaps and Building Pathways? An Agent’s Role Towards Inclusive Education

Beyond their immediate functionality, AI office hour agents may significantly advance educational accessibility. Professor Joordens explains that students could personalize their agents, aligning responses to individual needs and maximizing their potential (Naveed, 2025). For students with varied learning styles, AI agents can be particularly transformative: multilingual capabilities reduce language barriers, while adaptive interfaces accommodate individualized learning needs (edCircuit, 2025). A study of AI-supported learning tools in university courses found that students from diverse backgrounds experienced enhanced accessibility, with the technology facilitating deeper engagement, personalized learning, and the ability to address knowledge gaps at their own pace (Hadar Shoval, 2025). Consequently, the lecturer observed “a marked improvement in the quality of class discussions” after only a few weeks of implementation (Hadar Shoval, 2025). Together, these findings suggest that the personalized support of an agent expands learning opportunities. By reinforcing accessibility as a fundamental element of inclusive education, AI agents promote interactive experiences that empower students to reach their full potential.

Replacing Reflection with Consumption: The Impact of Continuous Support

While AI agents provide availability and accessibility, they address only one dimension of mentorship; human connection remains irreplaceable, raising concerns about how AI may reshape mentorship’s relational core. Grounded in mutual recognition, office hours are where curiosity meets mentorship, offering students a dedicated space for dialogue and the building of professional rapport with instructors. A study that compares human mentors and ChatGPT cautions that generative AI tools “cannot replicate the relational depth and empathetic nuance integral to effective human mentorship,” detailing what is lost when technological efficiency enters interpersonal connection (Lee and Esposito, 2025). When AI occupies academic space, relationships between instructors and students are mechanized by a series of algorithmic exchanges, absent of the guidance informed by lived experience and emotional attunement. Replacing these encounters with AI agents may trivialize mentorship into a transactional process where efficiency overshadows the skills that human mentors uniquely cultivate, ultimately undermining the purpose of mentorship in higher education.

As AI becomes more prevalent in education, the convenience of these agents risks compromising the essence of the learning experience. Education transcends the passive transmission of information; it develops critical thinking, adaptability in confronting uncertainty, and responsibility for one’s reasoning. Recognizing that AI disrupts these aspects of learning, institutions such as the University of Toronto have issued reports to ensure it “is used safely, securely, and appropriately,” demonstrating its potential to challenge both the developmental purpose and academic integrity of higher education (University of Toronto, 2025). 

When the guidance of office hours is instantly available, as occurs with large language models, students may lose the incentive to plan, structure, and prioritize their studies. This convenience diminishes disciplined effort and encourages a dependence on immediate help. The strength of AI’s availability thus reveals an inherent flaw: in prioritizing constant presence, it equates education with immediacy, which overshadows the path towards intellectual growth. As a result, this support risks reducing learning to a task of rapid consumption rather than a process of sustained, active engagement.

The Illusion of Inclusion

Along with subverting the substance of learning, AI’s capacity for universal accessibility can conceal persistent systemic inequities. Considerations of accessibility are inseparable from principles of justice: who directly benefits, who is excluded, and how institutions distribute resources when access is contingent on socioeconomic conditions. In practice, large adaptive learning agents require routine maintenance, significant execution costs, and infrastructural funding, as well as reliable devices, high-speed internet, and staff training (College of Education, 2024; Qoussous, 2025). This reality raises the question of where the investments to implement these agents will originate. In the case that this new system would substitute the funds of existing support roles (such as teaching assistants, facilitated study group leaders, or other human-led guidance) completely or partially, the implementation would devalue the positions that have been essential to the functioning of universities. Alternatively, this technology could emerge as an additional financial tier, accessible primarily to those with the means to afford it. While the features of AI agents support diverse learning needs, these capacities are a continuation of pre-existing resources rather than introducing anything unique to the structure of education. In these aspects, AI agents lack intrinsic transformative power and may reproduce existing disparities—from uneven access to opportunity to the diversion of foundational resources. True accessibility entails more than technological developments; without careful design and equitable distribution, AI agents can create the illusion of inclusion while sustaining the very inequities they claim to challenge.

The Potential Effects of AI Office Hours on Mental Health – From a Theoretical Perspective 

Written By: Lewon Lee

One way to approach the potential impacts of AI on mental health can be understood through TalkLab, a Large Language Model (LLM) developed primarily to help users develop social skills. TalkLab can create personalities suited for different situations and incorporates an emotional system that is grounded in validated psychological research—allowing it to simulate human-like conversations (TalkLab, n.d.). In addition, it features natural-language voice interactions, making typing unnecessary (TalkLab, n.d.).

According to Professor Steve Joordens, this kind of AI could be helpful for people who struggle with social anxiety (Naveed, 2025). They can use conversations with characters or celebrity bots they are familiar with as a way to practice social interactions in a low-pressure environment (Naveed, 2025). 

But how could something like TalkLab inform an AI office hour agent? One theory is that if AI office hours were implemented similarly to TalkLab, they could serve as a useful tool for students who are too anxious to meet professors in person, allowing them to receive academic help while indirectly mitigating their social anxiety and improving their social skills. Though the reduction of social anxiety would be a positive side effect rather than the primary purpose, it shows that AI could help convey and communicate course material to those with social anxiety without negatively affecting or worsening students’ mental health.

However, it is also important to consider the possibility that developing an AI’s “emotional sensing system” could create or worsen problems of parasocial relationships and emotional dependence (Nature Machine Intelligence, 2025), which have become a growing concern with AI chatbots and LLMs. For example, during the COVID-19 pandemic, Travis, a man from Colorado, developed a romantic relationship with Lily Rose, a generative AI chatbot created by Replika (Heritage, 2025). Travis, who felt isolated during lockdown, sought companionship through AI. He said, “I expected that it would just be something I played around with for a little while then forgot about,” but “[o]ver a period of several weeks, I started to realise that I felt like I was talking to a person, as in a personality” (Heritage, 2025). He appreciated that Rose offered counsel and listened to him without judgment (Heritage, 2025). However, when Replika restructured its algorithm, the AI became “less interested” in its human partners (Heritage, 2025). This led Travis and many others with similar experiences to fight to regain access to the old version of their AI (Heritage, 2025). This illustrates how people can become emotionally dependent on AI designed for social interaction. To avoid this issue with AI office hour agents, it may be best not to include emotional sensing systems that make the AI too human-like.

AI Office Hours – Why Coding and Evaluation Matters

Finally, we must consider that the usefulness of an AI office hour agent depends largely on how it is coded. For example, if the algorithm behind the agent uses data from across the web, including social media where misinformation is a common issue, the AI could become not only unhelpful but even counterproductive to its educational purpose, especially in sectors like public health (Meyrowitsch et al., 2023). 
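
One illustrative mitigation, sketched below, is to ground the agent’s answers in vetted course material and have it defer to the instructor rather than draw on the open web. The two-entry corpus, naive keyword matching, and fallback message are hypothetical simplifications of a real retrieval pipeline, not a description of any existing system.

```python
# Minimal sketch: answer only from a vetted course corpus, not the open web.
# Corpus contents, matching logic, and wording are hypothetical.

VETTED_CORPUS = {
    "office hours": "Office hours are Tuesdays 2-4pm; see the syllabus for updates.",
    "late policy": "Late assignments lose 5% per day, per the course syllabus.",
}

def answer(question: str) -> str:
    """Return a grounded answer, or defer to the instructor instead of guessing."""
    q = question.lower()
    for topic, vetted_text in VETTED_CORPUS.items():
        if topic in q:
            return f"{vetted_text} (source: course materials)"
    # Refusing to answer beats repeating misinformation from unvetted data
    return "I don't have vetted material on that — please ask your instructor."

print(answer("What is the late policy?"))
print(answer("What does the internet say about vaccines?"))
```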

On another note, Yan et al. (2023) demonstrate several practical challenges that may need to be addressed before LLM-based innovations can deliver educational benefits. The article states that demonstrating a higher level of readiness for AI integration in academia would require more in-the-wild studies beyond laboratory testing (Yan et al., 2023). The way an AI is coded also raises questions of transparency, meaning how aware its stakeholders are of the AI’s development (Yan et al., 2023). For this reason, one proposal is that stakeholders, including educators and students, should become more involved in the developmental process (Yan et al., 2023). This would allow for more human-in-the-loop elements, which would help address practical and ethical issues as they arise.

Conclusion

In conclusion, much of our understanding of AI and its integration into the academic sphere as an office hour agent requires further research, as well as greater consideration of the many ethical issues it raises. While an AI office hour agent has many benefits, and could make academia more accessible and available for those who do not currently have the tools that allow for easy access to education, we must also consider its drawbacks. Furthermore, AI should serve only as an alternative to traditional office hours, for reasons of accessibility and mental health, rather than as a replacement for them. We continue to stress the importance of a human connection that exists beyond AI chatbots, and an AI used for social interaction should not be relied on to the point of dependence. After all, AI is a tool, not a companion. And while it has benefits that could shape the way we live for the better, we must also weigh its ethical concerns before integrating it into our society.

References 

  1. “AI in Schools: Pros and Cons.” College of Education at Illinois, education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools–pros-and-cons. Accessed 8 Oct. 2025.

  2. Allen, Jeff, and Steven Farber. “How Time-Use and Transportation Barriers Limit On-Campus Participation of University Students.” ScienceDirect, www.sciencedirect.com/science/article/abs/pii/S2214367X18300267. Accessed 7 Oct. 2025.

  3. “Toward an AI-Ready University: Guidelines.” University of Toronto, 21 Aug. 2025, ai.utoronto.ca/guidelines/.

  4. Hadar Shoval, Dorit. “Artificial Intelligence in Higher Education: Bridging or Widening the Gap for Diverse Student Populations?” MDPI, Multidisciplinary Digital Publishing Institute, 21 May 2025, www.mdpi.com/2227-7102/15/5/637#B18-education-15-00637.

  5. Heritage, Stuart. “The Women in Love with AI Companions: ‘I Vowed to My Chatbot That I Wouldn’t Leave Him.’” The Guardian, September 9, 2025. https://www.theguardian.com/technology/2025/sep/09/ai-chatbot-love-relationships.

  6. Lee, Jimin, and Alena G. Esposito. “Chatgpt or Human Mentors? Student Perceptions of Technology Acceptance and Use and the Future of Mentorship in Higher Education.” MDPI, Multidisciplinary Digital Publishing Institute, 13 June 2025, www.mdpi.com/2227-7102/15/6/746#B41-education-15-00746.

  7. Meyrowitsch, Dan W., Andreas K. Jensen, Jane B. Sørensen, and Tibor V. Varga. “AI Chatbots and (Mis)information in Public Health: Impact on Vulnerable Communities.” Frontiers in Public Health, October 31, 2023. https://doi.org/10.3389/fpubh.2023.1226776.

  8. Naveed, Sarah. “Meeting with Professor Steve Joordens.” AI Office Hour Agent, June 11, 2025. Meeting on Zoom.

  9. Nature Machine Intelligence. “Emotional Risks of AI Companions Demand Attention.” Nature Machine Intelligence 7 (2025): 981–982. https://doi.org/10.1038/s42256-025-01093-9.

  10. Qoussous, Firas. “How to Navigate the Future of AI in Education and Education in AI.” EY, MIT OpenCourseWare, www.ey.com/en_bh/insights/education/how-to-navigate-the-future-of-ai-in-education-and-education-in-ai. Accessed 5 Oct. 2025.

  11. Staff, EdCircuit. “The Role of AI in Supporting Diverse Learners in Schools.” edCircuit, 30 Jan. 2025, edcircuit.com/the-role-of-ai-in-supporting-diverse-learners-in-schools/.

  12. Toner, Mark, et al. “Generative AI and Global Education.” NAFSA, 10 Jan. 2024, www.nafsa.org/ie-magazine/2024/1/10/generative-ai-and-global-education.

  13. TalkLab. n.d. https://talklab.ca/.

  14. Yan, Lixiang, Lele Sha, Linxuan Zhao, Yuheng Li, Roberto Martinez‐Maldonado, Guanliang Chen, Xinyu Li, Yueqiao Jin, and Dragan Gašević. “Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review.” British Journal of Educational Technology 55, no. 1 (August 6, 2023): 90–112. https://doi.org/10.1111/bjet.13370.

The New Semester with AI: Impacts on Writing, Learning, and Well-Being

*Disclaimer: This journal entry discusses topics related to mental health, suicide and self-harm, which may be distressing for some readers. If you are in need of support, please reach out to your local helplines or support services in your area.*

  • U of T Telus Health Student Support is a free, 24/7 counselling service for U of T students that may be reached at 1-844-451-9700, or 001-416-380-6578 if outside of Canada or the USA.
  • Good2Talk is a confidential 24/7 hotline. You can call or text them directly at 1-866-925-5454.
  • You can also access the Navi Resource Finder here: 

                  Navi: Your U of T Resource Finder – Office of the Vice-Provost, Students 

By: Grace Blumell, Chaewon Kang, Ava Karimy, and Mia Rodrigo

September 28, 2025. The LexAI Journal

With the start of the school year and assignments slowly piling up, many students turn to AI for support, whether for summarizing, researching, writing, automating tasks, or even as a companion and source of advice. But how is this shaping the way we learn, write, and take care of our well-being? Members of the LexAI Research Team explore these questions in this journal entry. 

AI and Academic Integrity

Written by Grace Blumell

As the school year begins, teachers administer their syllabus lectures and, notably, speak on academic integrity in relation to artificial intelligence. Academic integrity serves as a cornerstone of education, as it “represents honesty, trust, and ethical conduct” (Balalle, 2025). However, there are discrepancies among educators, as some emphasize the benefits of using AI as a learning tool while others forbid its use altogether. Given the ever-growing use of AI in academia, the conversation no longer needs to center on whether it should be used, but on how it can be used ethically and honestly to inform research and promote academic performance (Alberth, 2023).

Writing, especially in the form of research papers, is a fundamental component of academic work, requiring critical thinking, originality, honesty, and overall understanding (Alberth, 2023). The use of AI can hinder one’s ability to develop these skills. Further, AI often writes without properly citing sources (Alberth, 2023). This lack of attribution constitutes plagiarism, one of the most explicitly addressed violations of academic integrity (Alberth, 2023). Even when ideas are generated by ChatGPT, passing them off as one’s own original work does not align with academic integrity.

That said, when used appropriately, AI can be a very helpful tool in education. It can administer quizzes, provide feedback, and help students brainstorm (Alberth, 2023). Because the use of AI in academia is inevitable, institutions must clarify when its use is appropriate and honest. This balanced approach will allow technological advancement and academic integrity to evolve together.

Implications for Learning and Writing

Written by Ava Karimy

Given the ever-increasing popularity of large language models (LLMs), such as ChatGPT, students across grade levels are more susceptible to passive learning habits that reduce cognitive engagement. AI streamlines information retrieval and can generate personalized feedback and critical analysis; however, studies have shown that logical reasoning is often lost in the process. AI is typically restricted to predefined parameters, meaning it must adhere to certain algorithmic rules when solving problems (Ukah, 2025). As a result, students have fewer opportunities to digest topics in unconventional or creative ways. Although the use of generative models in learning and writing prioritizes efficiency, it may stifle the innovative aspects of human learning. 

Kosmyna et al. at the Massachusetts Institute of Technology (MIT) conducted an experiment investigating the use of LLMs in academic writing. The study compared three conditions: writing without digital language tools, using only search engines, and relying solely on LLMs. EEG scans (a method of measuring the brain’s electrical activity) showed that active brain processing was most prevalent in the group restricted from digital aids. The results indicate that brain connectivity decreases as access to external support, namely LLMs or search engines, increases. Overall, using LLMs reduced information retention and mental effort. This pattern of reliance on AI, coupled with underperformance at the neurological level, suggests that students may experience long-term cognitive setbacks in their learning (Kosmyna et al., 2025). 

In the context of legal research and writing, challenges also arise from the opacity of AI decision-making. Legal professionals often cannot trace an AI’s underlying reasoning, which raises questions of accountability and equity. Legal research risks becoming inaccurate and unusable in the long term if one cannot determine whether a judgment is unbiased (Alsaigh et al., 2024). To address these risks, AI models could be designed to provide interpretable explanations for their deductions, so that students, professionals, and academics can internalize the logic behind an AI-generated response rather than relying wholly on the accuracy of an LLM. 
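
As a rough illustration, the snippet below sketches one way a research tool could enforce interpretable output: every answer must arrive with a plain-language rationale and the sources it relied on, and anything unexplained is rejected before it reaches the reader. The ExplainedAnswer structure, the audit check, and the sample citation are all hypothetical assumptions for this sketch, not any real vendor’s API.

```python
# A minimal, hypothetical sketch of "interpretable by construction" output:
# every AI-generated answer must carry a plain-language rationale and the
# sources it relied on; anything unexplained is rejected before it reaches
# the reader. Names and structures here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    answer: str                                        # the model's conclusion
    rationale: str                                     # step-by-step reasoning
    sources: list[str] = field(default_factory=list)   # citations relied upon

def audit(result: ExplainedAnswer) -> ExplainedAnswer:
    """Reject any answer that arrives without reasoning or citations."""
    if not result.rationale.strip():
        raise ValueError("Rejected: no interpretable rationale provided.")
    if not result.sources:
        raise ValueError("Rejected: no sources cited for verification.")
    return result

# A well-formed response passes the audit; an unexplained one raises an error.
checked = audit(ExplainedAnswer(
    answer="The clause is likely unenforceable.",
    rationale="The clause conflicts with the statute's notice requirement.",
    sources=["Hypothetical Statute, s. 12(1)"],
))
print(checked.rationale)
```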

The EU AI Act: Education as High-Risk

Written by Chaewon Kang

One of the key legal developments relevant to AI in education is the European Union’s Artificial Intelligence Act, adopted in 2024, with its obligations applying in stages from 2025 (European Union, 2024). The AI Act is the world’s first comprehensive regulatory framework on artificial intelligence, and it classifies AI systems used in education as “high-risk.” This categorization reflects the recognition that tools used for admissions, evaluation of learning outcomes, student placement, or monitoring during tests are powerful enough to significantly shape an individual’s career and livelihood (EU Artificial Intelligence Act, 2024, Annex III). 

Because of this, the Act requires these systems to comply with rigorous obligations, including robust data quality and governance standards (Article 10), continuous risk assessment and mitigation, human oversight, and transparency in operation (Fasken, 2024). This implicates not only students but also schools and universities: clear institutional policies, educator training, and open communication with students will all be necessary to ensure that AI is used fairly and responsibly in classrooms (Dietis, 2025).

Accessing Resources

Written by Mia Rodrigo

Post-secondary education demands adaptability in a highly competitive environment, making wellness instrumental to each student’s success. To navigate the complexities of higher education, transparent, student-centred services are critical: they foster open, destigmatized dialogue around mental wellness, improve equity for communities seeking culturally relevant support, and respond effectively to student needs (Haskett et al., 2020).

Support chatbots and digitized wellness resources have become increasingly prevalent, notably since COVID-19 (Sinha et al., 2023). In September 2020, the University of Toronto launched Navi, an assistant tool serving as a central digital point for UofT students to access resources on student life and wellness. Navi is hosted on IBM’s Watsonx platform and powered by natural language processing, with its database sourced directly from consultations with UofT staff (University of Toronto, n.d.). Importantly, Navi is described solely as a resource-finding tool; it is not intended for personal disclosure, therapy, or live crisis intervention. For low-risk interactions such as resource-directory tasks, AI bots can be a convenient option when human-facing support is unavailable.

Deeper inquiry into ethical implications, psychological safety, and coherent data privacy policies continues to inform the utility of AI chatbot models for mental health support. In August 2025, OpenAI was sued over ChatGPT’s failure to contextualize and apply meaningful safeguards for users expressing thoughts of suicide, self-harm, or extreme mental distress (Duffy, 2025). Research on LLMs’ ability to perform textual inference found that even when ChatGPT produced broadly “suicide-protective” responses, its accuracy and sensitivity on suicide-related subject matter still had to be corrected through user verification (Arendt et al., 2023). An interdisciplinary, collaborative approach to designing wellness AI tools can mitigate the risk of enabling harmful behaviour or producing misinformative responses. Some system designs may strike a fair balance between dialogic engagement, universal coping strategies, and resource referral. A dualistic model such as Leora’s may be worth adopting for multi-purpose LLMs (Van der Schyff et al., 2023): establishing clear parameters between distress identification and escalation to referral can ensure the user experience honours safety.
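
As a rough illustration only, the snippet below sketches that dualistic pattern: a hypothetical bot screens each message for signs of acute distress and escalates those to a human-facing crisis line, while handling only low-risk resource-directory requests itself. The keyword list is an illustrative placeholder rather than a clinically validated screening method, and the referral numbers are the ones listed at the top of this entry.

```python
# A simplified, hypothetical sketch of a dualistic triage design: screen each
# message for signs of acute distress and escalate those to human-facing
# crisis support, while handling only low-risk resource-directory requests
# directly. The keyword screen is an illustrative placeholder, not a
# clinically validated method.

DISTRESS_MARKERS = {"suicide", "self-harm", "hurt myself", "end my life"}

RESOURCE_DIRECTORY = {
    "counselling": "U of T Telus Health Student Support: 1-844-451-9700 (24/7)",
    "crisis": "Good2Talk: call or text 1-866-925-5454 (24/7)",
}

def triage(message: str) -> str:
    """Escalate if distress is detected; otherwise do a resource lookup."""
    text = message.lower()
    if any(marker in text for marker in DISTRESS_MARKERS):
        # Escalation path: refer immediately to human-facing crisis support.
        return ("It sounds like you may be going through a difficult time. "
                "Please reach out to Good2Talk at 1-866-925-5454 (24/7).")
    # Low-risk path: simple resource-directory lookup.
    for topic, referral in RESOURCE_DIRECTORY.items():
        if topic in text:
            return referral
    return "I can help you find student resources; try asking about counselling."

print(triage("Where can I find counselling on campus?"))
print(triage("I have been thinking about self-harm."))
```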

As AI continues to evolve rapidly, its integration into mental health services offers opportunities to alleviate administrative strain and enhance access for individuals. UofT’s Navi exemplifies the practical utility of directing students to institution-based services and helping them explore different resources. However, transparency about a service’s scope, its tools’ limitations, and the responsibilities LLMs carry in managing support requests remains central to responsible AI development and literacy.

Conclusion

As explored, AI influences students on multiple levels, shaping academic integrity, learning processes, legal regulation, and student wellness. From questions of plagiarism and authorship to the risks and benefits of mental health chatbots, it is clear that AI’s role in education is both powerful and complex. As frameworks such as the EU AI Act illustrate, institutions are beginning to establish guardrails for more responsible and transparent use, signalling that the complete exclusion of AI from education is no longer viable. For students, this means reflecting carefully on how AI shapes our growth, studies, and well-being, while striving to balance efficiency with honesty, innovation with accountability, and accessibility with care.

References

  1. Alberth. (2023, October 22). The use of ChatGPT in academic writing: A blessing or a curse in disguise? TEFLIN Journal – A Publication on the Teaching and Learning of English, 34(2). https://doi.org/10.15639/teflinjournal.v34i2/337-352

  2. Alsaigh, R., Mehmood, R., Katib, I., Liang, X., Alshanqiti, A., Corchado, J. M., & See, S. (2024). Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing. Frontiers in Neuroinformatics, 18, Article 1472653. https://doi.org/10.3389/fninf.2024.1472653

  3. Arendt, F., Till, B., Voracek, M., Kirchner, S., Sonneck, G., Naderer, B., Pürcher, P., & Niederkrotenthaler, T. (2023). ChatGPT, artificial intelligence, and suicide prevention. Crisis, 44(5), 367–370. https://doi.org/10.1027/0227-5910/a000915

  4. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11. https://doi.org/10.1016/j.ssaho.2025.101299

  5. Dietis, Nikolaos. (2025, March 25). AI in Higher Education: Mapping Key Guidelines & Recommendations. European Commission – Futurium. https://futurium.ec.europa.eu/en/european-ai-alliance/document/ai-higher-education-mapping-key-guidelines-recommendations#:~:text=In%20addition%2C%20the%20EU%20AI,training%20institutions%20at%20all%20levels.%22

  6. Duffy, C. (2025, August 27). Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide. CNN Business. https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit

  7. EU Artificial Intelligence Act (2024). Annex III: High-risk AI Systems Referred to in Article 6(2). https://artificialintelligenceact.eu/annex/3/

  8. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024. http://data.europa.eu/eli/reg/2024/1689/oj

  9. Fasken. (2024, November). The EU AI Act: All You Need to Know. Fasken Knowledge Hub. https://www.fasken.com/en/knowledge/2024/11/the-eu-ai-act#:~:text=The%20EU%20AI%20Act%20imposes,Data%20Protection%20Regulation%20(GDPR).

  10. Haskett, M. E., Majumder, S., Kotter-Grühn, D., & Gutierrez, I. (2020). The role of university students’ wellness in links between homelessness, food insecurity, and academic success. Journal of Social Distress and Homelessness, 30(1), 59–65. https://doi.org/10.1080/10530789.2020.1733815

  11. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://doi.org/10.48550/arxiv.2506.08872

  12. Sinha, C., Meheli, S., & Kadaba, M. (2023). Understanding digital mental health needs and usage with an artificial intelligence–led mental health app (WYSA) during the COVID-19 pandemic: Retrospective Analysis. JMIR Formative Research, 7. https://doi.org/10.2196/41913

  13. Ukah, G. N. (2025). A comparative study of the advantages and disadvantages of AI-enhanced learning in the 21st century: The implications to secondary school students in Imo State. International Journal of Educational and Scientific Research Findings. https://www.globalacademicstar.com/download/article/a-comparative-study-of-the-advantages-and-disadvantages-of-ai-enhanced-learning-in-the-21st-century-the-implications-to-secondary-school-students-in-imo-state.pdf

  14. University of Toronto. (n.d.-a). Undergraduate programs. https://www.utoronto.ca/academics/undergraduate-programs

  15. University of Toronto. (n.d.-b). News & initiatives: Navi: Your U of T Resource Finder. Office of the Vice-Provost, Students. https://www.viceprovoststudents.utoronto.ca/news-initiatives/navi/

  16. Van der Schyff, E. L., Ridout, B., Amon, K. L., Forsyth, R., & Campbell, A. J. (2023). Providing self-led mental health support through an artificial intelligence–powered chat bot (Leora) to meet the demand of mental health care. Journal of Medical Internet Research, 25. https://doi.org/10.2196/46448

Please Note:

All content produced by the LexAI Journal Admin Team has been carefully written and edited to reduce bias or the imposition of personal views on the subject matter. If at any point you find the content misleading, please reach out to us on Instagram (@uoft_lexai) or send us an email at lawandethicsofai@gmail.com. Reader discretion is advised. Thank you and happy reading!

LexAI Journal Admin Team

AI Deepfakes and Free Speech in the Johnny Somali Controversy

By Mia Reffell

September 14, 2025. The LexAI Journal

In our first LexAI general meeting, we discussed a real-life example of the intersection of artificial intelligence, ethics, and legal struggles: the arrest of the problematic livestreamer Johnny Somali in South Korea. His arrest sparked outrage and debate over the limits of free speech and online censorship in a democratic society. It also shed light on the limits of global internet freedom and the legal grey zones in AI governance and content moderation that livestreamers and social media platforms must navigate. 

Ramsey Khalid Ismael, known online as Johnny Somali, first gained notoriety on Twitch for provocative and insulting behavior. After being banned from Twitch for violating its terms of service, he switched to Kick, another livestreaming platform with fewer restrictions and less moderation. In pursuit of more content and attention, he flew first to Japan and then, in 2024, to South Korea, where he livestreamed himself insulting locals, desecrating the “Statue of Peace” memorial to South Korean women subjected to sexual slavery, and disrupting businesses (Farrington 2024). 

One of the most disturbing instances of Johnny Somali’s livestreaming involved AI-generated deepfake pornography, a new and deceptive form of cyberharassment, used to humiliate another streamer. The video depicted a female Korean livestreamer, BongBong_IRL, in an AI-generated intimate scene without her consent, violating South Korea’s Special Act on Sexual Violence Crimes (The Express Tribune 2025). 

South Korea has become one of the first countries to criminalize the production and distribution of non-consensual deepfake pornography, with both offenses punishable by up to three years in prison or a fine of 30 million won ($35,500 USD) (Lyons 2024). The law was introduced after an alarming surge in digital sex crimes in the country, in which AI-generated explicit content was used to blackmail and shame individuals: by September 2024, over 800 deepfake cases had been reported, a sharp jump from 156 in 2021 (Yim 2024). 

The South Korean government has also pursued broader AI governance, introducing a “National Strategy for Artificial Intelligence” in 2019, with 2023 updates emphasizing fairness, safety, and human-centered design (Yulchon 2024). In addition, the country has developed numerous content moderation tools through the Korea Communications Standards Commission (KCSC), which removes online content deemed harmful to public morals or order. The KCSC has ordered service providers to block content it deems a threat to national security and public morality, blocking 29,217 sites for “prostitution and obscenity” and 54,552 others categorized as “digital sex crimes” (Freedom House 2023). 

Johnny Somali was detained in Seoul and barred from leaving the country. He has pleaded guilty to four charges of obstruction of business but continues to contest the deepfake charges (Times of India 2025). He faces up to 31 years in prison: two deepfake-related charges each carry up to 10.5 years, alongside the obstruction charges. 

The case of Johnny Somali illustrates the inadequacy of international frameworks for governing AI. While South Korea has taken real steps in criminalizing deepfake pornography and enforcing stricter content moderation through the KCSC, global enforcement remains weak amid conflicting national interests. Platforms with looser content moderation, like Kick, allow harmful content to thrive until it becomes criminal. International institutions must begin building global legal frameworks to address AI misuse and criminal behavior; without them, bad actors will continue to exploit gaps in generative AI governance under the guise of free expression. 

For LexAI, this incident highlights the importance of safety and accountability in the legal regulation of artificial intelligence, as well as the need to hold platforms ethically and legally responsible for the harms they enable. It reminds us that the ethical regulation of AI is not just a debate topic but a necessity, with real victims and long-lasting consequences. 

References

  1. Freedom House. 2023. “South Korea: Freedom on the Net 2023.” Freedom House. https://freedomhouse.org/country/south-korea/freedom-net/2023.

  2. Lyons, Kim. 2024. “South Korea to Criminalise Watching or Possessing Sexually Explicit Deepfakes.” Reuters, September 26, 2024. https://www.reuters.com/world/asia-pacific/south-korea-criminalise-watching-or-possessing-sexually-explicit-deepfakes-2024-09-26/.

  3. The Express Tribune. 2025. “Johnny Somali Faces Mandatory Prison Time in South Korea over AI Deepfake Charge.” The Express Tribune, January 16, 2025. https://tribune.com.pk/story/2536802/johnny-somali-faces-mandatory-prison-time-in-south-korea-over-ai-deepfake-charge.

  4. The Express Tribune. 2024. “Female Streamer Takes Legal Action against Johnny Somali for AI Deepfake Video.” The Express Tribune, November 13, 2024. https://tribune.com.pk/story/2509360/female-streamer-takes-legal-action-against-johnny-somali-for-ai-deepfake-video.

  5. Times of India. 2025. “Johnny Somali’s South Korea Case Takes Dramatic Turn with Guilty Plea.” Times of India, August 14, 2025. https://timesofindia.indiatimes.com/sports/esports/news/johnny-somalis-south-korea-case-takes-dramatic-turn-with-guilty-plea/articleshow/123300525.cms.

  6. Yim, Hyunsu. 2024. “South Korea to Criminalise Watching or Possessing Sexually Explicit Deepfakes.” Reuters, September 26, 2024. https://www.reuters.com/world/asia-pacific/south-korea-criminalise-watching-or-possessing-sexually-explicit-deepfakes-2024-09-26/.

  7. Yulchon LLC. 2024. “Artificial Intelligence in South Korea.” Asia Law. https://www.asialaw.com/NewsAndAnalysis/artificial-intelligence-in-south-korea/Index/2202.

  8. Farrington, Jennifer. 2024. “What Did Johnny Somali Do in South Korea?” Distractify, November 11, 2024. https://www.distractify.com/p/what-did-johnny-somali-do-in-south-korea.

Please Note:

All content produced by the LexAI Journal Admin Team has been carefully written and edited to reduce bias or the imposition of personal views on the subject matter. If at any point you find the content misleading, please reach out to us on Instagram (@uoft_lexai) or send us an email at lawandethicsofai@gmail.com. Reader discretion is advised. Thank you and happy reading!

LexAI Journal Admin Team

Positioning the Ethics of Artificial Intelligence: An Opinion Piece

Image source: https://www.clio.com/resources/ai-for-lawyers/ethics-ai-law/

By Sarah Naveed

March 8, 2025. The LexAI Journal

Artificial intelligence, initially deemed a future-building tool, is now categorized as an “existential threat” by Geoffrey Hinton, a “godfather of AI” (Brown, 2023). How did a man-made tool become a potential stakeholder in irreversible societal damage? If the purpose of AI is to augment or replicate forms of human intelligence, then is it possible for the same flaws that we possess to be embedded in the workings of AI? I am inclined to think so. I am also inclined to speculate that this is a core reason why consistent, rapid development is feared in the first place.

It is reasonable to consider that the ethical implications of AI reach into our creative and logical thinking, topics worth expanding on in future entries. Consider individuals who can no longer perform mundane tasks without AI tools such as ChatGPT. From planning daily life and vacation itineraries to tracking the macros in your diet, what are the signs, and the consequences, of an overly reliant relationship?

Another aspect I find fascinating is the rise of artificial relationships, including the ethics behind AI “companion” robots, chat rooms, and similar services, and whether they are on track to replace social interaction. I think any over-reliance on technology for unconventional purposes can impair social and cognitive development, whether by replacing human interaction or practical organizational abilities.

Over-reliance on AI in education is fundamentally changing how current students and future generations think and produce work. There is also the evident issue of academic integrity, with students failing to develop the crucial writing, research, and application skills necessary for future academic endeavours.

Finally, a topic I think does not garner enough attention is the ethics surrounding bias and the inconsideration of minorities. Such bias can arise quantitatively, through a lack of accurate data representing target populations, and AI may not be an effective replacement for traditional survey or manual data-collection methods.

Even though I have spent much of this piece critiquing AI’s inability to serve society ethically or environmentally (which is a whole discussion on its own), the existence of artificial intelligence itself is remarkable and a true pillar of innovation. 

In September 2022, Sarah de Lagarde was going through her daily morning rush to work on the local train. Her routine was interrupted when she slipped and fell on the platform, suffering injuries to her right arm and part of her right leg so severe that amputation was the only option. Only a month later, she would hike Mount Kilimanjaro with her husband. This almost inconceivable recovery was possible only through the innovation of AI: Ms. de Lagarde received a bionic arm that, through machine learning, recognizes and anticipates the patterns typical of her movement (Satariano, 2024). In cases like this, it is hard to ignore the overwhelming potential of artificial intelligence, not just in healthcare but in every aspect of living.

One argued benefit of artificial intelligence is that it can automate repetitive tasks to make certain processes easier for working individuals. However, I would counter that only a specific type of worker benefits from this replacement. This framing overlooks individuals across socio-economic backgrounds whose opportunities are the very ones automated away.

To serve its purpose as a “future-building” tool, AI’s development must include the stakeholders and considerations important to that future. Whether that means creating protections for artists and their creative spaces or researching ways for AI to broaden its lens to consider minority groups, it is very possible for these problems to be solved by the same tool that created them. I believe the success or failure of artificial intelligence is a reflection of society. Both ignorance of and criticism toward AI can be read as parallels to how we subconsciously view society: calling out a lack of critical thinking in schools or the misrepresentation of minorities ultimately points to areas for improvement in our own systems. Could data collection improve if governmental systems included minority perspectives, such as those of Indigenous peoples, in government and judicial processes? Could increased funding in schools introduce better hands-on approaches, with augmented support, to lessen the inclination to lean on artificial intelligence?

I think that rather than fearing the labels we have placed on this technology, we can use it as intended: to replicate forms of human intelligence for the better and for good. Whether AI is ultimately good or bad depends on how we wield it, and only by first addressing issues within society can our use of AI reflect its potential.

References

  1. Brown, S. (2023, May 23). Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI. MIT Sloan. Retrieved March 2025, from https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
  2. Coursera Staff. (2025, February 3). AI ethics: What it is, why it matters, and more. Coursera. Retrieved March 2025, from https://www.coursera.org/articles/ai-ethics
  3. Means, P. (n.d.). AI—The good, the bad, and the scary. Engineering | Virginia Tech. Retrieved March 2025, from https://eng.vt.edu/magazine/stories/fall-2023/ai.html
  4. Satariano, A. (2024, May 26). Her A.I. arm. The New York Times. Retrieved March 2025, from https://www.nytimes.com/card/2024/05/26/technology/ai-prosthetic-arm
    • Image source: https://www.nytimes.com/card/2024/05/26/technology/ai-prosthetic-arm

Please Note:

All content produced by the LexAI Journal Admin Team has been carefully written and edited to reduce bias or the imposition of personal views on the subject matter. If at any point you find the content misleading or inexact, please reach out to us on Instagram (@uoft_lexai) or send us an email at lawandethicsofai@gmail.com. Reader discretion is advised. Thank you and happy reading!

LexAI Journal Admin Team