Written By: Lucie Kapustynski, Zainab Khalil, Lewon Lee, and Lali Sreemuntoo
Editing By: Clara Lee and Sarah Naveed
November 16, 2025. The LexAI Journal
Generative Artificial Intelligence is poised to transform the legal field by addressing the gap between law school and real-world practice. To explore this idea, Sarah Naveed, the student-founder and president of LexAI, met with Dr. Megan Ma, Executive Director of the Stanford Legal Innovation Lab, who is investigating parallels between legal practice and other professional fields to design AI tools that enhance experiential learning. By conceptualizing Artificial Intelligence (AI) as a collaborative colleague, law students can work through simulated, complex legal scenarios in a guided way. At the intersection of AI and legal education, these innovations invite reflection on their impact on justice, accessibility, and ethical practice.
AI for All or Only for Some?
By: Zainab Khalil
AI is increasingly being used across the legal world, creating both enthusiasm and concern about whether it will narrow or widen gaps in the justice system. As Dr. Megan Ma highlights, these tools can expand who receives legal help (Naveed, 2025). A public-interest lawyer with too many files, a small firm without a research team, or a self-represented individual trying to understand a contract could all benefit from AI through summaries and guides on common legal steps that would otherwise be too expensive or unavailable.
I think that AI should not replace lawyers; instead, it can help fill gaps by offering people support they didn’t previously have access to. Dr. Ma also highlights the other side: not everyone has access to the same version of AI and technology (Naveed, 2025). Wealthier firms might pay for more powerful models, allowing them to complete work more efficiently. Meanwhile, legal clinics, community organizations, small firms, and even low-income clients are often left with systems that may be more prone to LLM errors and mistakes. This disparity heightens the risk of widening the gap (Naveed, 2025).
Dr. Ma suggests that AI should be treated as a utility, similar to the internet, as a way forward (Naveed, 2025). If institutions and governments, especially in the legal field, choose to make it more readily available, it could help close some gaps instead of further widening them (Naveed, 2025). Her example of Stanford Law School buying its own computing power to ensure every student has equal access shows that institutions can help make AI access more equitable (Naveed, 2025). Ultimately, I think that it all depends on the choices we make around AI access through cost, responsibilities, and underlying bias.
Modernizing Workflows: How Will AI Shape Early Legal Development?
By: Lali Sreemuntoo
This is an opinion-based article written after reading S. Naveed’s interview with Dr. M. Ma
As the legal sector becomes increasingly technologically dependent, generative AI tools are emerging as potential collaborators in legal work. These tools promise to address long-standing gaps in junior associate training, yet they also raise important questions about how articling students and fresh graduates will develop foundational skills.
One of the most widely discussed examples is Harvey AI, a generative AI platform designed specifically for legal workflows. Harvey assists lawyers with document drafting, contract analysis, legal research, and summarizing complex materials. By automating low-level or repetitive tasks, Harvey allows lawyers to focus on higher-level reasoning and strategy. Proponents argue that this can benefit new lawyers. Instead of spending countless hours on mechanical work, junior associates can directly engage with substantive issues earlier in their careers.
Yet there are important concerns about the long-term impact on skill development. Traditionally, basic tasks such as producing a memo, drafting a contract, or conducting research pushed juniors to engage deeply with the material. Completing these tasks encouraged them to verify sources, assess the strength of arguments, and anticipate potential challenges. Each step carried a degree of ownership, as errors could have real consequences for the client or the supervising lawyer. However, when AI performs a substantial portion of these tasks, students are removed from this experiential feedback loop. They may begin to view legal work as a product to be obtained efficiently rather than a process that requires careful reasoning, critical judgment, and ethical awareness.
This shift has several nuanced implications. First, students may become over-reliant on AI outputs, assuming the results are authoritative without critically evaluating the reasoning behind them. Legal errors or oversights generated by AI could go unnoticed, reducing students’ ability to detect and correct mistakes independently, a skill that is vital for professional responsibility. Second, ethical reasoning may be dulled. When AI handles routine but ethically sensitive work, such as identifying conflicts, interpreting ambiguous contractual language, or framing arguments, students may not internalize how to weigh competing duties, prioritize client interests, or navigate the moral dimensions of practice. Over time, the subtle but crucial link between responsibility and professional identity could be weakened, leaving new lawyers less confident in their own judgment.
Furthermore, this dynamic may affect accountability within the firm itself. Supervising lawyers might assume that AI-assisted drafts are accurate and complete, potentially reducing direct engagement with juniors and shrinking opportunities for mentorship. The combination of technological delegation and diminished feedback loops risks producing a generation of lawyers who can produce outputs efficiently but may struggle with independent problem-solving, risk assessment, or ethical decision-making in situations where AI guidance is unavailable or inappropriate.
When Neutral Isn’t: Teaching Justice in the Age of Algorithms
By: Lucie Kapustynski
The bias coded in AI poses a challenge for the future of law as legal education uses AI agents to train students. AI bias arises when machine-learning models reinforce systemic discrimination against marginalized communities, presenting historical inequities in the guise of neutrality. These algorithmic distortions often stem from training data that reflect social prejudices, leading to the replication of societal biases related to race, gender, class, or sexuality (Holdsworth, 2025). Beyond its sociological dimensions, AI bias is also a technical inevitability: outputs mirror the biases of their data.
Most AI frameworks involve five to eight core stages, with bias emerging at several critical points. From the earliest stages of data gathering through labeling and model training, AI systems absorb bias caused by unbalanced datasets, subjective annotations, and design choices (SAP, 2024). Models that appear free of bias in controlled settings may manifest discriminatory patterns once deployed, where untested inputs and limited monitoring reveal hidden flaws (Chapman University, n.d.). These dynamics demonstrate that AI systems are not neutral by design; they inherit and then reflect existing structural inequities.
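The mechanism described above can be made concrete with a deliberately simplified sketch. The "model" below is hypothetical and far cruder than any real legal AI system: it merely memorizes the most common label per group in its training data. The point is that when the historical records are skewed against one group, the trained system faithfully reproduces that skew while appearing procedurally neutral.

```python
# Toy illustration (not any real legal AI system): a model trained on an
# unbalanced historical dataset reproduces the disparity baked into that data.
from collections import Counter

# Hypothetical historical records: (group, label). Group "B" was historically
# labelled "high_risk" far more often -- an imbalance, not ground truth.
history = [("A", "low_risk")] * 80 + [("A", "high_risk")] * 20 \
        + [("B", "low_risk")] * 40 + [("B", "high_risk")] * 60

def train_majority_model(records):
    """'Train' by memorizing the most common label seen for each group."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': 'low_risk', 'B': 'high_risk'}
```

Nothing in the training procedure mentions the groups' actual risk; the discriminatory output comes entirely from the data, which is the core of the "neutral by design" critique.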
Within legal education, the bias embedded in AI makes responsible integration essential, as generative tools aimed at narrowing experiential gaps can reassert disparities. While AI platforms like Harvey AI offer promising simulations that allow students to experiment with complex legal scenarios in a low-risk environment, their effectiveness depends on equitable access, critical oversight, and literacy in AI usage (Naveed, 2025). Without these precautions, AI-trained students may mirror the algorithm’s biases in their legal reasoning, unintentionally reinforcing patterns of discrimination in practice.
The consequences of biased AI are not theoretical and surface severely in legal contexts. The COMPAS risk-assessment tool, used in U.S. courts to assess defendants’ risk of reoffending, misclassified Black defendants as high-risk at almost twice the rate of white defendants (Chawla, 2022). These were not simple algorithmic mistakes; they were reproductions of structural inequities encoded in data presumed to be objective. If future lawyers are educated through AI systems shaped by these same biases, those distortions will extend beyond the screen, flowing directly into courtrooms, case strategies, and the administration of justice itself.
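The "almost twice" claim can be checked with back-of-the-envelope arithmetic, using the approximate false-positive rates reported in ProPublica's 2016 COMPAS analysis (the figures below are illustrative round-offs of that reporting, not recomputed from the raw data):

```python
# Approximate false-positive rates from ProPublica's 2016 COMPAS analysis:
# defendants who did NOT reoffend but were still labelled high-risk.
fpr_black = 0.449  # Black defendants wrongly labelled high-risk
fpr_white = 0.235  # white defendants wrongly labelled high-risk

ratio = fpr_black / fpr_white
print(round(ratio, 2))  # ~1.91, i.e. nearly double
```

A ratio near 2 on an error metric of this kind is precisely the sort of disparity that survives intact when the tool's outputs are presented as objective scores.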
Depending on the method of engagement, AI agents possess a dual capacity: they can illuminate or misrepresent. Approached through disciplined critical inquiry, AI moves beyond convenience to become a space for dissecting structural inequities, exploring ethical challenges, and turning flawed outputs into prompts for corrective action. When treated as artifacts to analyze, these systems can be turned back upon themselves, revealing the real-world biases they replicate. In this reflexive mode, the AI agent is treated less as an imagined colleague and more as an object of critique; it allows students to confront patterns of injustice, rethink the logics that produce them, and develop the discernment necessary for principled legal practice.
Addressing the Concern of Affordability and Inequality in the Legal Field
By: Lewon Lee
The average Common Law degree in Canada costs $18,436.39 annually (Helenchilde, 2022). When taking this into consideration, pursuing law as a career may be seen as a “pathway to the rich,” since individuals from lower-income households may not have the financial means to pay for a law degree (Helenchilde, 2022). Given that the legal field already requires a substantial monetary investment, could the implementation of AI—which may be both costly and reduce billable hours—create another barrier for individuals from lower-income households to pursue the profession? Additionally, could this create a larger gap in representation between lower-income individuals and those from higher-income households in the legal field? These are the questions this section aims to address.
To answer these questions, it is important to provide some background on AI use in the legal field. According to an article by Marjorie Richter, AI has been present in the legal field since 2000, demonstrating that it is far from a new concept (Richter, 2025). However, this does not mean that its current developments will not drastically reshape the profession. AI usage in the 2000s typically involved searching for documents, improving results by identifying concepts rather than mere word matches, and other basic functions. In contrast, AI usage in the contemporary legal field is far more sophisticated. Thomson Reuters’ Generative AI Report finds that the most common uses of AI include document review and analysis, legal research, memo writing, contract drafting, document summarization, and correspondence drafting (Richter, 2025). With these points in mind, it is evident that AI can and will drastically improve efficiency. However, how would this affect legal careers? Two major concerns arise from large-scale AI usage. First, individuals from lower-income households already view law school debt as a major risk, so the added pressure of purchasing expensive AI tools to stay competitive could further exacerbate the financial burden of pursuing a legal career. Second, higher efficiency from AI may lead to a reduction in billable hours, which, much like the first concern, would make law even riskier and more financially difficult to pursue for lower-income individuals.
In addressing the first concern, the cost of AI software for the legal field is difficult to measure, as many companies do not publicly disclose pricing. However, if we consider the pricing of Clio—one of the more recognized AI-integrated legal software companies—its cheapest AI-related service costs $49 USD per month, and its most expensive $149 USD per month (Clio, n.d.). While not free, this is far from unaffordable when compared to the high tuition fees of law school. Additionally, Clio is not the only AI legal service available; other platforms, such as Filevine, broaden the range of choices available to lawyers. Overall, while the normalization of subscribing to AI legal services to keep up with competition may contribute to economic inequality and increase the risks for lower-income individuals entering the field, current pricing suggests that fears of a drastic shift in financial accessibility may be overstated.
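The affordability comparison above can be put in rough quantitative terms. The sketch below uses only the figures cited in this article (Clio's $49–$149 USD monthly tiers and the $18,436.39 average annual tuition) and, for simplicity, ignores USD–CAD conversion, so the percentages are an approximation rather than a precise cost analysis:

```python
# Rough arithmetic behind the affordability claim: even the most expensive
# Clio AI tier, annualized, is a small fraction of average Common Law tuition.
# (Figures are those cited in this article; USD-CAD conversion is ignored.)
tuition_cad = 18436.39  # average annual Common Law tuition in Canada
clio_monthly_usd = {"cheapest": 49, "most_expensive": 149}

annual_cost = {tier: price * 12 for tier, price in clio_monthly_usd.items()}
share_of_tuition = {tier: cost / tuition_cad for tier, cost in annual_cost.items()}

print(annual_cost)  # {'cheapest': 588, 'most_expensive': 1788}
print({tier: f"{share:.1%}" for tier, share in share_of_tuition.items()})
```

Even on the top tier, the annualized software cost comes to under a tenth of a single year's tuition, which supports the section's conclusion that pricing alone is unlikely to drastically shift financial accessibility.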
Whether the second concern materializes—that higher efficiency would lead to fewer billable hours, creating further problems for lawyers from lower-income backgrounds—depends on how AI is implemented, and it is unlikely to occur. There is no doubt that AI will increase productivity, as shown by its many current uses in the legal field. In fact, an article by Robert J. Couture highlights that it may increase productivity by 100-fold, reducing associate time from 16 hours to 4 minutes (Couture, 2025). However, increased productivity does not mean fewer billable hours. Instead, it may free up time so that lawyers can focus their energy on areas where AI assistance is less helpful, such as analysis and strategy (Couture, 2025). While the exact mechanisms behind this are unclear, billable hours are not necessarily reduced as a consequence of implementing AI services (Couture, 2025). As such, AI should be used as a tool rather than a replacement for all tasks. After all, there have been instances where AI has fabricated non-existent legal cases, which raises serious concerns about misinformation (Couture, 2025). Combating this issue requires lawyers to oversee the AI and ensure that it does not make major mistakes.
Conclusion
Although AI has shown signs of integration into law since the 2000s, it is evident that recent advancements have significantly expanded its potential role within the legal field. However, these possibilities bring forth ethical concerns about inequality in the legal field and the possible reproduction of racial biases by AI. While research suggests that AI may not exacerbate existing inequalities, it remains important that companies developing AI legal assistants keep their services affordable to all lawyers. Furthermore, ethical AI development in the legal field must prioritize preventing both biases and fabrications—such as citing non-existent legal cases—that could undermine the legal system by introducing subjective beliefs that have no place in court. Without careful safeguards, firms risk prioritizing convenience over the foundational training that ensures long-term competence.
References
Chapman University. (n.d.). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx
Chawla, M. (2022, February 23). COMPAS case study: Investigating algorithmic fairness of predictive policing. Medium. https://mallika-chawla.medium.com/compas-case-study-investigating-algorithmic-fairness-of-predictive-policing-339fe6e5dd72
Clio. (n.d.). Clio: Pricing. https://www.clio.com/pricing/
Couture, R. J. (2025, February 24). The impact of artificial intelligence on law firms’ business models. Harvard Law School Center on the Legal Profession. https://clp.law.harvard.edu/knowledge-hub/insights/the-impact-of-artificial-intelligence-on-law-law-firms-business-models/
Helenchilde, I. (2022, December 29). High cost of Law School makes a legal career an exclusive pathway for the rich. Capital Current. https://capitalcurrent.ca/high-cost-of-law-school-makes-a-legal-career-an-exclusive-pathway-for-the-rich/
Holdsworth, J. (2025, July 22). What is AI bias? IBM. https://www.ibm.com/think/topics/ai-bias
Naveed, S. (2025, June 16). Meeting with Professor Megan Ma [Interview]. Microsoft Teams.
Richter, M. (2025, August 28). Artificial intelligence and law: Guide for legal professions. Thomson Reuters Law Blog. https://legal.thomsonreuters.com/blog/artificial-intelligence-and-law-guide/
SAP. (n.d.). What is AI bias? Causes, effects, and mitigation strategies. https://www.sap.com/resources/what-is-ai-bias

