The New Semester with AI: Impacts on Writing, Learning, and Well-Being

*Disclaimer: This journal entry discusses topics related to mental health, suicide, and self-harm, which may be distressing for some readers. If you need support, please reach out to the helplines or support services in your area.*

  • U of T Telus Health Student Support is a free, 24/7 counselling service for U of T students that may be reached at 1-844-451-9700, or 001-416-380-6578 if calling from outside Canada or the USA.
  • Good2Talk is a confidential 24/7 hotline. You can call or text them directly at 1-866-925-5454.
  • You can also access Navi, the U of T Resource Finder, through the Office of the Vice-Provost, Students: https://www.viceprovoststudents.utoronto.ca/news-initiatives/navi/

By: Grace Blumell, Chaewon Kang, Ava Karimy, and Mia Rodrigo

September 28, 2025. The LexAI Journal

With the start of the school year and assignments slowly piling up, many students turn to AI for support, whether for summarizing, researching, writing, automating tasks, or even as a companion and source of advice. But how is this shaping the way we learn, write, and take care of our well-being? Members of the LexAI Research Team explore these questions in this journal entry. 

AI and Academic Integrity

Written by Grace Blumell

As the school year begins, teachers deliver their syllabus lectures and, notably, address academic integrity in relation to artificial intelligence. Academic integrity serves as a cornerstone of education, as it “represents honesty, trust, and ethical conduct” (Balalle & Pannilage, 2025). However, there are discrepancies among educators: some emphasize the benefits of using AI as a learning tool, while others forbid its use altogether. Given the ever-growing use of AI in academia, the conversation should not centre on whether AI should be used, but on how it can be used ethically and honestly to inform research and promote academic performance (Alberth, 2023).

Writing, especially in the form of research papers, is a fundamental component of academic work, requiring critical thinking, originality, honesty, and overall understanding (Alberth, 2023). The use of AI can hinder one’s ability to develop these skills. Further, AI-generated writing often fails to properly cite its sources (Alberth, 2023). This lack of attribution constitutes plagiarism, one of the most explicitly addressed violations of academic integrity (Alberth, 2023). Even when the ideas are generated by ChatGPT, passing such work off as one’s own original thinking does not align with academic integrity.

That said, when used appropriately, AI can be a very helpful tool in education. It can administer quizzes, provide feedback, and help students brainstorm (Alberth, 2023). Because the use of AI in academia is inevitable, institutions must clarify when its use is appropriate and honest. This balanced approach will allow technological advancement and academic integrity to evolve together.

Implications for Learning and Writing

Written by Ava Karimy

Given the ever-increasing popularity of large language models (LLMs) such as ChatGPT, students across grade levels are increasingly susceptible to passive learning habits that can reduce cognitive engagement. AI streamlines information retrieval and can generate personalized feedback and critical analysis; however, studies have shown that logical reasoning is often lost in the process. AI is often restricted to predefined parameters, meaning it must adhere to certain algorithmic rules when solving problems (Ukah, 2025). As a result, students have fewer opportunities to digest topics in unconventional or creative ways. Although the use of generative models in learning and writing prioritizes efficiency, it may stifle the innovative aspects of human learning.

Kosmyna et al. at the Massachusetts Institute of Technology (MIT) conducted an experiment investigating the use of LLMs in academic writing. Participants were assigned to one of three conditions: writing without digital language tools, using only search engines, or using only LLMs. EEG scans (a method of measuring the brain’s electrical activity) showed that active brain processing was most prevalent in the group restricted from digital aids. The results indicate that brain connectivity decreases as access to external support, namely search engines and especially LLMs, increases. Overall, using LLMs reduced information retention and mental effort. This combination of reliance on AI and underperformance at the neurological level suggests that students may experience long-term cognitive setbacks in their learning (Kosmyna et al., 2025).

In the context of legal research and writing, challenges also arise throughout the AI decision-making process, where transparency is often lacking. Legal professionals cannot readily trace an AI system’s underlying reasoning, which raises questions of accountability and equity. Legal research risks becoming inaccurate and unusable in the long term if one cannot determine whether a judgment is unbiased (Alsaigh et al., 2024). To address these risks, AI models could be designed to provide interpretable explanations for their deductions, so that students, professionals, and academics can internalize the logic behind an AI-generated response rather than relying wholly on an LLM’s claimed accuracy.
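
As a rough illustration of what such interpretable explanations might look like in practice, the Python sketch below asks a model to return its answer in a structured form whose reasoning steps and sources a reader can verify. The `ask_llm` function is a hypothetical stand-in for any LLM API, and the JSON schema is our own illustrative choice, not a method proposed by Alsaigh et al. (2024).

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM API call.
    Replace with a real client; here it returns a canned example."""
    return json.dumps({
        "answer": "The clause is likely unenforceable.",
        "reasoning_steps": [
            "The clause conflicts with a mandatory statutory protection.",
            "Courts in this jurisdiction have voided similar clauses.",
        ],
        "sources": ["Example v. Example, 2020 (hypothetical citation)"],
    })

PROMPT_TEMPLATE = (
    "Answer the question below. Respond ONLY with JSON containing "
    "'answer', 'reasoning_steps' (a list), and 'sources' (a list).\n"
    "Question: {question}"
)

def interpretable_answer(question: str) -> dict:
    raw = ask_llm(PROMPT_TEMPLATE.format(question=question))
    result = json.loads(raw)
    # Refuse to surface answers that arrive without visible reasoning
    # or sources the reader can verify independently.
    if not result.get("reasoning_steps") or not result.get("sources"):
        raise ValueError("Response lacks verifiable reasoning or sources.")
    return result

if __name__ == "__main__":
    out = interpretable_answer("Is this non-compete clause enforceable?")
    for step in out["reasoning_steps"]:
        print("-", step)
    print("Sources to verify:", out["sources"])
```

The specific format matters less than the workflow it enforces: the model’s logic is surfaced for human verification rather than accepted as a black-box output.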

The EU AI Act: Education as High-Risk

Written by Chaewon Kang

One of the key legal developments relevant to AI in education is the European Union’s Artificial Intelligence Act, which was passed in 2024, with obligations entering into force in stages from 2025 for certain AI systems (European Union, 2024). The AI Act is the world’s first comprehensive regulatory framework on artificial intelligence, and it classifies AI systems used in education as “high-risk.” This categorization reflects the recognition that tools used for admissions, evaluation of learning outcomes, student placement, or monitoring during tests are powerful enough to significantly shape an individual’s career and livelihood (EU Artificial Intelligence Act, 2024, Annex III).

Because of this, the Act requires these systems to comply with rigorous obligations, including robust data quality standards (set out in Article 10), continuous risk assessment and mitigation, human oversight, and transparency in operation (Fasken, 2024). These obligations implicate not only students but also schools and universities. Clear institutional policies, educator training, and open communication with students will all be necessary to ensure that AI is used fairly and responsibly in classrooms (Dietis, 2025).

Accessing Resources

Written by Mia Rodrigo

Post-secondary education creates conditions that demand adaptability in a highly competitive environment, meaning that wellness is instrumental to each student’s success. In navigating the complexities of higher education, transparent, student-centred services are critical to fostering destigmatized dialogue around mental wellness, improving equity for communities seeking culturally relevant support, and responding effectively to student needs (Haskett et al., 2020).

Support chatbots and digitized wellness resources have become increasingly prevalent, notably since COVID-19 (Sinha et al., 2023). In September 2020, Navi was launched as an assistant tool, serving as a digital hub for U of T students to access resources pertaining to student life and wellness. Navi is hosted on IBM’s Watsonx platform and powered by natural language processing, with its database sourced directly from consultation with U of T staff (University of Toronto, n.d.-b). Accordingly, Navi is described solely as a resource-finding tool, not intended for personal disclosure, therapy, or live crisis intervention. For low-risk interactions such as resource-directory tasks, AI bots can be a convenient and viable option when human-facing support is unavailable.

Deeper inquiry into ethical implications, psychological safety, and coherent data privacy policies continues to inform the utility of AI chatbot models for mental health support. In August 2025, OpenAI was subject to a lawsuit over ChatGPT’s failure to contextualize and apply adequate safeguards for users expressing thoughts of suicide, self-harm, or extreme mental distress (Duffy, 2025). Studies of LLMs’ ability to perform textual inference found that even when ChatGPT offered largely “suicide-protective” responses, its accuracy and sensitivity in dealing with suicide-related subject matter still had to be corrected through user verification (Arendt et al., 2023). An interdisciplinary, collaborative approach to designing wellness AI tools can mitigate the risk of enabling harmful behaviour or producing misinformative responses. Some system designs may offer a fair balance between dialogue-based engagement, universal coping strategies, and resource referral. A dualistic model such as Leora (Van der Schyff et al., 2023) may be beneficial for multi-purpose LLMs to adopt: establishing clear parameters between distress identification and escalation to referral can ensure the user experience honours safety, as sketched below.
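
As a purely illustrative sketch of that dualistic flow, the Python example below separates distress identification from the response path: flagged messages are escalated to a human-staffed referral, while low-risk messages receive general resources. The keyword list and helper names are placeholder assumptions of ours; a real deployment would rely on clinically validated classifiers and professionally reviewed escalation protocols, not substring matching.

```python
# Illustrative two-stage triage: distress identification, then escalation.
# Keyword matching is a placeholder only; real systems would use validated
# clinical classifiers and professionally reviewed escalation protocols.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}  # placeholder list

COPING_RESOURCES = [
    "Grounding exercise: name 5 things you can see, 4 you can hear...",
    "U of T Telus Health Student Support: 1-844-451-9700 (24/7)",
]

ESCALATION_MESSAGE = (
    "It sounds like you may be in serious distress. I can't provide "
    "crisis support, but a trained counsellor can. Please call "
    "Good2Talk at 1-866-925-5454, available 24/7."
)

def detect_distress(message: str) -> bool:
    """Stage 1: flag messages that suggest acute distress."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

def respond(message: str) -> str:
    """Stage 2: escalate to human referral, or offer general resources."""
    if detect_distress(message):
        return ESCALATION_MESSAGE  # hand off; never attempt therapy
    return "Here are some resources that may help:\n" + "\n".join(COPING_RESOURCES)

if __name__ == "__main__":
    print(respond("I'm stressed about midterms"))
    print(respond("I've been thinking about self-harm"))
```

The design choice worth noting is that the chatbot never attempts crisis counselling itself; its only job in the high-risk branch is a fast, unambiguous hand-off to human support.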

As AI continues to evolve rapidly, its integration into mental health services can offer opportunities to alleviate administrative strain and enhance access for individuals. U of T’s Navi exemplifies practical utility in directing students to institution-based services and helping them explore different resources. However, ensuring transparency about services and tool limitations, and clarifying the responsibility of LLMs in managing support requests, remains central to responsible AI development and literacy.

Conclusion

As explored above, AI influences students on multiple levels, shaping academic integrity, learning processes, legal regulations, and student wellness. From questions of plagiarism and authorship to the risks and benefits of mental health chatbots, it is clear that AI’s role in education is both powerful and complex. As regulations such as the EU AI Act illustrate, institutions are beginning to establish guardrails for more responsible and transparent use, indicating that the complete exclusion of AI from education is no longer viable. For students, this means reflecting carefully on how AI impacts our growth, studies, and well-being, while striving to balance efficiency with honesty, innovation with accountability, and accessibility with care.

References

  1. Alberth. (2023, October 22). The use of ChatGPT in academic writing: A blessing or a curse in disguise? TEFLIN Journal – A Publication on the Teaching and Learning of English, 34(2). https://doi.org/10.15639/teflinjournal.v34i2/337-352

  2. Alsaigh, R., Mehmood, R., Katib, I., Liang, X., Alshanqiti, A., Corchado, J. M., & See, S. (2024). Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing. Frontiers in Neuroinformatics, 18, Article 1472653. https://doi.org/10.3389/fninf.2024.1472653

  3. Arendt, F., Till, B., Voracek, M., Kirchner, S., Sonneck, G., Naderer, B., Pürcher, P., & Niederkrotenthaler, T. (2023). ChatGPT, artificial intelligence, and suicide prevention. Crisis, 44(5), 367–370. https://doi.org/10.1027/0227-5910/a000915

  4. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11. https://doi.org/10.1016/j.ssaho.2025.101299

  5. Dietis, N. (2025, March 25). AI in higher education: Mapping key guidelines & recommendations. European Commission – Futurium. https://futurium.ec.europa.eu/en/european-ai-alliance/document/ai-higher-education-mapping-key-guidelines-recommendations

  6. Duffy, C. (2025, August 27). Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide. CNN Business. https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit

  7. EU Artificial Intelligence Act. (2024). Annex III: High-risk AI systems referred to in Article 6(2). https://artificialintelligenceact.eu/annex/3/

  8. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). http://data.europa.eu/eli/reg/2024/1689/oj

  9. Fasken. (2024, November). The EU AI Act: All you need to know. Fasken Knowledge Hub. https://www.fasken.com/en/knowledge/2024/11/the-eu-ai-act

  10. Haskett, M. E., Majumder, S., Kotter-Grühn, D., & Gutierrez, I. (2020). The role of university students’ wellness in links between homelessness, food insecurity, and academic success. Journal of Social Distress and Homelessness, 30(1), 59–65. https://doi.org/10.1080/10530789.2020.1733815

  11. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872

  12. Sinha, C., Meheli, S., & Kadaba, M. (2023). Understanding digital mental health needs and usage with an artificial intelligence–led mental health app (Wysa) during the COVID-19 pandemic: Retrospective analysis. JMIR Formative Research, 7. https://doi.org/10.2196/41913

  13. Ukah, G. N. (2025). A comparative study of the advantages and disadvantages of AI-enhanced learning in the 21st century: The implications to secondary school students in Imo State. International Journal of Educational and Scientific Research Findings. https://www.globalacademicstar.com/download/article/a-comparative-study-of-the-advantages-and-disadvantages-of-ai-enhanced-learning-in-the-21st-century-the-implications-to-secondary-school-students-in-imo-state.pdf

  14. University of Toronto. (n.d.-a). Undergraduate programs. https://www.utoronto.ca/academics/undergraduate-programs

  15. University of Toronto. (n.d.-b). News & initiatives: Navi: Your U of T Resource Finder. Office of the Vice-Provost, Students. https://www.viceprovoststudents.utoronto.ca/news-initiatives/navi/

  16. Van der Schyff, E. L., Ridout, B., Amon, K. L., Forsyth, R., & Campbell, A. J. (2023). Providing self-led mental health support through an artificial intelligence–powered chatbot (Leora) to meet the demand of mental health care. Journal of Medical Internet Research, 25. https://doi.org/10.2196/46448

Please Note:

All content produced by the LexAI Journal Admin Team has been carefully written and edited to reduce bias or personal opinion on the subject matter. If at any point you find the content misleading, please reach out to us on Instagram (@uoft_lexai) or send us an email at lawandethicsofai@gmail.com. Reader discretion is advised. Thank you and happy reading!

LexAI Journal Admin Team
