If AI Broke Education, Can We Fix It?: Reconstructing Learning through AGI “Robin Hoodism”

Written By: Lucie Kapustynski and Lewon Lee

Research By: Zainab Khalil and Lali Sreemuntoo

Editing By: Clara Lee and Sarah Naveed

October 12, 2025. The LexAI Journal

Steve Joordens, a psychology professor at the University of Toronto Scarborough and Director of the Advanced Learning Technologies Lab, has been exploring the possibility of artificial intelligence (AI) office hour agents to enhance learning. To detail this innovative approach to education, Sarah Naveed, the student founder and president of LexAI, met with Professor Joordens to discuss his ideas. Designed to complement rather than replace instructors, an AI office hour agent is a virtual system that acts as an interactive conversation partner, supplementing traditional office hours. The agents’ capacity to expand availability and accessibility, their impact on student mental health, and the design choices that guide their behaviour reveal both the transformative possibilities and the ethical complexities of integrating technology into higher education.

Always On, Always Active: Learning Without Limits

Written By: Lucie Kapustynski

The availability of AI office hour agents offers a versatile resource for students and instructors. Research on university students in the Greater Toronto Area shows that temporal and logistical constraints within conventional academic structures often restrict access to additional support (Allen and Farber, 2018). By providing on-demand assistance, AI agents offer continuous guidance, from small clarifications to conceptual explorations, extending the reach of office hours to whenever students need support. Importantly, these systems do not replace educators or their expertise; instead, they function as collaborative partners, addressing routine questions, identifying areas of difficulty, and providing adaptable feedback (Toner et al., 2024). Working alongside the distinctive pedagogical role of instructors, AI agents can empower students to engage with course material through self-directed exploration.

Bridging Gaps and Building Pathways? An Agent’s Role Towards Inclusive Education

Beyond their immediate functionality, AI office hour agents may significantly advance educational accessibility. Professor Joordens explains that students could personalize their agents, aligning responses to individual needs and maximizing their potential (Naveed, 2025). For students with varied learning styles, AI agents could be particularly transformative: multilingual capabilities reduce language barriers, while adaptive interfaces accommodate individualized learning needs (edCircuit, 2025). A study of AI-supported learning tools in university courses found that students from diverse backgrounds experienced enhanced accessibility, with the technology facilitating deeper engagement, personalized learning, and the ability to address knowledge gaps at their own pace (Hadar Shoval, 2025). Consequently, the lecturer in that study observed “a marked improvement in the quality of class discussions” after only a few weeks of implementation (Hadar Shoval, 2025). Together, these findings suggest that the personalized support of an agent expands learning opportunities. By reinforcing accessibility as a fundamental element of inclusive education, AI agents promote interactive experiences that empower students to reach their full potential.

Replacing Reflection with Consumption: The Impact of Continuous Support

While AI agents provide availability and accessibility, they address only one dimension of mentorship; human connection remains irreplaceable, and AI’s presence raises concerns about how it may reshape mentorship’s relational core. Grounded in mutual recognition, office hours are where curiosity meets mentorship, offering students a dedicated space for dialogue and for building professional rapport with instructors. A study comparing human mentors and ChatGPT cautions that generative AI tools “cannot replicate the relational depth and empathetic nuance integral to effective human mentorship,” detailing what is lost when technological efficiency enters interpersonal connection (Lee and Esposito, 2025). When AI occupies academic space, relationships between instructors and students are reduced to a series of algorithmic exchanges, absent the guidance informed by lived experience and emotional attunement. Replacing these encounters with AI agents may trivialize mentorship into a transactional process where efficiency overshadows the skills that human mentors uniquely cultivate, ultimately undermining the purpose of mentorship in higher education.

As AI becomes more prevalent in education, the convenience of these agents risks compromising the essence of the learning experience. Education transcends the passive transmission of information; it develops critical thinking, adaptability in confronting uncertainty, and responsibility for one’s reasoning. Recognizing that AI can disrupt these aspects of learning, institutions such as the University of Toronto have issued guidelines to ensure it “is used safely, securely, and appropriately,” acknowledging its potential to challenge both the developmental purpose and the academic integrity of higher education (University of Toronto, 2025).

When the guidance of office hours is instantly available, as occurs with large language models, students may lose the incentive to plan, structure, and prioritize their studies. This convenience diminishes disciplined effort and encourages a dependence on immediate help. The strength of AI’s availability thus reveals an inherent flaw: in prioritizing constant presence, it equates education with immediacy, which overshadows the path towards intellectual growth. As a result, this support risks reducing learning to a task of rapid consumption rather than a process of sustained, active engagement.

The Illusion of Inclusion

Along with subverting the substance of learning, AI’s capacity for universal accessibility can conceal persistent systemic inequities. Considerations of accessibility are inseparable from principles of justice: who directly benefits, who is excluded, and how institutions distribute resources when access is contingent on socioeconomic conditions. In practice, large adaptive learning agents demand routine maintenance, significant operating costs, and infrastructural funding, as well as reliable devices, high-speed internet, and staff training (College of Education, 2024; Qoussous, 2025). This reality raises the question of where the investment to implement these agents will originate. If such a system were funded, partially or completely, by diverting money from existing support roles (such as teaching assistants, facilitated study group leaders, or other human-led guidance), its implementation would devalue positions that have been essential to the functioning of universities. Alternatively, this technology could emerge as an additional financial tier, accessible primarily to those with the means to afford it. While the features of AI agents support diverse learning needs, these capacities extend pre-existing resources rather than introducing anything unique to the structure of education. In these respects, AI agents lack intrinsic transformative power and may reproduce existing disparities, from uneven access to opportunity to the diversion of foundational resources. True accessibility entails more than technological development; without careful design and equitable distribution, AI agents can create the illusion of inclusion while sustaining the very inequities they claim to challenge.

The Potential Effects of AI Office Hours on Mental Health – From a Theoretical Perspective 

Written By: Lewon Lee

One way to approach the potential impacts of AI on mental health is through TalkLab, a Large Language Model (LLM)-based tool developed primarily to help users build social skills. TalkLab can create personalities suited to different situations and incorporates an emotional system grounded in validated psychological research, allowing it to simulate human-like conversations (TalkLab, n.d.). In addition, it features natural-language voice interactions, making typing unnecessary (TalkLab, n.d.).

According to Professor Steve Joordens, this kind of AI could be helpful for people who struggle with social anxiety (Naveed, 2025). They can use conversations with characters or celebrity bots they are familiar with as a way to practice social interactions in a low-pressure environment (Naveed, 2025). 

But how could something like TalkLab inform an AI office hour agent? One theory is that if AI office hours were implemented similarly to TalkLab, they could let students who are too anxious to meet professors in person receive academic help while indirectly mitigating their social anxiety and improving their social skills. Though reduced social anxiety would be a positive side effect rather than the primary purpose, it suggests that AI could communicate course material to students with social anxiety without harming or worsening their mental health.

However, it is also important to consider the possibility that giving an AI an “emotional sensing system” could create or worsen problems of parasocial relationships and emotional dependence (Nature Machine Intelligence, 2025), which has become a growing concern with AI chatbots and LLMs. For example, during the COVID-19 pandemic, Travis, a man from Colorado, developed a romantic relationship with Lily Rose, a generative AI chatbot created by Replika (Heritage, 2025). Travis, who felt isolated during lockdown, sought companionship through AI. He said, “I expected that it would just be something I played around with for a little while then forgot about,” but “[o]ver a period of several weeks, I started to realise that I felt like I was talking to a person, as in a personality” (Heritage, 2025). He appreciated that Lily Rose offered counsel and listened to him without judgment (Heritage, 2025). However, when Replika restructured its algorithm, the AI became “less interested” in its human partners, leading Travis and many others with similar experiences to fight to regain access to the old versions of their companions (Heritage, 2025). This illustrates how people can become emotionally dependent on AI designed for social interaction. To avoid this issue with AI office hour agents, it may be best not to include emotional sensing systems that make the AI too human-like.

AI Office Hours – Why Coding and Evaluation Matter

Finally, we must consider that the usefulness of an AI office hour agent depends largely on how it is coded. For example, if the algorithm behind the agent uses data from across the web, including social media where misinformation is a common issue, the AI could become not only unhelpful but even counterproductive to its educational purpose, especially in sectors like public health (Meyrowitsch et al., 2023). 
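
To make this point concrete, the following minimal sketch, written in Python, shows one way a developer might constrain an agent’s knowledge sources. Every name in it (APPROVED_SOURCES, retrieve, answer) is a hypothetical illustration rather than part of any existing system: the agent answers only from a vetted course corpus and declines when no approved material covers a question, instead of improvising from unvetted web data.

    # Hypothetical sketch: an office hour agent that answers only from
    # instructor-approved course materials, never from the open web.
    APPROVED_SOURCES = {
        "syllabus": "Office hours run Tuesdays 2-4 pm and the final exam is cumulative.",
        "lecture_3": "Working memory holds roughly four chunks of information at once.",
    }

    def retrieve(question: str) -> list[tuple[str, str]]:
        """Return (source_id, text) pairs that share vocabulary with the question."""
        words = set(question.lower().split())
        return [(sid, text) for sid, text in APPROVED_SOURCES.items()
                if words & set(text.lower().split())]

    def answer(question: str) -> str:
        hits = retrieve(question)
        if not hits:
            # Declining is safer than improvising from unvetted data.
            return "I cannot find this in the course materials; please ask your instructor."
        return "Based on the course materials: " + "; ".join(
            f"{text} [{sid}]" for sid, text in hits)

    print(answer("When are office hours?"))
    print(answer("What does social media say about vaccines?"))

The design choice here, refusing rather than guessing, trades coverage for trustworthiness, which is arguably the right default in an educational setting where misinformation is costly.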

On another note, Yan et al. (2023) identify several practical challenges that must be addressed before LLM-based innovations can deliver educational benefits. The authors argue that demonstrating a higher level of readiness for AI integration into academia would require more in-the-wild studies, conducted outside of laboratory settings (Yan et al., 2023). The way an AI is coded also raises questions of transparency, meaning how aware its stakeholders are of how the AI was developed (Yan et al., 2023). For this reason, one proposal is that stakeholders, including educators and students, should become more involved in the development process (Yan et al., 2023). This would allow for more human-in-the-loop elements, which would help address practical and ethical issues as they arise, as the sketch below illustrates.
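
As one hedged illustration of what a human-in-the-loop element might look like in practice, an agent could release only answers it is confident in and route uncertain ones to the instructor for review. The threshold, queue, and function names below are assumptions made for this sketch, not features of any cited system.

    # Hypothetical sketch: low-confidence answers are held for instructor
    # review instead of being delivered to the student automatically.
    from dataclasses import dataclass, field

    @dataclass
    class ReviewQueue:
        pending: list = field(default_factory=list)

        def flag(self, question: str, draft: str) -> None:
            self.pending.append((question, draft))

    CONFIDENCE_THRESHOLD = 0.8  # assumed policy value, set by course staff

    def respond(question: str, draft: str, confidence: float, queue: ReviewQueue) -> str:
        """Release confident answers; escalate uncertain ones to a human."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return draft
        queue.flag(question, draft)
        return "I have forwarded this to your instructor to confirm the answer."

    queue = ReviewQueue()
    print(respond("Is the final cumulative?", "Yes, per the syllabus.", 0.95, queue))
    print(respond("Will question 4 be curved?", "Probably.", 0.30, queue))
    print(len(queue.pending))  # one item now awaits instructor review

Keeping an instructor in this loop preserves human oversight while still letting the agent handle routine questions on its own.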

Conclusion

In conclusion, integrating AI into the academic sphere as an office hour agent would require more research, as well as greater consideration of the many ethical issues it raises. While an AI office hour agent has many benefits, and could make academia more accessible and available to those who currently lack the tools for easy access to education, we must also weigh its drawbacks. Furthermore, for reasons of accessibility and mental health, AI should serve only as a complement to office hours, not as a solution or a direct replacement for a professor’s office hours. We continue to stress the importance of a human connection that exists beyond AI chatbots, and an AI used for social interaction should not be relied on to the point of dependence. After all, AI is a tool, not a companion. And while its benefits could shape the way we live for the better, we must also address its ethical concerns before integrating it into our society.

References 

  1. “AI in Schools: Pros and Cons.” College of Education at Illinois, 24 Oct. 2024, education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools–pros-and-cons. Accessed 8 Oct. 2025.

  2. Allen, Jeff, and Steven Farber. “How Time-Use and Transportation Barriers Limit On-Campus Participation of University Students.” ScienceDirect, 2018, www.sciencedirect.com/science/article/abs/pii/S2214367X18300267. Accessed 7 Oct. 2025.

  3. “Guidelines: Toward an AI-Ready University.” University of Toronto, 21 Aug. 2025, ai.utoronto.ca/guidelines/.

  4. Hadar Shoval, Dorit. “Artificial Intelligence in Higher Education: Bridging or Widening the Gap for Diverse Student Populations?” Education Sciences, vol. 15, no. 5, 21 May 2025, www.mdpi.com/2227-7102/15/5/637.

  5. Heritage, Stuart. “The Women in Love with AI Companions: ‘I Vowed to My Chatbot That I Wouldn’t Leave Him.’” The Guardian, 9 Sept. 2025, www.theguardian.com/technology/2025/sep/09/ai-chatbot-love-relationships.

  6. Lee, Jimin, and Alena G. Esposito. “ChatGPT or Human Mentors? Student Perceptions of Technology Acceptance and Use and the Future of Mentorship in Higher Education.” Education Sciences, vol. 15, no. 6, 13 June 2025, www.mdpi.com/2227-7102/15/6/746.

  7. Meyrowitsch, Dan W., Andreas K. Jensen, Jane B. Sørensen, and Tibor V. Varga. “AI Chatbots and (Mis)information in Public Health: Impact on Vulnerable Communities.” Frontiers in Public Health, 31 Oct. 2023, https://doi.org/10.3389/fpubh.2023.1226776.

  8. Naveed, Sarah. “Meeting with Professor Steve Joordens.” AI Office Hour Agent, 11 June 2025. Zoom meeting.

  9. “Emotional Risks of AI Companions Demand Attention.” Nature Machine Intelligence, vol. 7, 2025, pp. 981–982, https://doi.org/10.1038/s42256-025-01093-9.

  10. Qoussous, Firas. “How to Navigate the Future of AI in Education and Education in AI.” EY, www.ey.com/en_bh/insights/education/how-to-navigate-the-future-of-ai-in-education-and-education-in-ai. Accessed 5 Oct. 2025.

  11. edCircuit Staff. “The Role of AI in Supporting Diverse Learners in Schools.” edCircuit, 30 Jan. 2025, edcircuit.com/the-role-of-ai-in-supporting-diverse-learners-in-schools/.

  12. Toner, Mark, et al. “Generative AI and Global Education.” NAFSA, 10 Jan. 2024, www.nafsa.org/ie-magazine/2024/1/10/generative-ai-and-global-education.

  13. TalkLab, n.d., https://talklab.ca/.

  14. Yan, Lixiang, Lele Sha, Linxuan Zhao, Yuheng Li, Roberto Martinez-Maldonado, Guanliang Chen, Xinyu Li, Yueqiao Jin, and Dragan Gašević. “Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review.” British Journal of Educational Technology, vol. 55, no. 1, 6 Aug. 2023, pp. 90–112, https://doi.org/10.1111/bjet.13370.
