WGU's AI-Powered Student Support: Reimagining Personalized Education
Artificial intelligence (AI) has transitioned from a hypothetical concept to a practical reality in education, prompting questions about its implementation. WGU Labs has been exploring the potential of AI to transform student support services, focusing on leveraging generative AI capabilities and tools to enhance the student experience. This article discusses the development of an AI-powered assistant designed to provide students with 24/7 support across the full range of their needs. It details the journey from the initial concept to the Minimum Viable Product (MVP) development, the critical pivot based on user testing, and the insights gained from building AI systems that effectively serve students.
The Need for AI-Powered Student Support
WGU serves a large and diverse student population with varied needs. While WGU has a human-led support system, research has identified challenges in its utilization. Students have expressed uncertainty about the availability of support and a reluctance to ask for help. Program Mentors, who typically serve as the first point of contact for students, spend a significant amount of time addressing routine inquiries, which limits their ability to provide transformational support.
Transformational support includes developing personal connections with students and providing emotional, motivational support. Mentors act as accountability partners and academic advisors, guiding students through their programs. This type of support has been shown to have a significant impact on student success and the overall student experience at WGU. By automating routine inquiries, mentors can focus on providing the transformational support that students value.
The AI-enabled student support system was initially conceived as an intelligent chatbot to address these challenges. The system focused on academic assistance, social and psychological support, career guidance, student financial aid, and general resource matching.
Initial Multi-Agent Architecture
The initial approach to developing the AI-powered assistant involved creating specialized agents for specific tasks and areas of knowledge, orchestrated using tools like MCP and LangChain. This architecture aimed to provide deep specialization in each support area and modularity, allowing for the gradual addition of agents over time.
The original design included five subject matter expert (SME) agents:
- Mental Health and Well-being Agent: This agent supported students with questions about motivation, feeling overwhelmed, or mental health concerns.
- Academic Support Agent: This agent addressed broader academic needs, such as questions about course enrollment, program requirements, and navigating academic policies.
- Financial Aid Agent: This agent provided guidance throughout the complex financial aid process.
- Career Agent: This agent engaged students throughout their journey, integrating career exploration into multiple touchpoints.
- General Knowledge Agent: This agent, initially intended for uncategorized questions, was later eliminated as many queries could be handled by the other specialized agents.
An Orchestration Agent triaged incoming questions to the appropriate specialist, and a Compassionate Coach Agent refined every response, ensuring that each interaction was framed by principles of student success, belonging, and coaching pedagogy.
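To make the flow concrete, here is a minimal sketch of the triage-then-refine pattern described above. All names and the keyword-based routing are illustrative stand-ins; the real system used LLM-driven orchestration (e.g., via MCP and LangChain), not keyword matching.

```python
from typing import Callable

# Hypothetical specialist handlers keyed by topic; each returns a draft answer.
def mental_health_agent(q: str) -> str:
    return f"[well-being] {q}"

def academic_agent(q: str) -> str:
    return f"[academic] {q}"

def financial_aid_agent(q: str) -> str:
    return f"[financial aid] {q}"

def career_agent(q: str) -> str:
    return f"[career] {q}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "well-being": mental_health_agent,
    "academic": academic_agent,
    "financial aid": financial_aid_agent,
    "career": career_agent,
}

def triage(question: str) -> str:
    """Stand-in for the Orchestration Agent: keyword routing instead of an LLM."""
    keywords = {
        "overwhelmed": "well-being", "motivation": "well-being",
        "enroll": "academic", "course": "academic",
        "loan": "financial aid", "fafsa": "financial aid",
        "resume": "career", "job": "career",
    }
    for word, topic in keywords.items():
        if word in question.lower():
            return topic
    return "academic"  # fallback once the General Knowledge Agent was removed

def compassionate_coach(draft: str) -> str:
    """Stand-in for the Compassionate Coach Agent's refinement pass."""
    return f"{draft} You're making real progress, and we're here whenever you need us."

def answer(question: str) -> str:
    topic = triage(question)
    draft = SPECIALISTS[topic](question)
    return compassionate_coach(draft)
```

Note that every answer passes through two extra hops (triage and coaching) beyond the specialist itself, which becomes relevant to the latency discussion below.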
Challenges with the Multi-Agent Architecture
User testing revealed a critical flaw in the multi-agent orchestration: slow response times. The average response time was 38 seconds, with some instances taking over 60 seconds. This latency was due to the complexity of routing between agents, with each question requiring analysis, routing, processing, coaching refinement, and return.
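The latency problem follows directly from the sequential pipeline: every stage's model call must finish before the next begins, so per-stage delays add up rather than overlap. The per-stage numbers below are illustrative assumptions chosen to total the reported 38-second average, not measured values.

```python
# Illustrative (not measured) per-stage latencies in seconds for one question.
# Sequential stages compound: the end-to-end time is the sum, not the max.
pipeline = {
    "analyze question": 6.0,
    "route to specialist": 5.0,
    "specialist response": 14.0,
    "coaching refinement": 9.0,
    "return to student": 4.0,
}

total = sum(pipeline.values())
print(f"end-to-end latency: {total:.1f}s")
```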
The multi-agent approach also led to conversational issues. Agents would sometimes lose the topic or restart the conversation, with each SME thinking their contribution was the beginning of a new session. Additionally, some SMEs provided lengthy responses, with multiple paragraphs and follow-up questions, which was not ideal for students seeking quick and concise answers.
The Single-Agent Pivot
Developments in large language models (LLMs) presented a new opportunity to consolidate more specialized knowledge into a single agent without sacrificing depth. The multi-agent system was rebuilt as a single, comprehensive agent, incorporating all subject matter expertise and compassionate coaching elements. This agent utilized a Retrieval Augmented Generation (RAG) function to access specialized knowledge bases without the need for complex hand-offs to multiple agents.
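A minimal sketch of this single-agent RAG pattern follows: one retrieval step over a merged knowledge base, then one prompt assembled for one model call. The passages, the coaching persona text, and the word-overlap scoring (a stand-in for vector-similarity search) are all hypothetical.

```python
# Merged knowledge base: (topic, passage) pairs the single agent can retrieve from.
KNOWLEDGE_BASE = [
    ("financial aid", "FAFSA applications for the next term open on October 1."),
    ("academic", "Course enrollment requests go through your Program Mentor."),
    ("well-being", "The well-being center offers free 24/7 counseling sessions."),
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question.

    A real implementation would use embedding similarity instead of word overlap.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda entry: len(q_words & set(entry[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Assemble one prompt: coaching persona plus retrieved context, no hand-offs."""
    context = "\n".join(retrieve(question))
    return (
        "You are a compassionate student-success coach.\n"
        f"Context:\n{context}\n"
        f"Student question: {question}\n"
    )
```

Because retrieval replaces inter-agent routing, the whole exchange costs one retrieval plus one generation, which is consistent with the latency drop described below.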
The results were significant: response times dropped to under 4.9 seconds while maintaining response quality, empathy, and depth of support. The single-agent architecture proved more efficient and effective than the multi-agent approach.
The system may also employ agentic actors to complete tasks on behalf of the user. For example, a student might ask the assistant to schedule a call with their mentor; the agentic actor would compare the student's calendar with the mentor's and book the call on both. In another case, the actor could take a student's complicated question about their transcript and forward it to an AI agent in the registrar's office. That specialized agent might then verify prior enrollment at another institution, validate the credits earned, and update the student's transcript, all while the student works on an upcoming research paper.
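The calendar-comparison step in the scheduling example could be sketched as follows. The busy-period representation and the fixed 30-minute slot search are assumptions for illustration; a real actor would read and write through the institution's calendar APIs.

```python
from datetime import datetime, timedelta

def is_free(busy: list[tuple[datetime, datetime]],
            start: datetime, end: datetime) -> bool:
    """A slot is free if it overlaps no busy period on this calendar."""
    return all(end <= b_start or start >= b_end for b_start, b_end in busy)

def first_common_slot(student_busy: list[tuple[datetime, datetime]],
                      mentor_busy: list[tuple[datetime, datetime]],
                      day_start: datetime, day_end: datetime,
                      duration: timedelta = timedelta(minutes=30)):
    """Scan the day in fixed increments for the first slot free on both calendars."""
    slot = day_start
    while slot + duration <= day_end:
        if (is_free(student_busy, slot, slot + duration)
                and is_free(mentor_busy, slot, slot + duration)):
            return slot  # the actor would then book this slot on both calendars
        slot += duration
    return None  # no shared opening; the actor would try another day
```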
User Testing Insights
Approximately 600 students were invited to test the compassionate coach, with 180 actively engaging and 160 providing detailed feedback via a follow-up survey. The insights proved invaluable.
Students valued having 24/7 judgment-free support for questions.
However, seventy-five percent of respondents identified specific problems. Technical bugs topped the list: garbled text, system prompts leaking into messages, and incorrect name usage. Students also flagged information about exam fees as incorrect, though on review the AI's answer turned out to be accurate.
Many students are already using ChatGPT, Claude, and other AI tools. Any custom tool must be demonstrably better for institution-specific needs through deep system integration and personalized support. It must provide individualized and well-informed guidance that feels like a trusted friend.
Next Steps and Future Directions
Based on the user testing feedback, the immediate priorities are to eliminate technical bugs and correct factual errors. The next phase focuses on building the tool in a secure AWS environment, enabling a connection to real, live student data. This will allow the compassionate AI coach to recognize individual students and understand their academic progress, unlocking personalized support at scale.
Future plans include proactive outreach when risk signals emerge, with the tool sending timely messages to students based on research into effective interventions. The compassionate coach will also have direct scheduling integration and seamless handoffs to human support when needed. Additionally, there are plans to explore how the tool can learn from top-performing mentors to scale their best practices.
Another significant challenge is how to handle the support agent's memory - specifically, how to identify and classify which information is worth remembering, how long to retain it, and how to retrieve it in the current context. In some cases, the support agent may need to recall something relevant from weeks or months earlier. Resolving this will require careful engineering and will likely be shaped by evolving industry standards and tooling.
Key Principles for Educational AI Development
The experience of developing the AI-powered student support system reinforced two key principles for product development, especially around emerging educational AI:
- Architectural elegance must yield to user experience. The theoretically superior multi-agent system proved problematic because it was too slow. Students need help that is fast, relevant, and reliable.
- Emerging tools need to go beyond general-purpose use, through deep institutional integration and actionable support. Educational AI must provide genuine value to users.
The technology is continually improving. It may not pay off in the long run to lock into a particular tool or set of technical components to build a solution like this, because LLMs and their related tools are constantly evolving. What works well today may be obsolete in a month, so it is imperative to track the latest developments and be willing to try different approaches as they become available. As WGU prepares for a large pilot with students, the focus is on reimagining personalized support in online education.

