Multi-Agent LLM Architectures for Quality Assurance and Governance in Personalized Learning Systems
Thesis (MA)
Advisor(s): Alisa Mehler (alisa.mehler(at)tum.de)
CONTEXT
As personalized learning systems become increasingly automated, new risks emerge: hallucinated content, inconsistencies, didactic errors, and loss of governance control. Ensuring trustworthiness and robustness becomes critical when LLMs generate educational material dynamically.
One promising approach is the use of multi-agent LLM architectures, where specialized agents collaborate in structured workflows such as Generate–Verify–Refine.
This thesis investigates how multi-agent systems can ensure content quality, didactic correctness, and governance compliance in personalized learning environments based on the textbook Informationsmanagement by Krcmar (2015).
The objective is to design and implement a governance-oriented architecture that reduces hallucination risks, increases transparency, and enables scalable quality assurance mechanisms suitable for academic and executive education.
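The Generate–Verify–Refine workflow mentioned above can be illustrated with a minimal sketch. The three agent functions below are hypothetical stand-ins for LLM calls (the names, the simple string checks, and the escalation rule are all illustrative assumptions), so only the control flow of the multi-agent loop is shown, not a real implementation.

```python
# Minimal sketch of a Generate-Verify-Refine workflow with three
# specialized agents. Each agent function is a hypothetical stand-in
# for an LLM call; they operate on plain strings so the control flow
# is runnable without any model backend.

def generator_agent(topic: str) -> str:
    """Drafts learning content for a topic (stand-in for an LLM call)."""
    return f"Draft lesson on {topic}"

def verifier_agent(draft: str) -> list[str]:
    """Returns a list of issues; an empty list means the draft passes."""
    issues = []
    if "source:" not in draft:  # toy groundedness check
        issues.append("missing source reference")
    return issues

def refiner_agent(draft: str, issues: list[str]) -> str:
    """Revises the draft to address the verifier's issues."""
    if "missing source reference" in issues:
        draft += " (source: Krcmar 2015)"
    return draft

def generate_verify_refine(topic: str, max_rounds: int = 3) -> str:
    """Loops generation, verification, and refinement until approval."""
    draft = generator_agent(topic)
    for _ in range(max_rounds):
        issues = verifier_agent(draft)
        if not issues:  # verifier approves -> stop
            return draft
        draft = refiner_agent(draft, issues)
    # human-in-the-loop fallback after repeated failures
    raise RuntimeError("escalate to human review")

print(generate_verify_refine("IT governance"))
# -> Draft lesson on IT governance (source: Krcmar 2015)
```

The bounded loop with an explicit escalation path reflects the governance idea in the thesis: automated refinement is attempted a fixed number of times, and unresolved content is handed to a human reviewer rather than released.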
Research Objectives of the master’s thesis could include:
- Definition of specialized agent roles (generation, review, QA, governance)
- Multi-stage validation workflows (Generate–Verify–Refine)
- Prompt governance mechanisms and quality criteria
- Comparison of single-agent vs. multi-agent architectures
- Human-in-the-loop integration for critical learning content
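One way to make the agent roles listed above explicit is a role registry. The following sketch is a possible starting point; the role names follow the list above, while the responsibility texts and system prompts are illustrative placeholders, not validated prompts.

```python
from dataclasses import dataclass

# Hypothetical role registry: each specialized agent is defined by its
# responsibility and the system prompt constraining its behaviour.

@dataclass(frozen=True)
class AgentRole:
    name: str
    responsibility: str
    system_prompt: str

ROLES = {
    "generation": AgentRole(
        "generation",
        "draft personalized learning content",
        "You create learning material grounded in the given textbook chapter.",
    ),
    "review": AgentRole(
        "review",
        "check didactic correctness",
        "You review content for didactic soundness and flag errors.",
    ),
    "qa": AgentRole(
        "qa",
        "detect hallucinations and inconsistencies",
        "You verify every claim against the provided source text.",
    ),
    "governance": AgentRole(
        "governance",
        "enforce policy and escalate critical content to humans",
        "You check compliance rules and escalate critical content.",
    ),
}

for role in ROLES.values():
    print(f"{role.name}: {role.responsibility}")
```

Keeping prompts in a central, immutable registry like this is one simple form of prompt governance: role definitions can be versioned, reviewed, and audited independently of the pipeline code.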
Potential research questions guiding the thesis include:
- What factors determine the trustworthiness of automatically generated learning content?
- How can governance mechanisms ensure quality and consistency in adaptive AI-based education?
- What barriers hinder scalable validation of LLM-generated educational material?
- How can support and verification structures be strengthened through multi-agent architectures?
- Which workflows best reduce hallucination risks and increase transparency?
These and similar questions can serve as a foundation for a BA/MA/Guided Research thesis. Alternative or additional research questions may be proposed and discussed.
TASK(S)
Potential thesis activities include:
- Review research on multi-agent LLM systems and AI governance
- Identify risks of automated educational content generation
- Define agent roles for content creation, didactic review, and validation
- Design multi-stage agent workflows for robust learning path generation
- Implement a prototype comparing single-agent vs. multi-agent approaches
- Develop evaluation criteria for trustworthiness and quality assurance
- Discuss governance frameworks for scalable AI-based education
- Explore human oversight strategies and maintainability implications
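The comparison of single-agent and multi-agent approaches requires shared evaluation criteria. The sketch below shows one possible shape for such a comparison; the three criteria and the sample outputs are illustrative assumptions standing in for real trustworthiness metrics developed in the thesis.

```python
# Hypothetical evaluation sketch: score pipeline outputs against simple
# quality criteria. Each criterion is a predicate on the output text;
# the overall score is the fraction of criteria satisfied.

CRITERIA = {
    "cites_source": lambda text: "source:" in text,
    "within_length": lambda text: len(text.split()) <= 200,
    "no_todo_markers": lambda text: "TODO" not in text,
}

def score(text: str) -> float:
    """Fraction of quality criteria the text satisfies."""
    passed = sum(check(text) for check in CRITERIA.values())
    return passed / len(CRITERIA)

# Illustrative sample outputs (not real model outputs):
single_agent_output = "Lesson draft TODO add citation"
multi_agent_output = "Lesson draft (source: Krcmar 2015)"

print(score(single_agent_output))  # -> 0.3333333333333333
print(score(multi_agent_output))   # -> 1.0
```

A checklist of automatable predicates like this is deliberately simple; in the thesis it could be extended with LLM-based judges or human ratings, while keeping the same score-per-pipeline comparison structure.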
REQUIREMENTS
- Strong interest in AI architectures, governance, and educational systems
- Experience with Python and LLM-based prototyping
- Motivation to explore robustness, QA workflows, and system control
- Structured approach to evaluating trust and quality in AI systems
- Independent and reliable work style