Dialogue Repair in Conversational Agents with Large Language Models
Abstract:
Conversational agents are increasingly used in real-world applications such as education, customer support, and healthcare. Despite advances in natural language understanding, these systems remain vulnerable to conversational breakdowns caused by ambiguous, noisy, or out-of-distribution user input. Traditional intent-based dialogue pipelines rely on predefined intents and actions; when input falls outside this space, they often respond with generic fallback messages that fail to restore the interactional flow, limiting their ability to recover from misunderstandings. In contrast, large language models (LLMs) demonstrate strong capabilities in handling ambiguous input, making them promising candidates for the dialogue repair task. However, their integration into production systems raises additional challenges related to efficiency, latency, controllability, and evaluation.
This thesis investigates how LLM-based dialogue repair can be integrated into existing dialogue management frameworks to improve conversational robustness while balancing efficiency and cost. Drawing on foundational theories from conversation analysis, the work frames dialogue repair as a structured interactional process rather than a purely technical error-handling task. Given this perspective, a hybrid architecture is proposed that combines intent-based dialogue management with an LLM-based component that is selectively invoked to generate clarifications, disambiguate user input, or recover from breakdowns that fall outside the pipeline's predefined intent space. This design aims to combine the robustness and flexibility of LLMs with the efficiency and controllability of intent-based systems. The thesis further addresses the challenge of evaluating dialogue repair by organizing relevant datasets, automatic metrics, and human-centered evaluation methodologies into a unified evaluation framework that integrates repair success, efficiency, system cost, and user satisfaction.
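To make the selective-invocation idea concrete, the routing logic of such a hybrid architecture can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the threshold value, the stub intent classifier, and the `llm_repair` placeholder are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass

# Assumed tuning parameter: below this NLU confidence, route to the LLM.
CONFIDENCE_THRESHOLD = 0.7

@dataclass
class NLUResult:
    intent: str
    confidence: float

def classify_intent(utterance: str) -> NLUResult:
    """Stand-in for an intent classifier; a real system would use a trained NLU model."""
    known = {"track my order": ("track_order", 0.93)}
    intent, conf = known.get(utterance.lower(), ("fallback", 0.21))
    return NLUResult(intent, conf)

def llm_repair(utterance: str) -> str:
    """Placeholder for an LLM call that generates a clarification request."""
    return f"Could you clarify what you mean by '{utterance}'?"

def respond(utterance: str) -> str:
    nlu = classify_intent(utterance)
    if nlu.intent != "fallback" and nlu.confidence >= CONFIDENCE_THRESHOLD:
        # Confident match: handled by the efficient scripted pipeline.
        return f"[intent:{nlu.intent}]"
    # Low confidence or out-of-scope input: invoke the LLM repair component.
    return llm_repair(utterance)
```

Only uncertain turns incur the latency and cost of an LLM call, which is the efficiency argument behind the hybrid design.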
The expected contributions are:
(i) a theoretically grounded synthesis of conversational breakdown and repair literature;
(ii) a reference hybrid architecture for LLM-based dialogue repair; and
(iii) an empirical evaluation framework and dataset demonstrating how hybrid systems improve dialogue robustness and user satisfaction compared to traditional repair approaches.
| Attribute | Value |
|---|---|
| Title (de) | |
| Title (en) | Dialogue Repair in Conversational Agents with Large Language Models |
| Project | AssistD |
| Type | Master's Thesis |
| Status | started |
| Student | David Leon Uhlenbrock |
| Advisor | Alexandre Mercier |
| Supervisor | Prof. Dr. Florian Matthes |
| Start Date | 08.12.2025 |
| Sebis Contributor Agreement signed on | |
| Checklist filled | Yes |
| Submission date | 08.06.2026 |