DA-Pred: Performance Prediction for Text Summarization under Domain-Shift and Instruct-Tuning
Large Language Models (LLMs) often do not perform as expected under domain shift or after instruct-tuning. A reliable indicator of LLM performance in these settings could assist in decision-making. We present a method that uses the known performance in high-resource domains and fine-tuning settings to predict performance in low-resource domains or for base models, respectively. In our paper, we formulate the task of performance prediction, construct a dataset for it, and train regression models to predict the resulting change in performance. Our proposed methodology is lightweight and, in practice, can help researchers and practitioners decide whether resources should be allocated to data labeling and LLM instruct-tuning.
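The abstract describes training regression models on known (high-resource, fine-tuned) performance to predict the change in performance under domain shift. The sketch below illustrates that general idea with scikit-learn; the features (source-domain ROUGE-L, vocabulary overlap, target-domain data size), the synthetic data, and the gradient-boosting regressor are all illustrative assumptions, not the paper's actual feature set, dataset, or model.

```python
# Minimal sketch of performance-change prediction as regression.
# Features and data are hypothetical placeholders, not the paper's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per (model, source-domain, target-domain) triple:
#   [ROUGE-L in the high-resource source domain,
#    vocabulary overlap between source and target domains,
#    log10 of available target-domain examples]
n = 500
X = np.column_stack([
    rng.uniform(0.20, 0.45, n),  # source-domain ROUGE-L
    rng.uniform(0.10, 0.90, n),  # vocabulary overlap
    rng.uniform(0.0, 4.0, n),    # log10(#target examples)
])
# Synthetic target: change in ROUGE-L when moving to the target domain.
y = -0.10 + 0.08 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAE on held-out pairs: {mean_absolute_error(y_te, reg.predict(X_te)):.4f}")

# Predicted low-resource performance = known source performance + predicted change.
source_rouge = 0.38
delta = reg.predict([[source_rouge, 0.55, 2.0]])[0]
print(f"Predicted target-domain ROUGE-L: {source_rouge + delta:.3f}")
```

Framing the task as predicting a performance *delta* rather than an absolute score is what makes the approach lightweight: a small regressor over cheap features can be consulted before committing resources to labeling or instruct-tuning.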
| Attribute | Value |
|---|---|
| Address | Suzhou, China |
| Authors | Anum Afzal, Florian Matthes, Alexander R. Fabbri |
| Citation | Anum Afzal, Florian Matthes, and Alexander Fabbri. 2025. DA-Pred: Performance Prediction for Text Summarization under Domain-Shift and Instruct-Tuning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 7632–7643, Suzhou, China. Association for Computational Linguistics. |
| Key | Af25d |
| Title | DA-Pred: Performance Prediction for Text Summarization under Domain-Shift and Instruct-Tuning |
| Type of publication | Conference |
| Year | 2025 |
| Team members | Anum Afzal |
| Publication URL | https://aclanthology.org/2025.emnlp-main.387/ |
| Acronym | EMNLP |