Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them.
Large Language Models work quite well with general-purpose data and many tasks in Natural Language Processing. However, they show several limitations when used for a task such as domain-specific abstractive text summarization. This paper identifies three of those limitations as research problems in the context of abstractive text summarization: 1) Quadratic complexity of transformer-based models with respect to the input text length; 2) Model Hallucination, which is a model's tendency to generate factually incorrect text; and 3) Domain Shift, which happens when the distributions of the model's training and test corpora are not the same. Along with a discussion of the open research questions, this paper also provides an assessment of existing state-of-the-art techniques relevant to domain-specific text summarization to address the research gaps.
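To make the first challenge concrete, the sketch below recalls why standard transformer self-attention scales quadratically with the input length. This is the textbook scaled dot-product formulation, not material from the paper itself; the symbols n (input length in tokens) and d_k (key dimension) are the usual notation, assumed here for illustration.

```latex
% Sketch: why self-attention cost grows quadratically in the input length n.
% Q, K, V are the query, key, and value matrices for an n-token input.
\[
Q, K, V \in \mathbb{R}^{n \times d_k}, \qquad
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
% The score matrix Q K^T has shape n x n, so computing it takes O(n^2 d_k)
% time and storing it takes O(n^2) memory: doubling the document length
% roughly quadruples the attention cost, which is what makes long
% domain-specific documents hard for vanilla transformer summarizers.
```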
| Attribute | Value | 
|---|---|
| Authors | Anum Afzal, Juraj Vladika, Daniel Braun, Prof. Dr. Florian Matthes | 
| Citation | Afzal, A.; Vladika, J.; Braun, D.; Matthes, F. Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence (ICAART 2023), Lisbon, Portugal. SCITEPRESS - Science and Technology Publications. | 
| Research project | Abstractive Text Summarization for Domain-Specific Documents (ATESD) | 
| Title | Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them. | 
| Type of publication | Conference | 
| Year | 2023 | 
| Publication URL | https://www.researchgate.net/publication/369016469_Challenges_in_Domain-Specific_Abstractive_Summarization_and_How_to_Overcome_Them | 