LLM-as-a-Judge for Privacy Evaluation? Exploring the Alignment of Human and LLM Perceptions of Privacy in Textual Data
Despite advances in the field of privacy-preserving Natural Language Processing (NLP), the accurate evaluation of privacy remains a significant challenge. Using LLMs as privacy evaluators presents a promising potential solution, a strategy inspired by their success in other subfields of NLP. In particular, the so-called LLM-as-a-Judge paradigm has achieved impressive results on a variety of natural language evaluation tasks, demonstrating high agreement rates with human annotators. Recognizing that privacy is both subjective and difficult to define, we investigate whether LLM-as-a-Judge can also be leveraged to evaluate the privacy sensitivity of textual data, and we measure how closely LLM evaluations align with human perceptions of privacy in text. In a study involving 10 datasets, 13 LLMs, and 677 human survey participants, we confirm that privacy is indeed a difficult concept to measure empirically, as evidenced by generally low inter-human agreement rates. Nevertheless, we find that LLMs can accurately model a global human privacy perspective, and through an analysis of human and LLM reasoning patterns, we discuss the merits and limitations of LLM-as-a-Judge for privacy evaluation in textual data. Our findings pave the way for exploring the feasibility of LLMs as privacy evaluators, addressing a core challenge in applying innovative technical solutions to pressing privacy issues.
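At its core, the LLM-as-a-Judge setup described above amounts to prompting an LLM to rate the privacy sensitivity of a text and comparing its ratings against human annotations with an agreement metric. The sketch below is a minimal, hypothetical illustration of that idea only: the prompt wording, the 1-to-5 rating scale, the placeholder `call_llm` function, and the use of Cohen's kappa as the agreement measure are assumptions for illustration, not the protocol or metrics used in the paper.

```python
# Hypothetical sketch: LLM-as-a-Judge for privacy sensitivity, compared to human ratings.
from collections import Counter

# Assumed prompt and 1-5 scale; the paper's actual instructions may differ.
JUDGE_PROMPT = (
    "On a scale from 1 (not sensitive) to 5 (highly sensitive), rate the privacy "
    "sensitivity of the following text. Answer with a single digit.\n\nText: {text}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (assumption; returns a fixed rating here)."""
    return "3"

def judge_privacy(text: str) -> int:
    """Ask the LLM judge for a 1-5 privacy sensitivity rating and parse the digit."""
    reply = call_llm(JUDGE_PROMPT.format(text=text))
    return int(reply.strip()[0])

def cohen_kappa(a: list[int], b: list[int]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

if __name__ == "__main__":
    texts = ["My SSN is 123-45-6789.", "The weather was nice today."]
    human_ratings = [5, 1]                      # e.g., collected from survey participants
    llm_ratings = [judge_privacy(t) for t in texts]
    print("Agreement (Cohen's kappa):", cohen_kappa(human_ratings, llm_ratings))
```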
| Attribute | Value |
|---|---|
| Address | Taipei, Taiwan |
| Authors | Stephen Meisenbacher, Alexandra Klymenko, Florian Matthes |
| Citation | Meisenbacher, S.; Klymenko, A.; Matthes, F.: 2025. LLM-as-a-Judge for Privacy Evaluation? Exploring the Alignment of Human and LLM Perceptions of Privacy in Textual Data. In Proceedings of the 2025 Workshop on Human-Centered AI Privacy and Security (HAIPS '25). Association for Computing Machinery, New York, NY, USA, 126–138. |
| Key | Me25f |
| Title | LLM-as-a-Judge for Privacy Evaluation? Exploring the Alignment of Human and LLM Perceptions of Privacy in Textual Data |
| Type of publication | Workshop |
| Year | 2025 |
| Team members | Stephen Meisenbacher, Alexandra Klymenko |
| Publication URL | https://dl.acm.org/doi/10.1145/3733816.3760760 |
| Acronym | HAIPS |