News

Two research papers presented at ACM CSCW 2023

Research

Julia Sasse and Tina Kuo recently presented two research papers, co-authored by Jens Grossklags, at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023) in Minneapolis, USA. Congratulations!

The articles have been published in the Proceedings of the ACM on Human-Computer Interaction, Volume 7 (CSCW1 and CSCW2):

  • Sasse, J., & Grossklags, J. (2023). Breaking the Silence: Investigating Which Types of Moderation Reduce Negative Effects of Sexist Social Media Content. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), Article No. 327. Publisher Version (Open Access).

    Sexist content is widespread on social media and can reduce women's psychological well-being and their willingness to participate in online discourse, making it a societal issue. To counter these effects, social media platforms employ moderators. To date, little is known about the effectiveness of different forms of moderation in creating a safe space and their acceptance, in particular from the perspective of women as members of the targeted group and users in general (rather than perpetrators). In this research, we propose that some common forms of moderation can be systematized along two facets of visibility, namely visibility of sexist content and of counterspeech. In an online experiment (N = 839), we manipulated these two facets and tested how they shaped social norms, feelings of safety, and intent to participate, as well as fairness, trustworthiness, and efficacy evaluations. In line with our predictions, deletion of sexist content - i.e., its invisibility - and (public) counterspeech - i.e., its visibility - against visible sexist content contributed to creating a safe space. Looking at the underlying psychological mechanism, we found that these effects were largely driven by changes in what was perceived as normative in the presented context. Interestingly, deletion of sexist content was judged as less fair than counterspeech against visible sexist content. Our research contributes to a growing body of literature that highlights the importance of norms in creating safer online environments and provides practical implications for moderators for selecting actions that can be effective and accepted.
     
  • Kuo, T., Hernani, A., & Grossklags, J. (2023). The Unsung Heroes of Facebook Groups Moderation: A Case Study of Moderation Practices and Tools. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), Article No. 97. Publisher Version (Open Access).

    Volunteer moderators have the power to shape society through their influence on online discourse. However, the growing scale of online interactions increasingly presents significant hurdles for meaningful moderation. Furthermore, there are only limited tools available to assist volunteers with their work. Our work aims to meaningfully explore the potential of AI-driven, automated moderation tools for social media to assist volunteer moderators. One key aspect is to investigate the degree to which tools must become personalizable and context-sensitive in order to not just delete unsavory content and ban trolls, but to adapt to the millions of online communities on social media mega-platforms that rely on volunteer moderation. In this study, we conduct semi-structured interviews with 26 Facebook Group moderators in order to better understand moderation tasks and their associated challenges. Through qualitative analysis of the interview data, we identify and address the most pressing themes in the challenges they face daily. Using interview insights, we conceptualize three tools with automated features that assist them in their most challenging tasks and problems. We then evaluate the tools for usability and acceptance using a survey drawing on the technology acceptance literature with 22 of the same moderators. Qualitative and descriptive analyses of the survey data show that context-sensitive, agency-maintaining tools in addition to trial experience are key to mass adoption by volunteer moderators in order to build trust in the validity of the moderation technology.