New publication: Can we use automated approaches to measure the quality of online political discussion?

We’re proud to announce that our consortium members Sjoerd Stolwijk and Damian Trilling (both University of Amsterdam) and Simon Münker (Trier University) contributed to a newly published paper on measuring the debate quality of online political discussions. The paper appeared in the journal Communication Methods and Measures (Routledge) and is open access.

Our researchers review how debate quality has been measured in communication science and systematically compare 50 automated metrics against a large set of manually coded comments. Based on their experiments, they give clear recommendations on how (not) to measure debate quality along the Habermasian dimensions of interactivity, diversity, rationality, and (in)civility.

Their results show that transformer models and generative AI (such as Llama and the GPT models) outperform older methods, yet performance varies by concept: some dimensions (e.g. rationality) remain difficult to capture even for human coders. Which measure should be preferred in future empirical applications likely depends on the objective of the study in question. For some genres, languages, and communication styles (e.g. satire), it is strongly advisable to test the accuracy of automated methods against human interpretation beforehand, even for widely used methods. Some approaches and implementations performed so poorly that they are unsuitable for studying debate quality.
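To make that last recommendation concrete: before applying an automated measure to a new genre or language, its output can be benchmarked against a hand-coded sample. The snippet below is a minimal illustrative sketch, not taken from the paper; the labels are invented, and it simply compares automated labels with human-coded gold labels using standard agreement metrics from scikit-learn.

```python
# Minimal validation sketch (illustrative only): check how well an
# automated incivility classifier agrees with human coders before
# trusting it on new material.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Hypothetical human-coded gold labels (1 = uncivil, 0 = civil)
human_labels = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
# Labels produced by the automated method under evaluation
automated_labels = [0, 1, 0, 1, 1, 0, 0, 1, 0, 0]

print("Accuracy:", accuracy_score(human_labels, automated_labels))
print("F1 (uncivil class):", f1_score(human_labels, automated_labels))
# Chance-corrected agreement; often more informative than raw accuracy
print("Cohen's kappa:", cohen_kappa_score(human_labels, automated_labels))
```

If agreement on such a validation sample is low, the automated measure should not be used for that concept, genre, or language without further adaptation.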