At the end of March, the Jožef Stefan Institute, the Slovenian Press Agency (STA), and the University of Maribor hosted a joint event exploring how modern AI tools can be integrated into the media landscape and how they can support journalistic work in an increasingly complex information environment.
The event brought together journalists, researchers, and media professionals to discuss how technology can help navigate challenges such as information overload, disinformation, and the changing dynamics of digital communication. Through presentations and discussions, participants examined how AI can provide new analytical perspectives while maintaining the core principles of responsible journalism.
Several projects and tools were presented, offering different approaches to analysing digital media environments. These included our projects TWON and PERISCOPE, which developed a digital twin of online social networks, enabling the simulation of information diffusion and the effects of algorithmic systems; Between the Lines, developed by the Jožef Stefan Institute, which compares how different media outlets report on the same event; Solaris, which analyses the impact of AI on democratic processes; and Event Registry, a platform for monitoring and analysing the global media landscape in real time.
Discussions throughout the event highlighted both the opportunities and limitations of AI in journalism. While these tools offer valuable insights into the underlying mechanisms of social media and information flows, they are best understood as supportive instruments that complement, rather than replace, professional journalistic work.
The exchange also underlined the growing importance of media literacy and the need for robust, transparent analytical tools to strengthen democratic discourse in digital societies.
We would like to sincerely thank all organisers and participants for making this event such a valuable and inspiring exchange.
On 17 March 2026, we held our final TWON evening event in Brussels, bringing together policymakers, researchers, civil society representatives, and practitioners to discuss how Europe can strengthen digital sovereignty in the digital public sphere.
What does digital sovereignty mean in practice when a handful of global platforms structure Europe’s public sphere? How can the European Union ensure that online social networks operate in line with democratic values, fundamental rights, and the protection of minors? And how can research support policymakers in shaping and enforcing a European model of platform governance? These questions framed the final TWON event in Brussels and reflected the growing urgency of addressing geopolitical tensions, systemic disinformation, and the societal impact of platforms such as TikTok.
During the evening, we presented the EU-funded research project “TWON – Twin of Online Social Networks” and discussed its results and implications. TWON examined how the design of online platforms influences the quality of online democratic discourse. At its core, the interdisciplinary team developed an innovative “digital twin” approach: instead of experimenting on real users, simulations model how different platform architectures and ranking algorithms influence the quality of online debate and exposure to harmful content.
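The digital-twin idea of simulating ranking algorithms instead of experimenting on real users can be illustrated with a toy model. Everything below is a sketch of the general technique, not TWON's actual implementation: the function names, the engagement boost given to harmful posts, and the two ranking rules are all assumptions made for illustration.

```python
import random

def make_posts(n, harmful_rate=0.1, harm_boost=0.5, rng=None):
    """Generate toy posts; harmful posts receive an engagement boost
    (an assumption standing in for real attention dynamics)."""
    rng = rng or random.Random(0)
    posts = []
    for t in range(n):
        harmful = rng.random() < harmful_rate
        engagement = rng.random() + (harm_boost if harmful else 0.0)
        posts.append({"t": t, "engagement": engagement, "harmful": harmful})
    return posts

def feed(posts, ranking, k=10):
    """Build a top-k feed under a given ranking rule."""
    if ranking == "chronological":
        ordered = sorted(posts, key=lambda p: -p["t"])
    elif ranking == "engagement":
        ordered = sorted(posts, key=lambda p: -p["engagement"])
    else:
        raise ValueError(f"unknown ranking: {ranking}")
    return ordered[:k]

def harmful_share(posts, ranking, k=10):
    """Fraction of the top-k feed flagged as harmful."""
    return sum(p["harmful"] for p in feed(posts, ranking, k)) / k

posts = make_posts(1000, rng=random.Random(42))
print("chronological:", harmful_share(posts, "chronological"))
print("engagement:   ", harmful_share(posts, "engagement"))
```

Under these toy assumptions, engagement-based ranking surfaces noticeably more harmful content in the top of the feed than a chronological ordering, which is the kind of design effect such simulations let researchers quantify without exposing real users.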
By translating these findings into policy recommendations and discussing them in participatory Citizen Labs across Europe, TWON contributed to evidence-based policymaking and digital citizenship. The consortium brought together leading European research institutions, including the University of Amsterdam, Karlsruhe Institute of Technology (KIT), University of Belgrade, University of Trier, FZI Forschungszentrum Informatik, Jožef Stefan Institute, and the Robert Koch Institute (RKI).
We were honoured to welcome Katarina Barley, Vice-President of the European Parliament, and Raegan MacDonald (Aspiration Tech) as keynote speakers and panelists. Katarina Barley reflected on the importance of existing regulatory frameworks such as the Digital Services Act (DSA), while emphasising the continued need for research initiatives like TWON to support their implementation. Raegan MacDonald highlighted the importance of connecting governance debates with perspectives from civil society and the tech policy community.
The event focused in particular on how Europe can strengthen its capacity to research, regulate, and strategically shape online platforms. This included discussions on establishing effective research infrastructures for platform data access and data donation, reinforcing European digital sovereignty over communication infrastructures, and identifying windows of opportunity in the evolving global digital policy debate.
In a joint panel moderated by Cosima Pfannschmidt (FZI), Achim Rettinger (University of Trier), Michael Mäs (KIT), and François t’Serstevens (University of Amsterdam) discussed TWON’s core approach, key findings, and policy recommendations. The project demonstrated how simulation-based analyses can support policymakers in understanding the societal effects of platform design.
Moderated by Benjamin Fischer (CeMAS), the panel with Dr Jonas Fegert (FZI), Katarina Barley, and Raegan MacDonald explored how research and policy can jointly shape Europe’s digital future. Participants emphasised the importance of effectively enforcing existing regulation on very large online platforms, developing democratic European platform alternatives, designing digital tools that are engaging without fostering addictive dynamics, and strengthening transparency through improved access to platform data.
Bringing together policymakers, regulators, researchers, journalists, and civil society practitioners, the event created an important space to connect research insights with ongoing legislative and enforcement debates in Brussels. Before and after the stage programme, participants explored interactive project demonstrators and engaged in informal exchanges with project partners from across Europe.
We thank all speakers and participants for contributing to an insightful and productive discussion.
After three years of interdisciplinary research, the TWON – Twin of Online Social Networks project concluded with a final event on 28 January 2026 at Publix Berlin. At the event, the international consortium presented key findings on how online platforms shape democratic discourse and how mechanisms of discourse manipulation emerge in digital environments.
Hosted at Publix, the closing event brought together researchers as well as representatives from politics, academia, journalism, and civil society. Led by the FZI Research Center for Information Technology, the consortium reflected on the project’s results and discussed implications for future research, platform governance, and regulation.
TWON is an EU-funded research project that investigates how the design of online platforms influences the quality of democratic online discourse. To address this question, the interdisciplinary consortium developed a novel research approach based on a digital twin of online social networks, enabling simulations of platform mechanisms such as ranking systems without experimenting on real users. The findings were also translated into policy recommendations and discussed in participatory Citizen Labs with citizens across Europe.
The event was opened by Dr Jonas Fegert, who emphasized the central role of online platforms and their underlying mechanisms in shaping public debate and democratic participation.
In the keynote, Dr Annette Zimmermann explored how platform mechanisms influence discourse dynamics on social media, including practices such as dog whistling and self-censorship. She highlighted how these dynamics affect public deliberation and outlined important avenues for future research.
Parsa Marvi, Member of the German Bundestag, underlined the relevance of TWON for understanding democratic discourse in the digital age. He stressed the importance of evidence-based research for effective and responsible platform regulation.
Key research results from the project were presented and discussed in a session moderated by Cosima Pfannschmidt, with contributions from Dr Alenka Guček, Prof Damian Trilling, Prof Achim Rettinger, and Prof Michael Mäs, among others.
The event also addressed broader questions of how online social networks can be researched and governed at the societal level. Particular attention was given to future avenues for evidence-based policymaking, including data access under the Digital Services Act (DSA), data donation frameworks, and current opportunities in European and global digital policy debates.
The evening concluded with a panel discussion titled “What’s next?”, featuring Annette Zimmermann, Parsa Marvi, Svea Windwehr, and Damian Trilling, moderated by Jonas Fegert. The discussion focused on concrete recommendations from research, policy, and implementation, particularly in relation to the Digital Services Act, and explored how online social networks can be held accountable in times of increasing geopolitical tensions.
Throughout the evening, participants engaged with interactive project demonstrators and poster stations, discussed research findings, and exchanged perspectives with TWON partners from across Europe.
TWON thanks all speakers, panelists, participants, and project partners for their valuable contributions and the close collaboration over the past three years.
As geopolitical tensions increasingly play out online, the need for a democratic digital public sphere has never been more urgent. Political interests, platform governance choices, and regulatory gaps all shape how online debate unfolds, but what concretely needs to change?
These questions were at the heart of the TWON Policy Hackathon, which brought together experts from research, policymaking, digital law, platform governance, content moderation, and civil society to develop actionable, empirically grounded policy recommendations for online social networks.
The hackathon addressed the shared question of how online social networks must evolve in order to better enable democratic online debate and to safeguard democratic societies. Throughout the afternoon, participants exchanged perspectives on current and future challenges related to online social networks and their governance.
Building on the work of the TWON project, the hackathon connected research on platform design choices and online debate with policy perspectives and practical experience. Draft policy recommendations developed within the project served as a starting point for discussion and were critically examined during the workshop sessions.
The agenda combined a spotlight round on pressing challenges with two structured workshop sessions. Working in professionally mixed groups, participants discussed the current state of knowledge, experiences from practice and regulation, and open research questions. The second workshop session focused on identifying regulatory needs and developing policy recommendations.
We would like to thank all participants for their valuable contributions, thoughtful discussions and engagement throughout the event.
Last week, the TWON consortium came together in Berlin to advance ongoing work on democratic online social networks and to strengthen the project’s dialogue with policy and civil society stakeholders.
Across a series of internal and public sessions, the meeting focused on how platform design and algorithmic choices shape democratic discourse, contribute to polarization, and influence the spread of disinformation. A dedicated Policy Hackathon provided space for consortium members and external experts to explore regulatory challenges and identify priorities for future policy-oriented work.
In addition, TWON hosted a public dissemination event at Publix, bringing together participants from politics, academia, and civil society to discuss responsible platform design and the broader implications of the project’s research.
The week also included TWON’s General Assembly, where the consortium reflected on progress and lessons learned and discussed how the project’s insights can support future research and collaboration beyond TWON.
The programme concluded with a visit to the German Chancellery and further exchanges with colleagues from the policy sphere, highlighting the importance of connecting research and governance perspectives in the field of online platforms.
The TWON consortium thanks all contributors for their engagement, constructive discussions, and continued collaboration.
EACL 2026 in Rabat: New Publications on Simulating Social Media Users
At this year’s European Chapter of the Association for Computational Linguistics (EACL) in Rabat, Morocco, Simon Münker, Nils Schwager and Achim Rettinger presented two new TWON papers on the simulation of social media users with Large Language Models (LLMs).
Their work contributes to a growing research field at the intersection of Natural Language Processing, Computational Social Science and social media analysis. Both publications examine how well LLMs can reproduce communication patterns on social networks and under which conditions such simulations can be considered empirically realistic.
The first paper, “Don’t Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism”, formalizes the task of simulating social media users and evaluates it through a case study based on German and English Twitter data. The study shows that many current simulation approaches in Computational Social Science rely on comparatively simple methods whose scientific validity is difficult to justify without systematic benchmarking. The results also reveal a clear language bias: English communication patterns are significantly easier to simulate than German ones.
The second paper, “Towards Simulating Social Media Users with LLMs: Evaluating the Operational Validity of Conditioned Comment Prediction”, co-authored with Alistair Plum, builds on this work and extends the analysis to Luxembourgish. The findings indicate that Luxembourgish is even more challenging to simulate than German. This points to a broader issue for multilingual AI research: the smaller a language and the less data available, the more difficult it becomes to model realistic online behavior.
Together, the two publications underline a central challenge for the use of LLMs in social media research. If generative models are used to study online communication, their empirical realism must be carefully validated, especially across different languages and data contexts. The findings make an important contribution to more rigorous, multilingual, and methodologically sound approaches in AI-supported Computational Social Science.
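The kind of empirical-realism check these papers call for can be sketched as a distributional comparison between real and model-generated comments. This is a generic illustration, not the papers' actual benchmark: the metric (Jensen-Shannon divergence over unigram distributions), the naive whitespace tokenisation, and the tiny example corpora are all assumptions.

```python
from collections import Counter
import math

def token_dist(texts):
    """Unigram distribution over whitespace tokens (naive tokenisation)."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2): 0.0 for identical
    distributions, 1.0 for distributions with disjoint support."""
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in set(p) | set(q)}
    def kl(a):
        return sum(pa * math.log2(pa / m[w]) for w, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Hypothetical corpora: real user comments vs. LLM-generated ones.
real = ["great point, totally agree", "this is misleading imo"]
simulated = ["great point, i agree", "this claim is misleading"]
score = jensen_shannon(token_dist(real), token_dist(simulated))
print(round(score, 3))
```

A lower score means the simulated comments are distributionally closer to the real ones; comparing such scores across languages is one simple way the language bias reported above (English easier than German, German easier than Luxembourgish) could be made measurable.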
From 6 to 9 January 2026, TWON researcher Simon Münker presented his paper at the Hawaii International Conference on System Sciences (HICSS), one of the leading international conferences in the field of information systems and digital innovation.
The paper addresses societal risks associated with open-source Large Language Models and evaluates the effectiveness of existing safety and guardrail mechanisms. Together with his co-author Fabio Sartori, Simon Münker received the Best Paper Award for this research.
The study systematically examines guardrail vulnerabilities across seven widely used open-source LLMs. Using advanced natural language processing classification methods, it identifies recurring patterns of harmful content generation under adversarial prompting. These vulnerabilities were first observed during earlier research activities within the TWON project, where initial experiments revealed persistent weaknesses in model safety mechanisms.
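The evaluation pipeline described here (adversarial prompting followed by automatic classification of the outputs) can be sketched as a small harness. Every name below is a placeholder: the trivial keyword check stands in for the trained NLP classifiers the study actually used, and the stub generator stands in for real open-source LLM calls.

```python
def classify_harmful(response, markers=("harmful",)):
    """Placeholder classifier; the study used trained NLP classifiers."""
    return any(m in response.lower() for m in markers)

def harmful_rate(model_name, prompts, generate):
    """Fraction of prompts for which a model's output is flagged as harmful."""
    flags = [classify_harmful(generate(model_name, p)) for p in prompts]
    return sum(flags) / len(flags)

# Stub generator standing in for real model calls (assumption for the sketch).
def stub_generate(model, prompt):
    if model == "model-a" and "adversarial" in prompt:
        return "harmful output"
    return "I can't help with that."

prompts = ["benign question", "adversarial rephrasing"]
for model in ["model-a", "model-b"]:
    print(model, harmful_rate(model, prompts, stub_generate))
```

Running the same fixed prompt set against each model yields a comparable per-model harmful-response rate, which is the shape of evidence behind the cross-model comparison reported below.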
The findings show that several prominent models consistently produce content classified as hateful or offensive. This raises concerns about the potential implications of open-source LLMs for democratic discourse and social cohesion. In particular, the results challenge public safety assurances by model developers and point to discrepancies between stated safeguards and observed model behavior.
The research contributes to ongoing discussions on responsible AI development and the governance of AI systems that shape online communication and public discourse. It underlines the need for more robust, transparent, and empirically tested safety mechanisms in open-source AI ecosystems.
The paper was presented as part of the Digital Democracy Minitrack at HICSS 2026.