Natalia Umansky

Postdoctoral Research Fellow

University of Zurich

About me

I am a Postdoctoral Research Fellow in Political Science at the University of Zurich and the Digital Democracy Lab. Previously, I was an Iseult Honohan Scholar at University College Dublin and the Connected_Politics Lab, where I obtained my PhD in Politics and International Relations. My research applies Computational Social Science methods, including quantitative text analysis, network analysis, and computer vision, to study the social media sphere and answer questions related to Digital Political Communication.

Interests

  • Digital Political Communication
  • Computational Social Science
  • Social Media
  • Hate Speech Detection
  • Multimodal Content Analysis
  • Algorithmic Curation

Research

Peer-Reviewed Publications



Under Review


  • Networks of Security.

While the idea that securitization is a relational, continuous process is not new, it remains unclear how this reconceptualization can be applied to systematically study the emergence of security threats. To address this problem, this study offers an innovative adaptation of Discourse Network Analysis (Leifeld, 2016) to develop a formalized model that facilitates the exploration of the (trans)formation and evolution of (meanings of) security. The core purpose of this study is to strengthen modern conceptualizations of securitization that move beyond the 'speech act' by addressing the gap between these theoretical advancements and their limited empirical applications. By leveraging the empirical opportunities afforded by the advent of social media, an example is provided to demonstrate the usefulness of this formalized model and to illustrate the applicability of networks of security for the study of securitization as an intersubjective, dynamic process.


  • Dances, Duets, and Debates: Analysing political communication and viewer engagement on TikTok (with Christian Pipal).

Launched in 2017, TikTok has rapidly gained global popularity, extending beyond entertainment to become a hub for ideological formation and political activism, revolutionizing the way politicians interact with constituents. Yet, despite extensive research on politicians’ use of social media platforms like Twitter and Facebook, TikTok’s distinct platform-specific languages and features have received limited attention. Its short video clips, algorithmic content recommendations, and mobile-first interface present new challenges and opportunities for politicians aiming to communicate effectively with society. This paper addresses this gap by providing a first examination of the ways in which U.S. politicians embrace and adapt to the TikTok platform. Employing a multimodal approach that studies video, text, and audio, we collect and analyse a novel dataset of TikTok videos created by U.S. Governors and Members of Congress to understand how they use comedic, documentary, communal, explanatory, interactive, and meta communication styles to connect with their audience. Our analyses reveal that U.S. politicians primarily use TikTok as a platform to communicate about political issues, often adopting explanatory and documentary styles to articulate their policies. However, the comedic style significantly boosts engagement, especially for female politicians, who see a fourfold increase in engagement when producing comedic content compared to their male counterparts. This presents a challenge for politicians: balancing humor's allure with the seriousness expected of them. Our findings provide valuable insights into digital political communication on emerging platforms like TikTok and the role of multiple message modalities in harnessing online attention and engagement.


  • The blackbox of social media content moderation: A first look into a novel Twitter dataset (with Emma Hoes and Maël Kubli).

Out of all social media platforms, Twitter has to date been the most generous in sharing its data with academia and industry. However, like other platforms, Twitter has been increasingly criticized for its lack of transparency regarding its content moderation practices and the reasons behind its decisions to restrict or delete certain content. Addressing these concerns, in September 2022 Twitter launched the 'Twitter Moderation Research Consortium' (TMRC), which grants approved researchers access to data on numerous accounts that have been deleted due to violations of Twitter's Platform Manipulation and Spam Policy. In this paper, we conduct an in-depth investigation of Twitter's moderation decisions in line with its community guidelines using the TMRC14 and TMRC15 datasets, which contain Tweets and user information belonging to accounts deleted between 2016 and October 2022. By employing descriptive analytics, network analysis, and topic modeling, we uncover alignment between Twitter's suspension decisions and its reported moderation practices while also noting instances of opacity and geographical disparities in moderation reasons. Despite the noisiness of these datasets, our findings predominantly confirm that account suspensions accord with community guidelines, shedding some first light on the black box of social media content moderation. However, the results also underscore the need for greater transparency, suggesting avenues for understanding the complexities surrounding content moderation practices and emphasizing the need for continued investment in transparency initiatives like the TMRC.


  • Enhancing Hate Speech Detection with Fine-Tuned Large Language Models Requires High Quality Data (with Maël Kubli, Karsten Donnay, Fabrizio Gilardi, Dominik Hangartner, Ana Kotarcic, Laura Bronner, Selina Kurer, and Philip Grech).

Efforts to curb online hate speech depend on our ability to reliably detect it at scale. Previous studies have highlighted the strong zero-shot classification performance of large language models (LLMs), offering a potential tool to efficiently identify harmful content. Yet for complex and ambivalent tasks like hate speech detection, pre-trained LLMs can be insufficient and carry systemic biases. Domain-specific models, fine-tuned for the given task and empirical context, could help address these issues but, as we demonstrate, the quality of the data used for fine-tuning decisively matters. In this study, we fine-tuned GPT-3.5 using a unique corpus of online comments annotated by diverse groups of coders with varying annotation quality: research assistants, activists, two kinds of crowd workers, and citizen scientists. We find that only annotations from those groups of annotators that are better than zero-shot GPT-3.5 at recognizing hate speech improve the classification performance of the fine-tuned LLM. Specifically, fine-tuning with the two highest-quality annotator groups -- research assistants and Prolific crowd workers -- boosts classification performance by increasing the model's precision without notably sacrificing the good recall of zero-shot GPT-3.5. In contrast, low-quality annotations do not improve, or even decrease, the ability to identify hate speech.



Chapters


  • van der Velden, M.A.C.G., Umansky, N., & Pipal, C. (forthcoming). “Sentiment Analysis”, in Nai, Grömping, and Wirz (eds) Encyclopedia of Political Communication. Edward Elgar Publishing.

  • Umansky, N., & Puschmann, C. (forthcoming). “Algorithmic Curation”, in Nai, Grömping, and Wirz (eds) Encyclopedia of Political Communication. Edward Elgar Publishing.

  • Umansky, N. (forthcoming). “Speaking Security on Social Media Networks”, in Reuss and Stetter (eds) Social Media and Conflict. Palgrave Macmillan.


Working Papers


  • Dancing to the partisan beat: A comparative analysis of political parties’ TikTok use across Europe (with Christian Pipal, Johannes Gruber, Jason Greenfield, and Aleksandra Urman).

Work in Progress


  • Multimodal approaches for social media studies: A methodological recap (with Andreu Casas and Christian Pipal).

  • #gay: Analysing LGBTQ+ content on TikTok (with Alberto López and Christian Pipal).


Non-peer reviewed publications



If you would like access to the latest version of a paper, feel free to send me an e-mail.

Teaching

Module instructor: Graduate Level


  • Platform Governance: Regulating Social Media (University of Zurich: Spring 2023, Spring 2024) [Syllabus]

Module instructor: Undergraduate Level


  • Digital Communication and Politics (University of Zurich: Spring 2024)

  • Internet and the Global South (University of Zurich: Spring 2023) [Syllabus]

  • Critical Security Policy (University of Zurich: Autumn 2022, Autumn 2023) [Syllabus]

  • Theories of International Security and Critical Security Studies (University College Dublin: Spring 2022) [Syllabus]


Workshop Instructor



Teaching Assistant at University College Dublin


  • Spring 2020, Autumn 2020, Autumn 2021: EU Politics
  • Autumn 2019, Autumn 2020: Foundations of Political Theory and International Relations
  • Spring 2019: Foundations of Contemporary Politics
  • Autumn 2018: Research Methods in Political Science

Contact

  • umansky@ipz.uzh.ch
  • Department of Political Science, University of Zurich, Affolternstrasse 56, 8050 Zurich, Switzerland