I am a Postdoctoral Research Fellow in Political Science at the University of Zurich and the Digital Democracy Lab. Previously, I was an Iseult Honohan Scholar at University College Dublin and the Connected_Politics Lab, where I obtained my PhD in Politics and International Relations. My research applies computational social science methods, including quantitative text analysis, network analysis, and computer vision, to study the social media sphere and answer questions related to digital political communication.
Umansky, N. (2022). Who gets a say in this? Speaking security on social media. New Media & Society. https://doi.org/10.1177/14614448221111009.
Cross, J. P., Greene, Umansky, & Calò. (2023). Speaking in unison? Explaining the role of agenda-setter constellations in the ECB policy agenda using a network-based approach. Journal of European Public Policy, 1-27. https://doi.org/10.1080/13501763.2023.2242891.
Umansky, N. (2023). Spreading Like Wildfire: The Securitization of the Amazon Rainforest Fires on Twitter. International Journal of Communication, 18, 21. Retrieved from https://ijoc.org/index.php/ijoc/article/view/20420/4419
While the idea that securitization is a relational, continuous process is not new, it remains unclear how this reconceptualization can be applied to the systematic study of emerging security threats. To address this problem, this study offers an innovative adaptation of Discourse Network Analysis (Leifeld, 2016), developing a formalized model that facilitates the exploration of the (trans)formation and evolution of (meanings of) security. The core purpose of this study is to strengthen modern conceptualizations of securitization that move beyond the 'speech act' by bridging the gap between theoretical advancements and their limited empirical application. Leveraging the empirical opportunities afforded by the advent of social media, an example is provided to demonstrate the usefulness of this formalized model and to illustrate the applicability of networks of security for the study of securitization as an intersubjective, dynamic process.
Launched in 2017, TikTok has rapidly gained global popularity, extending beyond entertainment to become a hub for ideological formation and political activism, revolutionizing the way politicians interact with constituents. Yet, despite extensive research on politicians’ use of social media platforms like Twitter and Facebook, TikTok’s distinct platform-specific languages and features have received limited attention. Its short video clips, algorithmic content recommendations, and mobile-first interface present new challenges and opportunities for politicians aiming to communicate effectively with society. This paper addresses this gap by providing a first examination of the ways in which U.S. politicians embrace and adapt to the TikTok platform. Employing a multimodal approach that studies video, text, and audio, we collect and analyse a novel dataset of TikTok videos created by U.S. Governors and Members of Congress to understand how they use comedic, documentary, communal, explanatory, interactive, and meta communication styles to connect with their audience. Our analyses reveal that U.S. politicians primarily use TikTok as a platform to communicate about political issues, often adopting explanatory and documentary styles to articulate their policies. However, the comedic style significantly boosts engagement, especially for female politicians, who see a fourfold increase in engagement when producing comedic content compared to male counterparts. This presents a challenge for politicians: balancing humor's allure with the seriousness expected of them. Our findings provide valuable insights into digital political communication on emerging platforms like TikTok and the role of multiple message modalities in harnessing online attention and engagement.
Out of all social media platforms, Twitter has been the most generous in sharing its data with academia and industry to date. However, like other platforms, Twitter has been increasingly criticized for its lack of transparency regarding its content moderation practices and the reasons behind its decisions to restrict or delete certain content. Addressing these concerns, in September 2022, Twitter launched the 'Twitter Moderation Research Consortium' (TMRC), which grants approved researchers access to data on numerous accounts that have been deleted due to violations of Twitter’s Platform Manipulation and Spam Policy. In this paper, we conduct an in-depth investigation of Twitter's moderation decisions in line with its community guidelines using the TMRC14 and TMRC15 datasets, which contain Tweets and user information belonging to accounts deleted from 2016 to October 2022. By employing descriptive analytics, network analysis, and topic modeling, we uncover alignment between Twitter's suspension decisions and its reported moderation practices while also noting instances of opacity and geographical disparities in moderation reasons. Despite the noisiness of these datasets, our findings predominantly confirm the concurrence of account suspensions with community guidelines, shedding some first light on the black box of social media content moderation. However, the results also underscore the need for greater transparency, suggesting avenues for understanding the complexities surrounding content moderation practices and emphasizing the need for continued investments in transparency initiatives like the TMRC.
Efforts to curb online hate speech depend on our ability to reliably detect it at scale. Previous studies have highlighted the strong zero-shot classification performance of large language models (LLMs), offering a potential tool to efficiently identify harmful content. Yet for complex and ambivalent tasks like hate speech detection, pre-trained LLMs can be insufficient and carry systemic biases. Domain-specific models, fine-tuned for the given task and empirical context, could help address these issues but, as we demonstrate, the quality of the data used for fine-tuning decisively matters. In this study, we fine-tuned GPT-3.5 using a unique corpus of online comments annotated by diverse groups of coders with varying annotation quality: research assistants, activists, two kinds of crowd workers, and citizen scientists. We find that only annotations from those groups of annotators that are better than zero-shot GPT-3.5 at recognizing hate speech improve the classification performance of the fine-tuned LLM. Specifically, fine-tuning using the two highest-quality annotator groups -- research assistants and Prolific crowd workers -- boosts classification performance by increasing the model's precision without notably sacrificing the good recall of zero-shot GPT-3.5. In contrast, low-quality annotations do not improve, or even decrease, the ability to identify hate speech.
van der Velden, M.A.C.G., Umansky, N., & Pipal, C. (forthcoming). “Sentiment Analysis”, in Nai, Grömping, and Wirz (eds) Encyclopedia of Political Communication. Edward Elgar Publishing.
Umansky, N., & Puschmann, C. (forthcoming). “Algorithmic Curation”, in Nai, Grömping, and Wirz (eds) Encyclopedia of Political Communication. Edward Elgar Publishing.
Umansky, N. (forthcoming). “Speaking Security on Social Media Networks”, in Reuss and Stetter (eds) Social Media and Conflict. Palgrave Macmillan.
Multimodal approaches for social media studies: A methodological recap (with Andreu Casas and Christian Pipal).
#gay: Analysing LGBTQ+ content on TikTok (with Alberto López and Christian Pipal).
Elena Reinaga. 2016. “If I were born again I would still be a sex worker.” P. Purdy and N. Umansky. openDemocracy.
Natalia Umansky. 2016. What is the Effect of Terrorist Attacks on the Securitization of Migration? Case Studies from the UK and Spain. Institut Barcelona d’Estudis Internacionals, Student Paper Series.
If you would like to get access to the latest version of a paper, feel free to send me an e-mail.
Digital Communication and Politics (University of Zurich: Spring 2024)
Internet and the Global South (University of Zurich: Spring 2023) [Syllabus]
Critical Security Policy (University of Zurich: Autumn 2022, Autumn 2023) [Syllabus]
Theories of International Security and Critical Security Studies (University College Dublin: Spring 2022) [Syllabus]
2024 - University of Lucerne: Data Wrangling and Visualization with Tidyverse
2024 - University of Lucerne: Automated Image and Video Data Analysis
2023/2024 - GESIS Fall Seminar in Computational Social Science: Introduction to R (with Christian Pipal)
2022 - COMPTEXT Conference: Collecting and Analyzing Twitter Data
2020 - The Connected_Politics Lab: Creating and Hosting an Academic Personal Website Using Hugo and GitHub (with Stefan Müller).