Measuring Social Media Polarization Using Large Language Models and Heuristic Rules

Reading time: 5 minutes
...

📝 Original Info

  • Title: Measuring Social Media Polarization Using Large Language Models and Heuristic Rules
  • ArXiv ID: 2601.00927
  • Date: 2026-01-02
  • Authors: Jawad Chowdhury, Rezaur Rashid, Gabriel Terejanu

📝 Abstract

Understanding affective polarization in online discourse is crucial for evaluating the societal impact of social media interactions. This study presents a novel framework that leverages large language models (LLMs) and domain-informed heuristics to systematically analyze and quantify affective polarization in discussions on divisive topics such as climate change and gun control. Unlike most prior approaches that relied on sentiment analysis or predefined classifiers, our method integrates LLMs to extract stance, affective tone, and agreement patterns from large-scale social media discussions. We then apply a rule-based scoring system capable of quantifying affective polarization even in small conversations consisting of single interactions, based on stance alignment, emotional content, and interaction dynamics. Our analysis reveals distinct polarization patterns that are event dependent: (i) anticipation-driven polarization, where extreme polarization escalates before well-publicized events, and (ii) reactive polarization, where intense affective polarization spikes immediately after sudden, high-impact events.
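To make the rule-based scoring step concrete, here is a minimal, hypothetical sketch of how a single post–reply interaction could be scored from the three LLM-extracted signals (stance, affective tone, agreement). The function, weights, and thresholds below are illustrative assumptions and are not taken from the paper:

```python
# Hypothetical sketch of a heuristic affective-polarization score for one
# post-reply interaction, using the three signals the paper's LLM stage
# extracts: stance, affective tone, and agreement. Weights are illustrative.

def interaction_polarization(stance_post: str,
                             stance_reply: str,
                             reply_tone: float,
                             agrees: bool) -> float:
    """Return a score in [0, 1]; higher means more affectively polarized.

    stance_*  : "support", "oppose", or "neutral" toward the topic
    reply_tone: negative emotional intensity of the reply in [0, 1]
    agrees    : whether the reply agrees with the post it answers
    """
    score = 0.0
    # Opposing stances interacting is the core polarization signal.
    if {stance_post, stance_reply} == {"support", "oppose"}:
        score += 0.5
    # Hostile affect toward the other side amplifies the signal.
    score += 0.3 * reply_tone
    # Explicit disagreement captures the interaction dynamics.
    if not agrees:
        score += 0.2
    return min(score, 1.0)


# Example: an "oppose" reply with strongly negative tone disagreeing with
# a "support" post scores near the top of the scale.
print(interaction_polarization("support", "oppose", reply_tone=0.9, agrees=False))
```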

💡 Deep Analysis

Figure 1 (see the Image Gallery below)

📄 Full Content

Measuring Social Media Polarization Using Large Language Models and Heuristic Rules

Jawad Chowdhury, Rezaur Rashid, and Gabriel Terejanu
Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
{mchowdh5, mrashid1, gabriel.terejanu}@charlotte.edu

Abstract. Understanding affective polarization in online discourse is crucial for evaluating the societal impact of social media interactions. This study presents a novel framework that leverages large language models (LLMs) and domain-informed heuristics to systematically analyze and quantify affective polarization in discussions on divisive topics such as climate change and gun control. Unlike most prior approaches that relied on sentiment analysis or predefined classifiers, our method integrates LLMs to extract stance, affective tone, and agreement patterns from large-scale social media discussions. We then apply a rule-based scoring system capable of quantifying affective polarization even in small conversations consisting of single interactions, based on stance alignment, emotional content, and interaction dynamics. Our analysis reveals distinct polarization patterns that are event dependent: (i) anticipation-driven polarization, where extreme polarization escalates before well-publicized events, and (ii) reactive polarization, where intense affective polarization spikes immediately after sudden, high-impact events. By combining AI-driven content annotation with domain-informed scoring, our framework offers a scalable and interpretable approach to measuring affective polarization. The source code is publicly available at: https://github.com/hasanjawad001/llm-social-media-polarization

Keywords: Affective Polarization, Social Media Discourse, Large Language Models, Stance Detection, AI for Social Impact

1 Introduction

The rise of social media platforms in recent years has transformed political discourse by enabling real-time information exchange and broader audience engagement [9,25]. This transformation, driven by the evolving media landscape, continues to shape how information is produced, distributed, and consumed while simultaneously redefining how individuals interact and maintain connections in digital spaces [4–6,16]. While these platforms facilitate engagement, they have also intensified ideological divisions, as algorithmic content curation reinforces preexisting beliefs by prioritizing content aligned with users’ prior views, limiting exposure to diverse perspectives. This selective exposure contributes to affective polarization, where individuals develop strong positive emotions toward their in-group members while exhibiting hostility toward those from opposing groups or with opposing views [8,19,26]. Studies suggest that such polarization is not only shaped by political ideology but also by the emotional tone, discourse structure, and interaction patterns within online discussions. Affective polarization has been linked to increased political radicalization, reduced bipartisan cooperation, and the spread of misinformation [31,33]. Understanding its dynamics is crucial for evaluating the broader societal implications of online discourse.

Social media platforms, such as Twitter (now X), can further amplify these divisions through algorithmic content curation, which prioritizes engagement-driven interactions and often promotes sensationalized, polarizing content [5,21,24]. Research suggests that online echo chambers reinforce polarization by predominantly exposing users to like-minded perspectives while restricting interaction with opposing viewpoints [10,12]. However, while exposure to counter-ideological content has the potential to correct misperceptions and reduce polarization in some cases [7], it can also provoke defensive responses, particularly in contentious social movements, where ideological conflict is often accompanied by toxic interactions and digital aggression [27]. These dynamics highlight the complex role of social media in shaping ideological divides and underscore the need for robust methodologies to quantify and analyze affective polarization in online discourse.

Existing approaches to measuring affective polarization in social media largely rely on sentiment analysis, stance detection, and/or network-based polarization indices [11,17,23,29]. Sentiment analysis techniques classify text as positive, negative, or neutral, providing a general sense of emotional tone but often failing to capture the complexity of political discourse, such as sarcasm or implicit bias [22]. Stance detection methods aim to determine whether a user supports, opposes, or remains neutral on an issue, yet they frequently struggle with linguistic nuances, especially in highly polarized debates where positions are subtly framed [1,18]. Recent studies have combined multimodal signals [28] or social network structures [14
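Stance, affective tone, and agreement are exactly the signals this framework delegates to an LLM rather than to a dedicated classifier. As an illustration only, here is a minimal sketch of what such an annotation step could look like; the prompt wording and the `complete` callable are placeholder assumptions, not the paper's actual prompts or API:

```python
import json

def annotate_interaction(post: str, reply: str, complete) -> dict:
    """Label one post-reply pair with stance, tone, and agreement.

    `complete` is any callable that takes a prompt string and returns the
    model's text response (e.g., a thin wrapper around an LLM API).
    """
    prompt = (
        "You are annotating a social media exchange about a divisive topic.\n"
        f"POST: {post}\nREPLY: {reply}\n\n"
        "Return JSON with keys:\n"
        '  "stance_post"  : "support" | "oppose" | "neutral",\n'
        '  "stance_reply" : "support" | "oppose" | "neutral",\n'
        '  "reply_tone"   : negative emotional intensity from 0.0 to 1.0,\n'
        '  "agrees"       : true if the reply agrees with the post, else false.'
    )
    return json.loads(complete(prompt))

# Usage with any LLM wrapper, feeding the labels into the scoring sketch above:
#   labels = annotate_interaction(post_text, reply_text, complete=my_llm_call)
#   score  = interaction_polarization(labels["stance_post"], labels["stance_reply"],
#                                     labels["reply_tone"], labels["agrees"])
```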

📸 Image Gallery

figure1.png figure2.png figure3.png figure4.png met_pipeline.png

Reference

This content is AI-processed based on open access ArXiv data.
