Information sharing and political polarisation on social media: The role of falsehood and partisanship

We explore whether misinformation from political elites (i.e., members of the US Congress) and extreme partisan information from media outlets generate greater engagement than accurate information and non-extreme partisan information. We also investigate how exposure to these information types leads to negative emotions (e.g., anger) in individuals and how these emotions are associated with attitude polarisation. To this end, we analysed fact-checked tweets from political elites, tweets from media outlets and replies to those tweets. Together, these tweets received more than 100 000 replies and were shared more than two million times. We also conducted two online experiments. Our field studies reveal that misinformation and extreme partisan information are associated with higher levels of negative emotions and greater engagement than accurate information and non-extreme partisan information. Our data also show that, while negative emotions in response to extreme partisan information are higher among social media users at the ideological extreme than among those at the ideological centre, there is no difference in the two groups' level of negative emotions in response to misinformation. The online experiments demonstrate that exposure to misinformation and extreme partisan information elicits stronger negative emotions than exposure to accurate information and non-extreme partisan information. These negative emotions, in turn, contribute to attitude polarisation. Our work makes practical and theoretical contributions concerning social media information sharing, negativity and political polarisation. We also provide future research avenues with associated research questions.

KEYWORDS: misinformation, negative emotions, partisanship, political ideology, political polarisation, sharing

| INTRODUCTION
The proportion of Americans who consistently express conservative or liberal opinions has doubled from about 10% to over 20% within the first two decades of the 21st century (De-Wit et al., 2019; Pew Research, 2021). These figures suggest a divergence of political attitudes towards the ideological extremes, referred to as political polarisation (Beam et al., 2018; Kim & Kim, 2019). This polarisation is often fuelled by information spread through social media platforms and has been shown to be a barrier to solving pressing economic, environmental and social challenges (Tucker et al., 2018). Misinformation (i.e., false or misleading information, regardless of whether there is intent to mislead) and extreme partisan information (i.e., information that may or may not be accurate but contains extreme political views) may play a significant role as contributors to political polarisation (Funke et al., 2019; Garrett et al., 2019). Both types of information are prevalent in a political context, where they tend to generate interest and have the power to polarise opinions. For example, after the 2020 US elections, former US president Donald Trump spread misinformation on social media claiming he had won the election and that the opposition had manipulated the official election results (Jurkowitz, 2021).
Incited by these claims, Trump supporters breached the US Capitol to prevent officials from certifying the election results. As this event shows, spreading misinformation can have severe consequences, stemming at least partially from inciting extreme views and a growing ideological gap between people's collective viewpoints (Hopp et al., 2020; Jilani & Smith, 2019; Del Vicario et al., 2018). Of course, the negative consequences of misinformation and extreme partisan information are not limited to a political context. For example, during the pandemic, hundreds of deaths can be traced back to misinformation on social media claiming that consuming methanol orally can cure COVID-19 (Coleman, 2020). Thus, it is key to better understand the role of information systems (IS) in potentially facilitating and intensifying societal issues. Specifically, it is important to learn more about how people share misinformation and extreme partisan information on social media, including how these information types relate to negative emotions in individuals and to political polarisation. Although IS researchers have explored the misinformation and extreme partisan information phenomenon (e.g., Barfar, 2019; King & Wang, 2021; Turel & Osatuyi, 2021), studies on the (1) actual engagement with these types of information, (2) individual emotional responses to these types of information, (3) role of political ideology and (4) association with political polarisation remain limited. There is also scarce literature on how the influence of misinformation and extreme partisan information compares to that of accurate information and non-extreme partisan information. With this in mind, we ask the following research questions:

RQ1. Is exposure to misinformation from political elites and extreme partisan information from media outlets associated with higher rates of sharing than exposure to accurate information from political elites and non-extreme partisan information from media outlets?

RQ2. Is exposure to misinformation from political elites and extreme partisan information from media outlets associated with higher levels of attitude polarisation than exposure to accurate information from political elites and non-extreme partisan information from media outlets?
In exploring these questions, we examine an important mediating mechanism. Specifically, we explore how negative emotions in individuals in response to information (e.g., misinformation or accurate information) contribute to attitude polarisation. We also explore an important boundary condition, namely, the role of political ideology (i.e., a set of ideas, beliefs, values and opinions on how society should work that are often expressed in support of specific policies).
Grounded in empirical work and multiple theoretical lenses (e.g., System 1 and System 2 theory and emotional contagion) that have been extensively used in the IS literature, we put forth a series of hypotheses. Firstly, we hypothesise that falsehood (i.e., the extent to which the information is misinformation or accurate information) and partisanship (i.e., the extent to which the information is extreme partisan or non-extreme partisan information) are positively associated with sharing and attitude polarisation. We also argue that the arousal of negative emotions is an important mechanism underlying the positive association between falsehood, partisanship and attitude polarisation. Moreover, we hypothesise that the positive association between falsehood, partisanship and negative emotions is stronger for people who have extreme political ideologies than for people who have non-extreme political ideologies.
Two field studies conducted on Twitter confirm that misinformation from political elites and extreme partisan information from media outlets attract higher rates of sharing than accurate and non-extreme partisan information. Two online experiments in which we exposed participants to a series of tweets confirm that people respond more negatively (e.g., with anger) to misinformation and extreme partisan information than to accurate and non-extreme partisan information, which, in turn, contributes to higher levels of attitude polarisation. The four studies also show that, in some cases, political ideology acts as a boundary condition. For example, when individuals are extremely conservative or liberal, they tend to respond with more negative emotions to extreme partisan information.
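The core measurement behind the field studies, scoring the negativity of replies to tweets, can be illustrated with a deliberately simplified, lexicon-based sketch. The toy word list and example replies below are our own illustrative assumptions, not the instrument or data used in the studies:

```python
# Illustrative only: a toy negative-emotion lexicon, not the dictionary used in the paper.
NEGATIVE_WORDS = {"angry", "outrageous", "disgusting", "liar", "corrupt", "shameful", "hate"}

def negative_emotion_score(reply: str) -> float:
    """Share of words in a reply that appear in the negative-emotion lexicon."""
    words = [w.strip(".,!?:;\"'").lower() for w in reply.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

# Hypothetical replies, standing in for the two tweet conditions.
replies_to_misinformation = [
    "This is outrageous, what a corrupt liar!",
    "Shameful. I hate this.",
]
replies_to_accurate_info = [
    "Thanks for the update.",
    "Good summary of the vote.",
]

def avg(scores):
    return sum(scores) / len(scores)

mis_score = avg([negative_emotion_score(r) for r in replies_to_misinformation])
acc_score = avg([negative_emotion_score(r) for r in replies_to_accurate_info])
print(mis_score > acc_score)  # the field-study pattern: more negativity under misinformation
```

In practice, such analyses typically rely on validated sentiment dictionaries or classifiers rather than a hand-built word list; the sketch only conveys the per-reply scoring and condition-level comparison.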
Our work makes several practical and theoretical contributions. Among other practical contributions, we highlight a need for sociocultural change in handling information on social media, including more education on information systems' potentially adverse effects on individuals' attitudes and behaviours. We also make several contributions to the IS literature on information sharing, particularly in the context of falsehood and partisanship (e.g., King & Wang, 2021; Laato et al., 2020; Miller et al., 2023). Drawing on System 1 and System 2 theory and emotional contagion, we demonstrate how misinformation and extreme partisan information lead to negative emotions in individuals, which, in turn, are associated with increased levels of political polarisation. In so doing, we respond to calls for more research into the nature and mechanisms driving social media-induced polarisation (e.g., Qureshi et al., 2020, 2022a).

In proposing and finding statistically significant support for a more inclusive and comprehensive theoretical model, we go some way towards providing a more complete picture of the complex mechanism by which misinformation and extreme partisan information can contribute to an often-vicious cycle of political polarisation (and even social unrest). Consequently, we also extend the nascent literature on information systems' dark side (e.g., Schuetz et al., 2021; Turel & Osatuyi, 2021; Turel & Qahri-Saremi, 2016).
Finally, we show that political ideology is an important boundary condition when it comes to falsehood and partisanship and their effect on negative emotions. Specifically, while negative emotions in response to extreme partisan information are higher among social media users at the ideological extreme than among those at the ideological centre, there is no difference in the two groups' level of negative emotions in response to misinformation. This finding points towards an interesting dynamic that extends existing IS research investigating partisan information and misinformation (e.g., Barfar, 2019; Horner et al., 2021; King & Wang, 2021; Laato et al., 2020): we show that misinformation has the potential to arouse negative emotions and further polarise not only people who are already ideologically extreme but, to the same extent, people at the ideological centre. This and other findings can serve as an impetus for future studies (see research agenda in Appendix A; Table A1).

| The sharing of misinformation and extreme partisan information on social media
Information sharing is an age-old human phenomenon, but the internet and social media platforms have taken this activity to new levels. Information distributed on social media can reach millions of people within seconds.
Given the importance of information sharing in an online context, it is not surprising that it is also a growing area of IS research (e.g., Han et al., 2020; King & Wang, 2021; Stieglitz & Dang-Xuan, 2013). Research has found that various factors affect information sharing, including information content, information source and user characteristics. For example, when it comes to information content, emotional content (i.e., content that contains words that are suggestive of positive or negative emotions or that generates an emotional response) is more likely to go viral than neutral content (Nikolinakou & King, 2018; Stieglitz & Dang-Xuan, 2013; Tellis et al., 2019). Content that expresses emotions that engender high physiological arousal, such as anger, is particularly successful in increasing information diffusion (e.g., Berger & Milkman, 2012; Heimbach & Hinz, 2016). Concerning the information source, research suggests that information perceived to come from a credible source is more likely to be shared than information from a non-credible source (Ha & Ahn, 2011). Similarly, if the information source forms strong parasocial relationships with its audience, the information is more likely to be shared (Hwang & Zhang, 2018).
The characteristics of those who seek and consume content also play a key role in the virality of information (Kapoor et al., 2018). User characteristics include whether people consume social media content for information or entertainment purposes and their views, opinions and values. For example, research suggests that people are more likely to share information that confirms or supports their values and beliefs (e.g., Weismueller et al., 2022).
While many studies investigate how information content, information source and user characteristics affect the virality of information on social media, few studies have explored how these factors affect the sharing of misinformation and extreme partisan information (Colliander, 2019; Kim & Dennis, 2019; Shin et al., 2018; Turel & Osatuyi, 2021). For example, in the context of extreme partisan information, Hasell and Weeks (2016) found that partisan media may drive online information sharing by using polarising and negative content that generates anger in their audience. In a misinformation context, Vosoughi et al. (2018) investigated the information content of news stories on Twitter. They found that falsehood diffused much farther, faster, deeper and more broadly than the truth in all categories of information (politics, business, sports and so forth). More research is warranted given the noted negative consequences of sharing misinformation and extreme partisan information.
TABLE 1 Selected overview of literature.

Hasell and Weeks (2016). Data: panel survey. Purpose: to explore how partisan media use elicits anger and anxiety in a manner that may influence the degree to which people post and share political news and information in social media. Key finding: partisan media may drive online information sharing by generating anger in its audience.

Research Stream II: Emotional dynamics surrounding political (mis)information on social media

Horner et al. (2021). Data: survey. Purpose: to explore discrete emotional reactions to fake news headlines and how these emotions contribute to the perpetuation of fake news through sharing behaviours. Key finding: participants' emotional reactivity is associated with response behaviour intentions, such that participants who report high levels of emotion are more likely to take actions that would spread or suppress the fake news.
| Negative emotional responses to information on social media
Information's impact on users depends on the extent to which it changes attitudes, including people's emotional state and their views, opinions and values. The extent to which information changes attitudes, in turn, depends on more factors still, including whether the information is presented in an emotional manner, whether it is about a polarising issue or, specifically in a political context, whether the information is about attacking the political opposition. For example, Ferrara and Yang (2015) found that the higher the emotionality of the content people were exposed to, the higher the emotionality of their social media posts. Thus, people exposed to negative content were more likely to post negative content too. Similarly, Kim and Kim (2019) found that people exposed to uncivil comments felt more negative (e.g., angrier) than those exposed to civil comments. Both studies show that emotional, uncivil or polarising information can affect people's emotional state and, hence, the emotionality of their responses to posts and the posts they create.
While misinformation and extreme partisan information often share similar (emotional) characteristics, there is little research on how these types of information affect emotional states. The relative dearth of work on this issue is surprising since people's emotional state can contribute to political polarisation on social media, which, in turn, can lead to negative consequences, such as politically motivated violence and political gridlock (Del Vicario et al., 2016; Jilani & Smith, 2019; Kim & Kim, 2019). In the following subsection, we discuss the different forms of political polarisation in more detail.

| The polarisation of user attitudes on social media
Political polarisation is often referred to as the growing gap between liberals and conservatives in terms of their attitudes towards a political party, political candidate or policy (Kubin & von Sikorski, 2021). However, many sub-categories are discussed in the literature. Social polarisation is individuals' preference to maintain social relationships with like-minded others only, which leads to polarised networks in which there are few heterogeneous social interactions on, for example, Facebook (Bakshy et al., 2015; Kitchens et al., 2020) and Twitter (Boutyline & Willer, 2017; Conover et al., 2011). Attitude polarisation is ideological segregation based on individuals' beliefs and attitudes towards an issue (Buder et al., 2021; Kim & Kim, 2019; Mitchell et al., 2021). For example, many individuals developed increasingly extreme attitudes towards the government's handling of the COVID-19 pandemic based on the information they consumed on social media. Other sub-categories of political polarisation include group polarisation (Iyengar & Westwood, 2015), affective polarisation (Garrett et al., 2014; Wakefield & Wakefield, 2023) and perceived polarisation (Yang et al., 2016).
Our research focuses on attitude polarisation because it is an important sub-category of political polarisation in a social media context. Indeed, information on social media has repeatedly been found to change user attitudes across contexts, including politics (Kapoor et al., 2018). For example, research found that information distributed on social media can affect an individual's voting choice in an election (McGregor, 2017). Thus, it is not surprising that information on social media can also affect attitude polarisation. However, few studies investigating the increasing spread of misinformation and extreme partisan information on social media have considered attitude polarisation (Buder et al., 2021; Kim & Kim, 2019). Among these few studies, Kim and Kim (2019) found that incivility in user comments on Facebook can increase negative emotions in users, which, in turn, leads to more attitude polarisation.
In summary, while there are many IS studies on information sharing on social media platforms, on emotional dynamics on these platforms and on political polarisation, research in the context of misinformation and extreme partisan information remains fragmented. More importantly, little has been done to combine these research streams to provide a better understanding of how exposure to information and the resulting emotions might contribute to political polarisation in IS. This is despite the many negative consequences of the spread of misinformation and extreme partisan information, of negative emotions in individuals and of political polarisation. Hence, in the next section, we describe how we build on existing theory to address this research gap.
We draw on System 1 and System 2 theory to explain how exposure to misinformation and extreme partisan information is associated with higher rates of sharing. According to System 1 and System 2 theory, there are two distinct modes of information processing. Automatic, unconscious and effortless thinking is attributed to System 1, while deliberate, conscious and effortful thinking is attributed to System 2 (Evans & Stanovich, 2013; Kahneman, 2011). Thus, depending on the cognitive system that is engaged, people usually react differently to the same information (Bago et al., 2020). That said, unless people are motivated to expend cognitive effort and invoke System 2, they tend to accept whatever intuitive conclusions System 1 offers (Kahneman, 2011).
In a social media context, System 1 and System 2 matter because, when people browse through social media, they constantly interpret information and reach conclusions based on that information. Depending on whether individuals engage System 1 or System 2, their approach to processing information varies, resulting in different reactions even when they are presented with the same information (Bago et al., 2020). Most users tend to be in a hedonic mindset, which makes less mindful social media consumption and reliance on System 1 more likely (Moravec et al., 2020; Thatcher et al., 2018). Indeed, few people tend to invoke System 2 cognition to critically consider the information they are exposed to on social media. These differences in information processing can, in turn, influence people's perceptions of and reactions to information. For example, a lack of effortful thinking helps explain why many people like or share social media posts without reading them (Gabielkov et al., 2016).
We also draw on the emotional contagion phenomenon to explain why exposure to misinformation and extreme partisan information is associated with higher levels of negative emotions than exposure to accurate information and non-extreme partisan information. Emotional contagion posits that, in social interactions, emotions expressed by one individual can spread to others and evoke similar emotions in them (Hatfield et al., 1993). Emotional contagion has been applied to the IS domain (e.g., Kramer et al., 2014).
Our conceptual model is presented in Figure 1. We propose that falsehood (i.e., whether the information is misinformation or accurate information) and partisanship (i.e., whether the information is extreme partisan information or non-extreme partisan information) are positively associated with sharing, negative emotions and attitude polarisation. We also consider an important boundary condition: extreme versus moderate political ideology.

| The role of falsehood and partisanship in information sharing
In the main, we expect misinformation to be associated with higher rates of sharing than accurate information. That is because misinformation tends to be more negative, novel, sensational or outrageous than accurate information (Osatuyi & Hughes, 2018; Tucker et al., 2018; Vosoughi et al., 2018). While the truth in the form of accurate information can have similar characteristics, falsehood in the form of misinformation has a greater potential to be, for example, outrageous or sensationalistic, as its claims are not bound by truthfulness. For example, the claim that 10 million people in the United States died from COVID-19 due to government failure would likely be seen as more sensational and outrageous than a claim that 100 people died from COVID-19, even if the latter claim is closer to the truth. Hence, people may be more likely to share the more sensationalistic claim, even if it is not accurate. Why, though, would people tend to share the novel, negative, sensational or outrageous?
Part of this question's answer lies in people's predisposition to use System 1 when processing information online (Kim & Dennis, 2019; Moravec et al., 2020; Pennycook & Rand, 2019). Invoking System 2 and, thus, engaging in behaviours that require more mental effort and time, such as verifying information, is simply too inconvenient. Therefore, people often base their sharing decision on intuitive judgements made by System 1, which will likely favour more sensational or outrageous information. Thus:

Hypothesis 1. Exposure to misinformation is associated with higher rates of sharing than exposure to accurate information.
We also expect extreme partisan information to be associated with higher rates of sharing than non-extreme partisan information. Extreme partisan information tends to be more negative and sensational than non-extreme partisan information (Kilgo et al., 2018; Ng & Zhao, 2020; Sparks & Hmielowski, 2022). Much of this sensationalism and negativity is probably driven by extreme partisan information being framed in a way that furthers a specific political agenda from either a conservative or a liberal side, creating an 'us' versus 'them' group dynamic (Guess et al., 2021).
Moreover, in expressing either liberal or conservative viewpoints, the information feeds people's biased viewpoints (Nickerson, 1998). While non-extreme partisan information might also be negatively framed and sensational, it is usually reported in a more neutral manner that does not create an us-versus-them dynamic to the same extent as extreme partisan information. Further, given its more moderate news reporting and more moderate expression of opinions, it is less likely to confirm extreme biases. For example, while a social media post from a non-extreme partisan media outlet might report objectively how the US Congress voted on a COVID-19 stimulus bill, a social media post from an extreme partisan media outlet might point out how outrageous it is that the political opposition voted for or against the bill.
To overcome the automatic and intuitive responses generated by System 1, it is necessary to engage in effortful System 2 processing, which involves devoting mental effort to questioning extreme partisan information in order to embrace and understand diverse political viewpoints. However, as noted, individuals tend to accept System 1's response to information, especially if it confirms their existing beliefs and appeals to their social identity (Turel & Qahri-Saremi, 2016). Although critically engaging with information and approaching it in an unbiased manner would likely make extreme partisan content less attractive and less likely to be shared, it is unlikely that individuals undertake these activities. Thus:

Hypothesis 2. Exposure to extreme partisan information is associated with higher rates of sharing than exposure to non-extreme partisan information.

| The role of falsehood and partisanship in affecting negative emotions
As noted, misinformation can be emotionally charged, sensationalistic and outrageous (Osatuyi & Hughes, 2018; Vosoughi et al., 2018); after all, it is not bound to any factual or evidence-based standards. Consequently, it is probable that this information arouses emotions. For example, many political elites claimed that COVID-19 is a hoax to arouse negative feelings towards government measures, such as lockdowns. When people are exposed to such misinformation, they likely respond emotionally.
Emotional contagion can help explain the mechanism at play here (Hatfield et al., 1993). Emotional contagion is a multi-step process through which emotions spread from one person to another. In short, people perceive an emotional stimulus, respond with an emotion similar to the one conveyed and, likewise, pass this emotion on to others. Here, we focus on people's perception of an emotional stimulus and the emotions they experience in response to that stimulus. For example, misinformation expressing high levels of negative emotion serves as an emotional stimulus to which people are likely to respond with similar negative emotions (Horner et al., 2021).
While emotional contagion can occur in response to both positive and negative social media posts, as noted, political information tends to be negative. Specifically, misinformation tends to be more negative than accurate information. It would then not be surprising that people experience (and potentially pass on) higher levels of negative emotions in response to misinformation than in response to accurate information. Even if the information itself is not negative but simply attempts to trigger negative emotions, it can lead people to react negatively. Responses are usually made quickly; that is, cognitive effort is not expended to keep 'knee-jerk' emotional reactions in check. While engaging in System 2 thinking would lead to a more judicious consumption of information and reduce, for example, feelings of anger, it would also require more cognitive effort. Hence, given the nature of misinformation and people's tendency to avoid effortful thinking when browsing social media, we argue that:

Hypothesis 3. Exposure to misinformation is associated with higher levels of negative emotions in individuals than exposure to accurate information.
A relatively high proportion of partisan content from media outlets is sensationalistic and negative, in contrast to non-partisan content (Kilgo et al., 2018; Ng & Zhao, 2020). Moreover, as discussed, extreme partisan information is often framed to further a specific political agenda. In contrast to non-extreme partisan information, this creates an 'us' versus 'them' dynamic, which can fuel negative feelings. Indeed, research found that much anger in politics is due to an 'us' versus 'them' dynamic and affective polarisation (Druckman et al., 2021; Iyengar & Westwood, 2015).
Hence, extreme partisan information can not only attract sharing but also lead to more negative emotions.
Negative responses to extreme partisan information expressing negativity can again be explained through the lens of emotional contagion. Extreme partisan information that, for example, expresses negativity towards the political opposition (or perhaps, in some cases, even towards more moderate parts of its own political aisle) serves as an emotional stimulus that can lead to similar feelings of negativity in the people who are exposed to it. For example, extreme partisan information stating 'Outrageous: The incompetent Biden Administration is telling Congress it needs an additional $30 billion to press ahead with the fight against COVID-19. This administration is a joke' is negatively framed and expresses negative emotions, which is likely to lead to similar feelings (and potentially expressions) of negativity in its readers.
Even if the information itself is not negative, its often-biased nature can lead to negative emotions. We again refer to System 1 and System 2 theory, which states that individuals do not usually engage deeply with content on social media (i.e., they do not activate the more effortful System 2 mode). Yet, doing so could help them embrace different political viewpoints and respond to extreme partisan information more judiciously. Instead, individuals probably follow the intuitive System 1 thinking mode, which can feed their biases and make them more likely to become emotionally involved. While people are similarly likely to follow the intuitive System 1 thinking mode in response to extreme and non-extreme partisan information, the latter is usually less biased, making negative feelings and expressions less common. Thus, we argue that:

Hypothesis 4. Exposure to extreme partisan information is associated with higher levels of negative emotions in individuals than exposure to non-extreme partisan information.

| The moderating role of political ideology
The degree to which negative emotions are associated with falsehood likely differs based on whether the individuals exposed to the information are located at the ideological centre or at the extreme. While we expect misinformation to be associated with higher levels of negative emotions in individuals than accurate information, people at the ideological extreme likely feel higher levels of negative emotions than people at the centre. For example, suppose there is misinformation about school shootings stating that more than 120 school shootings occurred in the United States in 2022. In that case, people at the extreme will likely react more strongly to the misinformation than people at the centre. That is because people at the ideological extreme tend to feel more strongly about political issues (e.g., gun laws) and to identify more strongly with political groups that support or oppose those issues than people at the centre (van Prooijen et al., 2015; van Prooijen & Krouwel, 2019). Moreover, people with extreme ideologies tend to consume content in information environments where strong opinions and negativity are the norm (Buder et al., 2021; Kitchens et al., 2020). In contrast, people at the ideological centre tend to be less emotionally involved when consuming social media content. Hence, while people at the ideological centre might experience heightened negative emotions in response to misinformation, these feelings are likely not as strong as those of people with extreme political viewpoints. Moreover, even with heightened negative emotions, people with moderate viewpoints are less likely than those with extreme political viewpoints to engage in intense social media discussions in which they express their negative emotions in response to misinformation. Based on these ideological differences in handling information on social media, we argue that:

Hypothesis 5. The positive association between falsehood and negative emotions is stronger when individuals are located at the ideological extreme as opposed to the ideological centre.
Similarly, we expect political ideology to impact the relationship between partisanship and negative emotions.
People at the ideological extreme tend to be less open-minded regarding political viewpoints and less diverse in the media sources they consume (Mitchell et al., 2021; Pennycook et al., 2020). Indeed, people at the ideological extreme are usually found in environments that support their extreme political viewpoints and fuel their biases (Boutyline & Willer, 2017; Lammers et al., 2017). The partisan nature (e.g., attacking the political opposition) of extreme partisan information appeals to people at the ideological extremes and their social identity (Guess et al., 2021). Negative emotions are therefore likely to be observed particularly in social media users at the ideological extremes, who already hold negative views of the opposing party and its followers, rather than in those at the centre (Webster & Abramowitz, 2017). Indeed, research found that people who were highly involved in so-called echo chambers (environments in which people consume and distribute content, often extreme, that is congruent with their political ideology) showed a faster shift towards negativity than those who were less involved (Del Vicario et al., 2016). In summary, individuals who already hold extreme viewpoints can be expected to be more emotionally involved when exposed to extreme partisan information than individuals at the centre. Thus:

Hypothesis 6. The positive association between partisanship and negative emotions is stronger when individuals are located at the ideological extreme as opposed to the ideological centre.

| The mediating role of negative emotions
We hypothesised that falsehood and partisanship affect individuals by triggering negative emotions.
Such a relationship can have far-reaching consequences because studies in many contexts have shown that emotions can influence attitudes and behaviours (e.g., Bagozzi & Moore, 1994; Clifford, 2019; Venkatesh, 2000). For example, in a political context, research has shown that feelings of anger can influence attitudes towards moral issues, such as abortion rights (Clifford, 2019). Here, we argue that negative emotions, which are particularly strong in response to misinformation and extreme partisan information, will polarise people's attitudes. Negative emotions are thus an underlying mechanism that drives the relationship between falsehood, partisanship and attitude polarisation. For example, suppose people are exposed to misinformation about COVID-19 government measures and experience strong anger in response. Those negative feelings can lead to more supportive or more opposed opinions towards the measures. This notion is consistent with research finding that users felt more strongly about gun laws after being exposed to negative comments about gun laws on Facebook (Kim & Kim, 2019). In short, the arousal of negative emotions can make individuals more polarised in their opinions about an issue, whether they are for or against it. Thus:

Hypothesis 7. Negative emotions in individuals mediate the relationship between falsehood and attitude polarisation.
Hypothesis 8. Negative emotions in individuals mediate the relationship between partisanship and attitude polarisation.

| RESEARCH METHOD
To examine the proposed model empirically, we conducted online experiments and collected data from Twitter's Application Programming Interface (API). Twitter was launched in 2006 and, with 229 million daily active users at the time of writing in 2022, is one of the most successful social media platforms of all time (Dixons, 2022). The Twitter API can be used to retrieve and analyse Twitter data programmatically.
For several reasons, Twitter is an ideal social media platform for studying extreme partisan information, political misinformation and political polarisation. First, Twitter is a fast-paced environment in which users usually share relatively short pieces of content in an unedited way. Tweets may thus be more impulsive, and people may face a lower barrier to sharing content they would not share on other social media platforms, which makes polarised debates more likely (Rid, 2017). Second, Twitter is a favoured social media platform for journalists and political elites. In fact, 86% of political elites tweet frequently, which makes it the dominant social media platform for this group (Devlin et al., 2020). Politically interested users on Twitter have also been found to have higher exposure to political information than politically interested users on other social media platforms, including Facebook (Gottfried, 2014). Third, Twitter is a preferred platform for researchers who study user ideologies, because the political elites and media outlets a user follows on Twitter convey information about the user's political preferences, as confirmed by several studies (e.g., Barberá et al., 2015; Golbeck & Hansen, 2014).
Based on our prior definitions, misinformation and extreme partisan information cannot be assumed to be the same, and this is reflected in our research method (see Figure 2). In Study 1, we collected tweets that contained fact-checked misinformation and accurate information. In Study 3, we collected tweets that contained extreme partisan and non-extreme partisan information. Both studies were followed up with online experiments (Study 2 and Study 4) to obtain more granular data, gain insights not observable in a real-world setting and, thus, complement Studies 1 and 3, respectively. In Study 2, participants were exposed to misinformation and accurate information; in Study 4, participants were exposed to extreme and non-extreme partisan information. For Study 1, we used PolitiFact, an independent fact-checking website, to create a list of fact-checked tweets whose information had received either a false or a true rating. This process produced a total of 240 tweets. From this raw dataset, we excluded all tweets from Donald Trump due to the large number of his tweets and the uniqueness of his position as the former president of the United States of America. Using the Twitter API, we then collected relevant tweet information (e.g., number of shares and number of replies) and response information (e.g., the text of the response) for the first 100 replies to each tweet. We excluded tweets with fewer than 10 replies, which led to a final sample of 142 tweets on different political issues (i.e., immigration, gun control, COVID-19) and a total of 8318 replies. Using the text-analysis program Linguistic Inquiry and Word Count (LIWC) (Tausczik & Pennebaker, 2010), we identified linguistic characteristics, such as the word count of the reply and words suggestive of negative emotions in the replier. Moreover, using a verified method to compute ideological scores of social media users (Golbeck & Hansen, 2014), which is discussed in more detail in a later subsection, we analysed whether the replier was positioned at the ideological extreme (i.e., extreme conservative or extreme liberal) or at the ideological centre.

| Measures and descriptive statistics
As noted, for our falsehood measure we used the independent fact-checking website PolitiFact, which rates the accuracy of information circulating on social media around political issues such as COVID-19, immigration, elections and healthcare. Independent fact-checking websites like PolitiFact work with major tech companies such as Twitter and Facebook to fight digital misinformation in alignment with a code of principles, such as non-partisanship and transparency of sources (Poynter, 2021; Schuetz et al., 2021). We considered statements to be misinformation if they were rated false (i.e., the statement was not accurate) or accurate if they were rated true (i.e., the statement was accurate and nothing significant was missing).
To measure users' political ideologies, we used DW-Nominate scores computed from roll-call votes in the 116th US Congress (Carroll et al., 2015; Lewis et al., 2022). The primary dimension of these scores closely corresponds to the liberal-conservative dimension in US politics (Lewis et al., 2022), ranging from roughly −1 (liberal) to 1 (conservative). We divided this dimension by its standard deviation. In addition to the DW-Nominate scores of senators and representatives of the 116th US Congress, we used the scores of politicians who had run for president within the previous 20 years and were still politically active on Twitter, which enlarged our list of proxies for social media users' ideologies. To further improve accuracy, we also took media outlets into account. Media outlets play a crucial role in the online political environment because most political news is disseminated through them, and most US media outlets are known to report in a partisan manner (Druckman et al., 2018; Levendusky, 2013). We therefore included the ideological scores of media outlets (a more detailed operationalisation of the ideology of media outlets is provided in Study 3). In sum, we used the ideological scores of more than 550 senators, representatives, previous presidential candidates and media outlets as proxies for the orientations of their followers (Golbeck & Hansen, 2014), in line with prior research exploring political ideologies on Twitter (e.g., Barberá et al., 2015; Boutyline & Willer, 2017; Golbeck & Hansen, 2014). Users in our dataset followed, on average, 40 of these proxy accounts, so each user's ideology was typically computed from the ideological scores of a mix of 40 senators, representatives, previous presidential candidates and media outlets. After computing the ideological score for each social media user in our misinformation dataset, we divided the users into extreme and non-extreme users. More specifically, we compared users above the 90th percentile (extreme: absolute ideological score above 0.526) with users below the 90th percentile (non-extreme: absolute ideological score below 0.526). A robustness check testing different percentile cut-offs (e.g., the 95th percentile) produced consistent results.
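To make the proxy-averaging concrete, here is a minimal Python sketch of the two steps described above: averaging the scores of the accounts a user follows and then splitting users at the 90th percentile of absolute score. All account names and scores are invented for illustration, and the DW-Nominate scores are assumed to have been divided by their standard deviation already.

```python
import numpy as np

# Hypothetical proxy accounts: first-dimension DW-Nominate scores of elites
# and media outlets (assumed already scaled by their standard deviation).
proxy_scores = {"elite_a": -0.8, "elite_b": -0.3, "outlet_c": 0.1, "elite_d": 0.9}

def user_ideology(followed):
    """A user's ideology is the mean score of the proxy accounts they follow."""
    return float(np.mean([proxy_scores[f] for f in followed]))

users = {
    "u1": ["elite_a", "elite_b"],   # leans liberal
    "u2": ["outlet_c", "elite_d"],  # leans conservative
    "u3": ["elite_b", "outlet_c"],  # near the centre
}
scores = {u: user_ideology(f) for u, f in users.items()}

# Split at the 90th percentile of the absolute score (the paper's empirical
# cut-off for its dataset was 0.526; here it is computed from the toy data).
threshold = np.percentile([abs(s) for s in scores.values()], 90)
extreme = {u: abs(s) > threshold for u, s in scores.items()}
```

The same averaging logic extends to any mix of senators, representatives, candidates and outlets a user follows.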
To analyse the linguistic style of each tweet, we used the LIWC text-analytics software (Tausczik & Pennebaker, 2010), which uses dictionaries to calculate the degree to which each piece of text contains words from specific categories, such as first-person pronouns or affective processes. Based on more than a decade of research, Pennebaker et al. (2015) developed dictionaries that capture emotions expressed in text and reflect the emotional state of the text's author at the time of writing. Their research included multiple tests, such as asking people to complete a questionnaire to assess their general mood and then write an essay to observe how their words related to their emotional state. Hence, more than just capturing sentiment in the form of negative words, LIWC captures the language people use based on their emotional state. LIWC measures have been rigorously tested for reliability and external validity using different textual data (Pennebaker et al., 2015). In the management and marketing literature, LIWC is a popular tool for extracting psychological and linguistic constructs from texts (e.g., Barasch & Berger, 2014; Berger & Milkman, 2012; Ludwig et al., 2013). Here, we used LIWC's 'negemo' dictionary, which contains words people use in their writing when experiencing negative emotions such as anger or anxiety.
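LIWC itself is proprietary, but the dictionary-based idea it relies on can be sketched in a few lines: count what fraction of a reply's words fall into an emotion category. The word list below is a tiny, made-up stand-in for LIWC's 'negemo' dictionary, which is far larger and also matches word stems.

```python
import re

# Toy stand-in for LIWC's 'negemo' category (the real dictionary is much
# larger and matches stems such as 'worr*').
NEGEMO = {"worried", "hate", "sad", "angry", "afraid"}

def negemo_share(text):
    """Fraction of a reply's words that appear in the negative-emotion list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in NEGEMO for w in words) / len(words)
```

For example, `negemo_share("I hate this and I am worried")` counts 2 negative-emotion words out of 7.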
The outcome variable, sharing, represented the ratio between a tweet's number of retweets (shares) and the original tweeter's total number of followers. The retweet count has previously been used in studies of sharing behaviour on social media (Stieglitz & Dang-Xuan, 2013; Stone & Can, 2020; Vosoughi et al., 2018). Other variables collected included word count, friends count and the negativity of the tweet. Table 2 provides an overview of all variables used in our data analysis in Study 1 and Study 3, including the control variables. The descriptive statistics are summarised in Table 3.

| Hypotheses testing
Because individual users might send multiple tweets and replies, the model had to account for the possibility that tweets and replies originating from the same user share similar characteristics. Therefore, tweets and replies were nested within users, and a hierarchical linear model (HLM) was specified as a random-intercept model that included tweet-level variables (e.g., falsehood), user-level variables (e.g., friends count) and reply-level variables (e.g., negative emotions). Because our proposed research model has falsehood as an independent variable, two dependent variables, one moderator variable and several control variables at different levels, we conducted two separate HLM analyses in Study 1. In the first analysis, we tested the relationship between falsehood and sharing. In the second, we tested the relationship between falsehood and negative emotions in individuals responding to the information, as well as whether political ideology moderated that relationship. The independent variables were standardised, and the HLM models did not suffer from major multicollinearity, as shown by the correlation matrix in Table 3 and the variance inflation factor (VIF) scores (i.e., the maximum VIF equals 3.47) (O'Brien, 2007). The models used maximum likelihood estimation with Laplace approximation, relied on an unstructured covariance matrix and were implemented in SAS 9.4 (Wolfinger, 1993). To reduce data skewness, we took the natural logarithm of the retweet, follower and friend counts (see skewness and kurtosis scores in Appendix B; Table B1; Hair et al., 2010).
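The VIF check mentioned above can be reproduced with ordinary least squares: each predictor is regressed on the remaining predictors, and VIF_j = 1/(1 − R²_j). A small sketch with synthetic data (not the study's variables):

```python
import numpy as np

def vifs(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j of X on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - (y - Z @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.1 * rng.normal(size=200)  # nearly collinear with x1
v = vifs(np.column_stack([x1, x2, x3]))  # v[0], v[2] large; v[1] near 1
```

A common rule of thumb treats VIFs below 10 (and under stricter conventions, below 5) as unproblematic, which is why the paper reports its maximum of 3.47.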
In our first analysis, we used an HLM to regress the dependent variable sharing on falsehood (Hox, 2010). The results show that exposure to misinformation was associated with significantly higher rates of sharing than exposure to accurate information (β = 0.91, p = 0.03), supporting Hypothesis 1. We also found that the length of the information (β = −0.48, p = 0.02) and follower count (β = −0.99, p < 0.001) were associated with lower rates of sharing. The negativity of the tweet and the author's friends count were not statistically significant control variables.
TABLE 2 Description of variables (Study 1 and Study 3).

| Variable | Level | Description |
| --- | --- | --- |
| *Independent variables* | | |
| Falsehood (a) | Tweet | Whether a tweet contains inaccurate (falsehood = 1) or accurate (falsehood = 0) information |
| Partisanship (b) | Tweet | Whether a tweet is posted by a media outlet that is extreme partisan (partisanship = 1) or non-extreme partisan (partisanship = 0) |
| *Moderating variable* | | |
| Political ideology | Reply | Whether the replier is located at the ideological extreme (ideology = 1) or at the ideological centre (ideology = 0) |
| *Dependent variables* | | |
| Negative emotions | Reply | Whether the reply contains words that reflect a negative emotional state in the author (e.g., 'worried', 'hate' or 'sad') |
| Sharing | Tweet | The number of retweets the tweet received divided by the number of followers the tweeter has |
| *Control variables* | | |
| Word count | Tweet | The number of words in a tweet |
| Word count | Reply | The number of words in a reply to a tweet |
| Friends count | Author | The number of users the tweet author follows |
| Follower count | Author | The number of followers the tweet author has |
| Negative emotions | Tweet | Whether the tweet contains words that reflect a negative emotional state in the author (e.g., 'worried', 'hate' or 'sad') |

In our second analysis, we used an HLM to regress the dependent variable negative emotions on falsehood, with political ideology as a moderating variable. The results show that exposure to misinformation was associated with higher levels of negative emotions in individuals who responded to the information than exposure to accurate information (β = 0.42, p < 0.001), supporting Hypothesis 3. Our findings also suggest that there was no significant moderating effect of political ideology on the relationship between falsehood and the level of negative emotions in individuals who responded to the information (β = 0.06, p = 0.26). Hence, there was no support for Hypothesis 5. Lastly, we found that negative emotions in the tweet (β = 0.08, p < 0.001), the length of user replies (β = 0.14, p < 0.001) and the author's friends count (β = 0.08, p = 0.02) were significant control variables.
The length of the tweet was not a statistically significant control variable.

Participants were recruited through Amazon's Mechanical Turk (MTurk) crowd-sourcing platform and received financial compensation for completing the study. Research has indicated the advantages of using MTurk data, including its effectiveness in obtaining valid and representative samples (Steelman et al., 2014). We implemented several attention-check questions and data-quality functionalities. Respondents had to be at least 18 years old, residents of the United States and regular SNS users (i.e., someone who engages with social media posts weekly). This process led to a total of 120 usable responses. The final sample (M age = 39.4) included 72 males, 47 females and one participant who did not respond to the gender question.

| Procedure
In line with other research on misinformation (e.g., Pennycook et al., 2020), we exposed participants to a series of real-world stimuli. Specifically, participants were exposed, one at a time, to six tweets that contained misinformation (condition 1) and six tweets that contained accurate information (condition 2). The stimuli were tweets from political elites identified by PolitiFact as either false (falsehood = 1) or accurate (falsehood = 0), in line with other studies that have examined the impact of misinformation and accurate information (e.g., Pennycook & Rand, 2019). Each tweet contained a profile photo, name, username and message (see Appendix C; Table C1). We used stock photos and made-up names to avoid participant bias towards the message source. The stimuli focused on both misinformation and accurate information about COVID-19. To accommodate different political ideologies, the stimuli contained messages identified as consistent with left-leaning, moderate and right-leaning views in a pre-test survey we conducted via MTurk with 60 US-based social media users across the ideological spectrum.
Manipulation checks were conducted to examine whether participants perceived the 12 stimuli as misinformation. After completing the main task of the survey, participants were again exposed to the stimuli and asked the following: 'Some information on social media is false information (regardless of whether there is intent to mislead or not) and contains content that is emotionally charged and/or sensational. Often, it also aims to create outrage and/or negativity in social media users. To what extent do you think that the tweet fits the above description?' (1 = Not at all, 5 = A great deal). We then used multiple paired t-tests to compare the accurate information stimulus that scored highest in response to the manipulation check question with each of the scores of the misinformation stimuli. We found that all misinformation stimuli were perceived as misinformation significantly more than the highest-scoring accurate information stimulus (see Appendix C; Table C3).
Participants were asked about their emotions in relation to each tweet. After each set of tweets (i.e., the six tweets containing misinformation or the six containing accurate information), participants were asked about their attitudes towards the topic.
The order of the two sets was randomised evenly, with some participants exposed to the six misinformation stimuli first and others exposed to the six accurate information stimuli first. This ensured that our results were not biased by the order in which participants received the information.

| Measures
Study 2 explored how falsehood affects negative emotions in individuals and attitude polarisation, while controlling for political ideology. We measured each respondent's political ideology using a single-item 7-point self-report scale (1 = extremely liberal, 7 = extremely conservative). An index of ideological extremity was created by folding the scores of the scale (i.e., the mid-score represented the low end of the extremity scale, while the two extremes represented the high end) (Kim & Kim, 2019). To measure attitude polarisation about the COVID-19 pandemic, participants indicated on a 7-point scale the extent to which they agreed or disagreed with three items on COVID-19 (α = 0.91, M = 1.28, SD = 0.91; see Appendix C; Table C2) (e.g., 'How strongly do you support or oppose the government's handling of the COVID-19 pandemic?') (Kim & Kim, 2019). The measure was administered before participants were exposed to the stimuli (t1 attitudes) and re-administered after exposure to the first set of stimuli, whether misinformation or accurate information (t2 attitudes). Using the same approach as for political ideology, we then measured the extremity of the t1 and t2 attitudes: we folded the scores of each scale so that the mid-score represented the low end of the extremity scale, while the two extremes represented the high end. We then subtracted the t2 attitudes from the t1 attitudes to measure the difference in attitude polarisation (Johnson et al., 2020). We also measured eight emotions previously identified as discrete and basic human emotions in the psychological literature, for example, in the positive and negative affect schedule (PANAS; Ekman, 1992; Watson et al., 1988). Specifically, we measured negative emotions (anger, sadness, anxiety, disgust, fear; α = 0.96, M = 2.49, SD = 0.85) and positive emotions (happiness, joy, awe; α = 0.89, M = 1.93, SD = 1.16). Using a 5-point scale (1 = not at all, 5 = extremely), we measured the extent to which an individual felt each emotion in relation to each social media post they were exposed to (i.e., 'To what extent do you feel one or more of the following emotions on seeing the social media post?') (see Appendix C; Table C2).
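The folding-and-differencing computation can be sketched as follows, using hypothetical 7-point attitude scores. Note that the paper describes the subtraction as t1 minus t2; the sign convention below (t2 minus t1, so positive values mean the attitude became more extreme after exposure) is illustrative only.

```python
import numpy as np

def fold(score, midpoint=4):
    """Fold a 1-7 scale around its midpoint: 4 -> 0 (moderate), 1 or 7 -> 3 (extreme)."""
    return abs(score - midpoint)

# Hypothetical three-item attitude averages before (t1) and after (t2) exposure.
t1 = np.array([4.0, 5.0, 2.0])
t2 = np.array([4.0, 6.5, 1.0])

# Positive values = the attitude moved towards an extreme after exposure
# (the paper's description, t1 minus t2, flips this sign).
delta = fold(t2) - fold(t1)
```

Here the first participant stays moderate (delta 0), while the other two move towards an extreme regardless of direction, which is exactly what folding captures.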

| Hypotheses testing
Hypothesis 7 specified that negative emotions in individuals mediate the relationship between falsehood and attitude polarisation. To assess this hypothesis, we performed a moderated mediation analysis using Model 7 of Hayes's (2017) PROCESS, with political ideology as a moderating variable in the relationship between falsehood and negative emotions. As shown in Table 4, falsehood had a significant direct effect on attitude polarisation (coefficient: 1.17, SE: 0.12, lower BCCI: 0.93, upper BCCI: 1.40). Moreover, the indirect effect of falsehood on attitude polarisation channelled through negative emotions was significant (coefficient: 0.07, SE: 0.03, lower BCCI: 0.01, upper BCCI: 0.15), supporting Hypothesis 7. However, the mediation effect was not more pronounced when people had an extreme rather than a non-extreme ideology (index: 0.02, SE: 0.09, lower BCCI: −0.15, upper BCCI: 0.20).
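PROCESS reports bias-corrected bootstrap confidence intervals; a plain percentile-bootstrap version of the indirect effect (the product of the X→M and M→Y paths) can be sketched with simulated data as follows. The data-generating effect sizes are arbitrary and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with a mediation structure X -> M -> Y: falsehood (X, binary)
# raises negative emotions (M), which in turn raise attitude polarisation (Y).
n = 500
X = rng.integers(0, 2, n).astype(float)
M = 0.4 * X + rng.normal(size=n)
Y = 0.3 * M + 0.5 * X + rng.normal(size=n)

def ols(y, *cols):
    """OLS coefficients, intercept first."""
    Z = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def indirect(X, M, Y):
    a = ols(M, X)[1]     # path a: X -> M
    b = ols(Y, X, M)[2]  # path b: M -> Y, controlling for X
    return a * b

# Percentile bootstrap of the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

If the interval [lo, hi] excludes zero, the indirect (mediated) effect is deemed significant, which mirrors how the BCCI bounds in Table 4 are read.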

| Procedure
Study 3 followed the Study 1 procedure, but the independent variable of falsehood was replaced with partisanship.
Hence, we collected and analysed extreme partisan and non-extreme partisan information instead of misinformation and accurate information.

| Measures and descriptive statistics
Partisanship indicates whether a tweet was from an extreme partisan media outlet or not. We initially created a list of 50 well-known and popular (in terms of the number of Twitter followers) US media outlets across the ideological spectrum. We then measured each media outlet's ideological extremity based on media bias ratings. Allsides, an American company that assesses the political bias of prominent media outlets, assigns outlets to one of five buckets (i.e., left, lean left, centre, lean right and right) through a multi-partisan scientific analysis (Allsides, 2021).
Although the media bias ratings follow a scientific analysis and are popular in the political space, we also developed and computed our own ideological score for each media outlet. Specifically, we created an algorithm that identifies whether a member of the US Congress follows the respective media outlet on Twitter and, if so, assigns the member's DW-Nominate score to it. Hence, the computation of a media outlet's ideological score is based on the ideological scores of the members of the US Congress that follow the outlet. We then compared the scores obtained from the algorithm with the media bias ratings and excluded media outlets whose computed scores did not match the ratings. Finally, we randomly chose three media outlets for each of the five buckets, leading to 15 media outlets (see Appendix B; Table B2). Based on the ratings and computed scores, we confirmed that the media outlets on the left and right are among the most popular extreme partisan media outlets on Twitter. In contrast, the media outlets in the centre and on the lean left or lean right are popular but significantly less partisan. The final sample consisted of 8017 tweets with 102 681 replies; the descriptive statistics are summarised in Table 5.
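The outlet-scoring algorithm described above can be sketched as follows: each outlet receives the mean DW-Nominate score of the members of Congress who follow it, and outlets whose computed score disagrees in direction with their bias rating are excluded. All accounts, scores and ratings below are invented for illustration.

```python
import numpy as np

# Invented DW-Nominate scores for members of Congress, and which outlets
# those members follow on Twitter.
member_scores = {"rep_a": -0.6, "rep_b": -0.4, "rep_c": 0.5, "rep_d": 0.7}
outlet_followers = {
    "outlet_left":  ["rep_a", "rep_b"],
    "outlet_right": ["rep_c", "rep_d"],
}
bias_rating = {"outlet_left": "left", "outlet_right": "right"}  # rating buckets

def outlet_score(outlet):
    """Mean DW-Nominate score of the members of Congress following the outlet."""
    return float(np.mean([member_scores[m] for m in outlet_followers[outlet]]))

def consistent(outlet):
    """Keep an outlet only if its computed score agrees in direction with its rating."""
    leans_left = outlet_score(outlet) < 0
    return leans_left == (bias_rating[outlet] in ("left", "lean left"))
```

This is the same follower-based proxy logic used for individual users, applied in reverse: the followers' known scores locate the followed account.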

| Hypotheses testing
The hypotheses testing in Study 3 followed the same analysis approach used in Study 1, the only difference being that instead of testing misinformation or accurate information tweets, we tested tweets from extreme partisan or non-extreme partisan media outlets. As in Study 1, we conducted two HLM analyses. The independent variables were standardised, and the HLM models did not suffer from major multicollinearity, as shown by the correlation matrix in Table 5 and the VIF scores (i.e., the maximum VIF equals 2.00; O'Brien, 2007). To reduce data skewness, we took the natural logarithm of the retweet and friend counts and the length of the user reply (see skewness and kurtosis scores in Appendix B; Table B1; Hair et al., 2010).
In our first analysis, we used an HLM to regress the dependent variable sharing on partisanship (Hox, 2010). The results show that exposure to extreme partisan information was associated with higher rates of sharing than exposure to non-extreme partisan information (β = 2.12, p = 0.007), supporting Hypothesis 2. We also found that negative emotions in the tweet (β = −0.32, p < 0.001) were associated with lower rates of sharing, whereas the length of the tweet (β = 0.21, p < 0.001) was associated with higher rates of sharing. The author's friends and follower counts were not statistically significant control variables.
In our second analysis, we used an HLM to regress the dependent variable negative emotions on partisanship, with political ideology as a moderating variable. The results show that exposure to extreme partisan information was associated with higher levels of negative emotions in individuals who responded to the information than exposure to non-extreme partisan information (β = 0.24, p = 0.004), supporting Hypothesis 4. Moreover, our findings suggest a significant moderating effect of political ideology on the relationship between partisanship and negative emotions in individuals (β = 0.55, p < 0.001). More specifically, as shown in Figure 3, users at the ideological extreme had higher levels of negative emotions in response to extreme partisan information than users at the ideological centre, supporting Hypothesis 6. Lastly, we found that negative emotions in the tweet (β = 0.03, p < 0.001), the length of the tweet (β = 0.03, p < 0.001) and the length of the user reply (β = 0.05, p < 0.001) were significant control variables; they were positively associated with negative emotions in individuals who responded to the information.

Study 4 followed the Study 2 procedure, the only difference being that participants were exposed to tweets containing extreme partisan information or non-extreme partisan information instead of misinformation or accurate information.
As in Study 2, we exposed participants to real-world stimuli. Specifically, participants were exposed to six tweets containing extreme partisan information (condition 1) and six tweets containing non-extreme partisan information (condition 2). Each tweet contained a profile photo, name, username and message (see Appendix C; Table C1). We used stock photos and made-up names to avoid participant bias towards the message source. The stimuli contained COVID-19-related information from extreme partisan media outlets (e.g., Breitbart) and non-extreme partisan media outlets (e.g., Reuters). To accommodate different political ideologies, the extreme partisan stimuli contained a mix of messages identified as consistent with either left (liberal) or right (conservative) views in a pre-test survey we conducted via MTurk with 60 US-based social media users positioned across the ideological spectrum.
Manipulation checks were conducted to examine whether participants perceived the 12 stimuli as extreme partisan information or not. After completing the main task of the survey, participants were again exposed to the stimuli and asked, 'Some information on social media contains extreme political viewpoints, is emotionally charged, and/or sensational. Often, it also aims to create outrage and/or negativity in social media users. To what extent do you think that the following tweet fits the above description?' (1 = Not at all, 5 = A great deal). We conducted multiple paired t-tests, comparing the non-extreme partisan information stimulus that scored highest in response to the manipulation check question with the scores of the extreme partisan information stimuli. We found that each extreme partisan information stimulus was perceived as extreme partisan information significantly more than the highest-scoring non-extreme partisan information stimulus (see Appendix C; Table C1).

| Hypotheses testing
Hypothesis 8 specified that negative emotions mediate the relationship between partisanship and attitude polarisation. To assess this hypothesis, we performed a moderated mediation analysis using Model 7 of Hayes's (2017) PROCESS, with political ideology as the moderating variable in the relationship between partisanship and negative emotions. As shown in Table 6, partisanship had a significant direct effect on attitude polarisation (coefficient: 0.63, SE: 0.21, lower BCCI: 0.22, upper BCCI: 1.04). Moreover, the indirect effect of partisanship on attitude polarisation channelled through negative emotions was significant (coefficient: 0.27, SE: 0.10, lower BCCI: 0.05, upper BCCI: 0.46), supporting Hypothesis 8. However, the mediation effect was not more pronounced when people had an extreme rather than a non-extreme ideology (index: −0.07, SE: 0.14, lower BCCI: −0.43, upper BCCI: 0.15).

| GENERAL DISCUSSION
Many studies in IS and related fields have focused on social media and its role in benefiting users and organisations, for example, through the democratisation of information and increased connectivity among users (e.g., Kapoor et al., 2018). However, ever-growing opportunities to connect with one another and to consume, create and react to messages have also revealed new tensions. One such tension, which can harm users' well-being, is the propagation of false information and extreme partisan content with the power to spread negativity and polarisation through networks on social media and beyond (Kitchens et al., 2020; Shin et al., 2018; Shore et al., 2018). With this in mind, we analysed real-world data from social media and data from online experiments to better understand how misinformation and extreme partisan information relate to individuals' sharing behaviours, emotions and attitude polarisation (see Table 7). We drew on System 1 and System 2 theory and emotional contagion to theoretically underpin the dynamics at play.
Our findings (see Figure 4) show that exposure to misinformation and extreme partisan information is associated with higher engagement (in the form of sharing) than exposure to accurate information and non-extreme partisan information. These results confirm prior studies that people are more engaged with misinformation than accurate information and that extreme partisan information generates high engagement levels (King & Wang, 2021; Vosoughi et al., 2018). Further, our results show that exposure to misinformation is more likely to elicit negative emotions than exposure to accurate information, extending prior research on the impact of misinformation on affective responses (e.g., Featherstone & Zhang, 2020). Our findings also suggest that extreme partisan information is more likely to elicit negative emotions than non-extreme partisan information. This finding is consistent with only a handful of studies on emotional responses to extreme partisan information (e.g., Barfar, 2019). Consistent with our theorising, the results provide evidence that, although the sharing of misinformation and extreme partisan information are distinct phenomena, they share similar outcomes in that they can increase people's negative emotions and polarise them. A post-hoc analysis shows that even when anger is used as a more specific mediator, the research model remains robust, which indicates that anger is a main driver of the dynamics at hand (refer to Appendix B for further details).
TABLE 7 Hypotheses testing summary.
The results also reveal that, in some cases, the extent to which an individual's political ideology is extreme can play a role in how people react to information on social media. We found that ideologically extreme individuals tend to show higher levels of negative emotions responding to extreme partisan information than individuals responding to non-extreme partisan information. However, political ideology had no statistically significant moderating effect on the relationship between falsehood and the level of negative emotions in individuals who responded to the information. We also did not find statistically significant support for a moderating effect in our online experiments, which examined the relationship between falsehood and partisanship and actual feelings of negative emotions. Hence, our results around the importance of political ideology as a boundary condition are mixed. This finding is surprising considering existing studies suggest that an (in)congruence of political ideologies in information environments often leads to a spiral of extreme and negativity-fuelled views (e.g., Bail et al., 2018; Del Vicario et al., 2016).
Surprisingly, our data also show that while negative emotions in response to extreme partisan information are higher among social media users at the ideological extreme than those at the ideological centre, there is no difference in the two groups' levels of negative emotions in response to misinformation. There are a couple of potential reasons for this finding. First, the content of misinformation might cater to a broader audience than extreme partisan information, which leads people from across the ideological spectrum to react similarly to misinformation but not to extreme partisan information. Misinformation tends to focus on issues trending in the mainstream to maximise its reach and, thus, might be relevant for people across the ideological spectrum. For example, misinformation from political elites during the COVID-19 pandemic claimed that vaccines for children are ineffective and dangerous (Waiss, 2022). On the other hand, extreme partisan information might cover more niche issues, primarily aiming to engage those at the ideological extreme. For example, extreme partisan information in the context of COVID-19 might discuss specific state-related policy issues about COVID-19 vaccines for children (e.g., Furr, 2021). It follows then that, in the case of misinformation, people from the ideological extreme and ideological centre are more likely to react equally negatively. Moreover, as noted, while partisanship arouses negative emotions through statements that appeal to partisan viewpoints, falsehood arouses negative emotions through false statements. False statements are not necessarily partisan, meaning that reactions to them can be equally negative across individuals at the ideological extreme and those at the ideological centre.
Lastly, the results suggest that negative emotions are associated with attitude polarisation. Thus, people exposed to misinformation about COVID-19 vaccines, for example, experience anger, and their attitudes towards COVID-19 vaccines become more extreme. These results confirm prior studies that show individuals experiencing negative emotions are likely to become more polarised towards an issue (Buder et al., 2021; Kim & Kim, 2019). Overall, our empirical findings indicate that misinformation and extreme partisan information play a role in the cycle of political polarisation in which these types of information are more stimulating than accurate information and non-extreme partisan information and create negative emotions and polarise people. The results also suggest that, in some cases, these dynamics might be a particularly pertinent issue for people already supporting extreme political viewpoints.

| Theoretical implications
Our study makes several key theoretical contributions. First, broadly speaking, our findings add to our understanding of information types and their effect on user behaviours on social media platforms. Decades of IS research have examined many aspects of information types and their characteristics in an online context (e.g., King & Wang, 2021; Stieglitz & Dang-Xuan, 2013; Turel & Osatuyi, 2021). While some of this work has provided key insights into the association between falsehood and user engagement, less attention has been paid to whether extreme partisan information triggers greater engagement than non-extreme partisan information. We help address this issue and show that both falsehood and partisanship play a key role in affecting people's information sharing on social media.
Second, our data suggest a positive relationship between misinformation and extreme partisan information, and negative emotions and attitude polarisation. In exploring these associations, we respond to calls for more research and insights into the nature and mechanisms driving political polarisation, particularly in an IS context (Qureshi et al., 2020, 2022b). Prior research suggests that negative emotions, such as feelings of anger in response to reading an uncivil comment on social media, can lead to extreme attitudes (e.g., Kim & Kim, 2019). Here, we advance these findings and shed light on the role misinformation and extreme partisan information play in leading to negativity, which, in turn, leads to attitude polarisation. We, thus, go some way towards providing a more complete picture of the complex mechanism in which misinformation and extreme partisan information can contribute to an often-vicious cycle of political polarisation (and potentially even social unrest). In so doing, we also extend the nascent literature on social media's dark side, including misinformation sharing and political polarisation (e.g., Sharma & Vasuja, 2022; Turel & Qahri-Saremi, 2016; Wang et al., 2023).
Third, we explore how specific factors may moderate the association between misinformation and extreme partisan information, and negative emotions in individuals. As noted, the results of our social media studies suggest that while negative emotions in response to extreme partisan information are higher among social media users at the ideological extreme than those at the ideological centre, there is no discernible difference between the two groups in their level of negative emotions in response to misinformation. Our findings demonstrate that while extreme partisan information may elicit stronger negative emotions from individuals at the ideological extremes, misinformation can generate negative emotions across the ideological spectrum. This finding indicates that negativity in response to misinformation has the potential to permeate the mainstream, meaning it could drive political polarisation on a broad scale. In contrast to many IS studies that draw on the phenomenon of echo chambers to explain how people at the extreme become more and more extreme (e.g., Del Vicario et al., 2016), this finding suggests that IS, particularly those in which misinformation is prevalent, might polarise the middle to the same extent as the extreme, creating an ever-widening gap between people with left-leaning or right-leaning political views. This finding points towards an interesting dynamic that extends prior research (e.g., Barfar, 2019; Horner et al., 2021; Vosoughi et al., 2018) and deserves further research attention.
Finally, we discuss several important future research opportunities. Based on our findings and existing literature, we distil key problem- or solution-oriented themes (see Appendix A). Problem-oriented themes call for more research into the problematic nature of the spread of misinformation and extreme partisan information, and political polarisation. Solution-oriented themes call for more research into the design of solutions that can tackle the problematic nature of the spread of misinformation and extreme partisan information. In laying out this agenda for future research, we intend to inspire scholars to explore misinformation and political polarisation further. Indeed, compared to the many advantages and opportunities that research usually emphasises in the context of technology-based communications (e.g., Gruner et al., 2014; Ray et al., 2014), there is significant scope to further examine and potentially avoid its harmful impacts on society.

| Practical implications
In this research, we go some way towards understanding how misinformation and extreme partisan information are engaged with on social media and potentially contribute to negativity and political polarisation. In so doing, we stimulate a practical discussion on how certain types of information can be more detrimental than beneficial.
On a platform level, our findings suggest that platform owners should pay more attention to information distributed by political elites and extreme partisan media outlets. Social media platforms could examine (if and) how their algorithms prioritise information from extreme partisan media outlets over other media outlets. Similarly, perhaps they could develop algorithms that prioritise information according to its veracity rather than maximising user engagement. After all, mitigating the spread of misinformation from political elites, information from extreme partisan media outlets, and the spread of negativity in user discussions could be crucial in breaking the cycle of political polarisation. We understand that people's engagement with information distributed on their platforms remains a top priority for most social media businesses (Donovan, 2021). Consequently, it may not always be in their best interest to mitigate the spread of misinformation, extreme partisan information, negativity and so forth. That said, we hope that our findings can assist organisations in developing new policies, regulations and strategies to facilitate the diffusion of more accurate and less extreme information.
On a user level, we suggest that the challenges that come with spreading misinformation and extreme partisan information could be tackled by highlighting a need for sociocultural change in consuming information on social media. Such changes could include greater education by introducing social media literacy and etiquette in schools.
Educational activities could also include targeting influential social media users (so-called mavens; see Harrigan et al., 2021) to educate their followers on engaging more critically with political information.Moreover, our findings suggest that, across stakeholder groups, more should be learned about spotting and responsibly dealing with misinformation.

| LIMITATIONS AND DIRECTIONS FOR FUTURE RESEARCH
Our research investigated to what extent misinformation and extreme partisan information are associated with greater rates of sharing and attitude polarisation than accurate information and non-extreme partisan information.
However, we did not test the specific information characteristics that lead misinformation and extreme partisan information to attract higher rates of sharing and polarise people to a greater extent than their converse. Beyond theorising, future research could explore the specific content characteristics that interact with falsehood and partisanship in influencing sharing. The focus could be on manipulating characteristics to the extent they prime System 1 and System 2. In so doing, future research could also empirically test the specific role of System 1 and System 2 in the sharing of misinformation and extreme partisan information.
Similarly, while we explored to what extent negative emotions are key in driving political polarisation on social media, we did not test whether people are negative because they support or oppose the information. Future research should dig deeper into the nature of the negative emotions that drive political polarisation on social media.
We also found that people at the ideological centre are just as likely to be negative and polarised in response to misinformation as people at the ideological extreme.Future research should investigate in greater detail how people from across political ideologies, particularly those in the middle who are not often in focus, are susceptible to the negativity around misinformation and, thus, integral to its pervasiveness in the mainstream of a range of IS, and indeed society.
In our field studies, we focused on the first 100 replies that were received in response to each tweet. Our findings revealed a wide spectrum of political ideologies among the participants, ranging from extremely liberal to extremely conservative. However, it is worth noting that different sampling strategies may produce different results. For example, tweets posted during night-time on the US west coast might attract responses primarily from local residents, who tend to lean more liberal. We recognise the possibility of temporal ordering effects and suggest that future scholars consider sampling their data with these effects in mind.
In addition to the above, we also list some future research avenues in Appendix A, Table A1. Clearly, there are still many untapped opportunities in this field, and we encourage readers to delve deeper and explore the potential for advancing our understanding of the topic.
• What is the role of social media-induced content recommendations in increasing political polarisation?
There is a need for research that captures the spread of misinformation and extreme partisan information and the effect on political polarisation through offline word of mouth.
• How does the effect of extreme partisan information on user behaviours and attitudes, such as sharing and political polarisation, differ between social media platforms?
• How do misinformation and extreme partisan information spread in an offline setting?
• Is misinformation or extreme partisan information more likely to spread through offline word of mouth than online word of mouth?
• Are people more likely to get polarised through online or offline word of mouth?
TABLE A1 (Continued) Theme | Examples | Research questions

Sources
• Our research confirms that misinformation from political elites and extreme partisan information from media outlets have similar associations with sharing, negative emotions and attitude polarisation. Misinformation and extreme partisan information from different sources (e.g., political parties, social media influencers, political groups and political advocates) may, however, have different effects (e.g., Harff et al., 2022). Consequently, another key theme is how these effects differ when different sources spread misinformation and extreme partisan information.
• How do the effects of misinformation or extreme partisan information on sharing, emotions and political polarisation differ depending on the information source?
• What is the role of source credibility in the spread of extreme partisan information or misinformation?
Solution-related themes

Corrections
• As part of our implications for practice, we suggested policies around the use of independent fact-checking websites or other entities that can help in fighting misinformation. Yet, it could be argued that labels which highlight that a post is false lead to reactance (i.e., an unpleasant motivational arousal in response to rules or regulations that threaten or eliminate specific behavioural freedoms, which leads to resistance) (e.g., Kozyreva et al., 2022).
• Therefore, whether labels from social media platforms or independent fact-checking websites are effective needs to be further explored.
• How effective are labels that highlight that a post is false, misleading, incites violence or similar, in decreasing belief in misinformation and political polarisation?
• Are people more likely to express negative emotions when posts are labelled as misinformation?
• Are people more likely to have polarised discussions in response to posts that are labelled as misinformation?

Sources
• As part of our implications for practice, we suggested an increase in efforts to combat misinformation with fact-checks and warnings from the social media platforms themselves, through integration of independent fact-checking websites such as PolitiFact, or through other entities that qualify for this task. However, even if those measures prove effective, there is an ongoing discussion on the power of social media platforms over the moderation of speech. Experts in the field suggest that platforms should self-regulate (Cusumano et al., 2021) and that social media platforms have too much power (Ghosh, 2021).
• Thus, there should be further research on who the public perceives as the most trustworthy source and which source is the most effective in determining the veracity of information.
• Are attempts to fight misinformation, for example, through labels, more effective depending on the source they come from?
• Who does the public perceive as most trustworthy in determining whether information is false, misleading or propaganda? How does this trustworthiness differ depending on political ideology?
5.2 | Study 2: How falsehood affects negative emotions and attitude polarisation
5.2.1 | Materials and procedure
An online experiment was conducted through Qualtrics to complement Study 1 in several ways. Study 1, which was based on the textual characteristics of the replies, only observed negative emotions in individuals who replied to the misinformation or accurate information. On the other hand, Study 2 asked participants, regardless of whether they would reply to the information, what their actual emotions were after being exposed to misinformation and accurate information. Additionally, while in Study 1 we could not collect insights on any polarisation in individuals' attitudes, Study 2 allowed us to ask individuals about their attitudes before and after being exposed to misinformation or accurate information.

5.4 | Study 4: How partisanship affects negative emotions and attitude polarisation
5.4.1 | Materials and procedure
This study followed the same procedure as Study 2, and the measures used (i.e., political ideology, emotions, attitude polarisation) were also identical. As in Study 2, the participants were recruited from MTurk, with 123 usable responses. The final sample (Mage = 38.3) included 77 males and 46 females. The only difference from Study 2 was that participants were exposed to partisan information (extreme vs. non-extreme) rather than false versus accurate information.

a Bias-corrected 95% confidence interval. b Statistically significant based on the 95% confidence interval.
TABLE 4 Results of moderated mediation analysis (Study 2). Note: The R-square value is relevant for the entire model, which is why we did not cite it at a specific location in the table; instead, it is an overall footnote. Bold indicates significance, as represented in the confidence interval (Lower CI and Upper CI).
• We relied on the individual fact-checking website PolitiFact to operationalise the falsehood of the information. Although some misinformation will need to be evaluated by humans and validated by many sources, there may be more obvious misinformation that can be more easily detected and classified as misinformation with the help of machine-learning algorithms (Aldwairi & Alwahedi, 2018; Asr & Taboada, 2019).
• Detection algorithms might be especially crucial in an era where social media users, empowered by generative artificial intelligence, possess the ability to generate a significant volume of false or extremely partisan content. More research is needed.
• How can machine learning models help in the detection of misinformation?
• How can machine learning models complement humans' evaluation of the falsehood of statements?
• What are the challenges and limitations in detecting artificial intelligence-generated content, and how can technological advancements be developed to enhance the detection and verification of such content?
• How does the effectiveness of detection algorithms differ based on the source of the content (e.g., media outlet, political elite or regular user) or the type of the content (e.g., image, video, text)?

Advertising
• In our implications for research section, we suggest some ways that could potentially reduce the belief in and spread of misinformation. The goal, however, must not only be to hinder the spread of misinformation and the resulting increase in political polarisation.

Manipulation check results (Study 2 and Study 4). Note: AI1 and NP5 were chosen for the paired t-tests because they have the highest mean score based on the manipulation check question. Abbreviations: AI, accurate information; MI, misinformation; NP, non-extreme partisan information; EP, extreme partisan information.
Results of moderated mediation analysis (Study 2), anger only.
Results of moderated mediation analysis (Study 4), anger only.
TABLE C4: a Bias-corrected 95% confidence interval. b Statistically significant based on the 95% confidence interval.
TABLE C5: a Bias-corrected 95% confidence interval. b Statistically significant based on the 95% confidence interval.