Reflection #10 – [10/02] [Dhruva Sahasrabudhe]

Paper-

[1] Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter – Kate Starbird

Summary-

This paper uses Twitter data to construct a network graph of various mainstream, alternative, government-controlled, left-leaning, and right-leaning media sources. It uses the structure of the graph to make certain qualitative observations about the alternative news ecosystem.
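To make the construction concrete, here is a minimal sketch of how such a domain network might be built, assuming the tweets have already been reduced to (user, cited domain) pairs — the paper's exact edge definition may differ:

```python
import itertools
from collections import defaultdict

import networkx as nx

# Hypothetical input: one (user, cited domain) pair per tweet with a URL.
citations = [
    ("u1", "infowars.com"), ("u1", "beforeitsnews.com"),
    ("u2", "beforeitsnews.com"), ("u2", "veteranstoday.com"),
    ("u3", "nytimes.com"),
]

# Group the domains each account has cited.
domains_by_user = defaultdict(set)
for user, domain in citations:
    domains_by_user[user].add(domain)

# Connect two domains whenever the same account cited both; the edge
# weight counts how many accounts the two domains share.
G = nx.Graph()
for domains in domains_by_user.values():
    for a, b in itertools.combinations(sorted(domains), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

print(G.edges(data=True))
```

Clusters and edge weights in such a graph would then support the kind of qualitative observations the paper makes.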

Reflection-

Some of my reflections are as follows:

This paper uses a very interesting phrase, “democratization of news production”; broadly, it deals with the fundamental struggle between power and freedom, and the drawbacks of having too much of either. In this case, an increasing democratization of news has weakened the reliability of the very information we receive.

It would be interesting to see what users who engage with alternative media narratives follow outside that content, by analyzing their tweets in general, outside the context of gun violence and mass shootings.

I found the case of InfoWars interesting – it was only connected to one node in the network graph. What were the reasons for that? Maybe InfoWars did not release much content about mass shootings, or maybe users who follow it do not refer to other sources very often, or maybe it just produces content which no one else is producing, and thus sort of stands alone?

Only 1372 users tweeted about more than one alternative media event over the period studied. Maybe another, longer-duration study could be conducted, since conspiracy-worthy events happen rarely, and 10 months may not be enough to really find the network structure.

It was very interesting that this paper found evidence of bots propagating fake news, and that it later claims the sources were largely pro-Russia, which might give some insight into Russian tampering in the 2016 election. The paper also mentions that the sources were not predominantly left-leaning or right-leaning; what they had in common was an anti-globalist bias, insinuating that Western governments are essentially controlled by powerful external interests, painting the West in a bad light.

The graph provides a sort of spatial link, but it would be interesting to also have a temporal link between source domains, to see who the originators of the information are and how information propagates over time in these ecosystems. The paper also alludes to this in the conclusion.

The graph is dominated by aqua nodes, which also hints at selective exposure being prevalent here too, providing further evidence for the topic of discussion of last week’s papers: users who have a tendency to believe in conspiracies will interact with alternative media more than they will interact with mainstream/other types of media.

It is very interesting that 66/80 alternative media sites cited mainstream media at some point, while not a single mainstream media site cited an alternative media site. It hints at the psyche of alternative media, painting a sort of “underdog” picture for these sources, where they are fighting against an indifferent “big media” machine, which I feel is quite appealing to people who are prone to believing in conspiracies.

The paper states that belief in conspiracies can be exacerbated because someone thinks they have a varied information diet, but they actually have a very one-sided diet. This reminds me of the concept of the Overton Window, which is the name given to the set of ideas which can be publicly discussed at any given time. Vox had a very interesting video on how the Overton Window, after decades of shifting leftwards, is now beginning to shift to the right. This also has an effect on our information diet, where what we feel might be politically neutral, might actually be politically biased, because the public discourse itself is biased. 


Reflection 10 – [10/02] – [Lindah Kotut]

  • Kate Starbird. “Examining the Alternative Media Ecosystem Through the Production of Alternative Narratives of Mass Shooting Events on Twitter” (2017)
  • Mattia Samory and Tanushree Mitra. “Conspiracies Online: User discussions in a Conspiracy Community Following Dramatic Events” (2018)

Brief: The two papers extend the discussion from filter bubbles and echo chambers that result from black-box platforms to crippled epistemology, or “intellectual isolation”, which stems from a lack of source diversity driven mostly by user actions.

Lens: The two papers apply different lenses in looking at alternative theories surrounding dramatic events, Starbird vs Samory: journalist/alternative media vs user lens, Twitter vs Reddit. While Starbird wanted an overall picture of the alternative media ecosystem — who the big hitters were — and so followed the network approach, Samory focused on who the users are, how their behavior typifies them as conspiracy theorists, and how this behavior develops. Starbird’s work is also (mostly) qualitative, while Samory follows a quantitative analysis.

Countering Conspiracy
There is a thin line between conspiracy/alternative narrative and weaponized disinformation. The latter is touched on in Starbird’s paper in discussing the #PizzaGate narrative and the role that Russian disinformation played in distributing misinformation. It is only when the line has been crossed from free speech to endangering lives that the law machinery comes into play. Samory recommends intervention before that line is breached — a starting point based on the quantitative analysis of the types of users in the r/conspiracy subcommunity.

This line between free speech and weaponized misinformation allows us to apply a retrospective lens from which to view both works, but especially the recommendations made in Samory’s paper. We use three examples that preceded and followed both papers:

  1. The fall of Milo Yiannopoulos.
  2. The fall of Alex Jones.
  3. Grassroots campaigns.

The three cases allow us to probe for reasons: why do platforms choose to spread misinformation? State actors aside, the other reason is monetary (setting aside the argument that it is easy to churn out massive amounts of content if journalistic integrity is not a concern, as Starbird notes). There is money in conflict. Conspiracy is a motherlode of conflict.

Removing the monetary incentive seems to be the best way to counter the spread/flourishing of these platforms. How to do this uniformly and effectively requires the cooperation of the platforms, and is subject, ironically, to the rise of the shadow-banning conspiracy (Twitter/Facebook being blamed for secretly silencing mostly conservative voices).

Why seek veteranship?
I would propose another vector of measurement to best counter theories from veterans: why do veterans stay and continually post? There is no (overt) monetary compensation that they gain. And even if they are a front for an alternative news source, that does not square with the long game of continually contributing “quality” content to the community — which runs counter to the type and volume of “news” from the alternative sources. It is easy to profile joiners and converts, as they are mostly swayed by dramatic events, but not veterans. Do they chase the sense of community with other like-minded people, or the leadership — the clout that the longevity of their posting brings to bear? These innate, yet almost unmeasurable, goals would make it difficult to ‘dethrone’ (for lack of a better term) these veterans from being arbiters of conspiracy theories. The (lack of) turnover of the community’s moderators would also help answer these questions.


Reflection #10 – [10/02] – Subhash Holla H S

“When you are young, you look at television and think, there’s a conspiracy. The networks have conspired to dumb us down. But when you get a little older, you realize that’s not true. The networks are in business to give people exactly what they want.” – Steve Jobs

The paper is a good precursor to the research project that Shruthi and I are interested in. I would like to analyze the paper in terms of a few key points, trying to capture what the paper mentions while sharing my own insights and possible explanations for them.

Conspiracy theory:

The paper alludes to the concept that people online, and websites in general, cater to “conspiracy theories”, as these have been seen to draw people’s attention. But what are conspiracy theories? The paper categorizes them under “alternative narratives”, not giving a formal definition of what is considered a “conspiracy theory”. I will define one as “any propagation of misinformation”, which I believe is broadly the meaning the paper intends as well. A couple of interesting points the paper makes that I feel are worth addressing:

  • “Once someone believes in a conspiracy theory of an event, it is extremely hard to dissuade them from this belief.” Here I would like to add that some opinions and beliefs are ingrained in a person, and unless they are given a logical explanation over a long period of time, reinforcing the notion that they might be wrong, it will be difficult to get them to change their stance. This effort is a necessary one, and I feel painting a picture of where the information is coming from will help weaken the conviction that they are right. If we do not put in this effort, then “belief in one conspiracy theory correlates with an increased likelihood that an individual will believe in another” will turn out to be true as well.
  • The definition of conspiracy theorists as part of an ‘alternative to “corporate-controlled” media’ is one I do not agree with. It raises a philosophical debate: where do we draw the line? Should we draw one at all, or should we look for methods that do not try to draw a line?

Bias:

“first author is a left-leaning individual who receives her news primarily through mainstream sources and who considers that alternative narratives regarding these mass shooting events to be false” was, to me, a revelation in the field of human behavioral modeling. Being a part of the Human Factors community and having interacted with many Human Factors professionals, this is the first time I have seen an author explicitly mention an inherent bias in an effort to eliminate it. Acknowledgment is the first step to elimination. I think the elimination of bias should follow a procedure like the time-tested 12-step model we follow in addiction recovery. That could be an interesting study, as a model like that could shed some light on my hypothesis that “humans are addicted to being biased”.

Another point is the use of a confusion matrix based on signal detection theory. We could use this to build a conservative or liberal model of a human, and then use this generalized model to help design tools to foil the propagation of misinformation and “alternative narratives”.
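As a rough illustration of that framing (my own sketch, not the paper’s): in signal detection terms, a reader’s 2×2 confusion matrix for judging stories as fake or real yields a sensitivity d′ and a criterion c, whose sign marks a conservative or liberal response bias:

```python
from scipy.stats import norm

def signal_detection(hits, misses, false_alarms, correct_rejections):
    """d' (sensitivity) and c (response bias) from a 2x2 confusion matrix."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    # c > 0: conservative responder; c < 0: liberal responder
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical reader: flags 30/50 fake stories, wrongly flags 10/50 real ones.
print(signal_detection(hits=30, misses=20, false_alarms=10, correct_rejections=40))
```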

General Reflection:

In general, I found a couple more observations that resonated with the previous readings of this semester.

The overall referral to misinformation propagation, coupled with the video lecture where the author presents an example of deep fakes for misinformation propagation, sent me back to a question I have been asking myself of late. All of the research we have analyzed is on text data. What if the same were video data? Especially in this case, as we do get some, if not all, of our information from YouTube and other such video platforms. Will this research translate directly to that case? Is there existing and/or ongoing research on the same? Is there a need for research on it?

Theme convergence was another concept that really interested me, as I would like to understand how diverse domains converge to common themes. This would help build better group behavioral models and avoid the Simpson’s paradox that researchers fall into, especially when dealing with human data.

PAPER:

K. Starbird, “Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter,” ICWSM, pp. 230–239, 2017.


Reflection 10 – [10/02] – [Vibhav Nanda]

Readings:

[1] Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter

Summary:

In this paper the author talks about the ecosystem around fake news/alternative news/conspiracy theories. In order to study such an ecosystem, the author took an interpretivist approach — “blending qualitative, quantitative and visual methods to identify themes and patterns in the data.” The data was collected from the Twitter Streaming API, tracking words that could indicate a shooting — such as gunmen, shooter, shooting, etc. — over a 10-month period, resulting in 58M tweets. An extremely high tweet count, for a single topic, was the result of three high-profile mass shootings — Orlando (FL), Munich (Germany), and Burlington (WA). To extract tweets related to alternative narratives, the author used keywords like false flag, crisis actor, hoax, etc., resulting in 99,474 tweets. After getting the tweets, the author carried out a great deal of classification of the accounts and the domains that the links point to. During their analysis, the author created a network graph to fully understand the alternative news ecosystem. Interestingly enough, the author found thematic similarities between conspiracy theories and alternative narratives of real-life events.
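As a rough illustration of this filtering pipeline (my own reconstruction from the description above; the keyword lists match those mentioned, but everything else is illustrative):

```python
import re

# Keywords reported for the two collection stages.
SHOOTING_TERMS = {"gunman", "gunmen", "shooter", "shooting"}
ALT_NARRATIVE_TERMS = {"false flag", "crisis actor", "hoax"}

URL_RE = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def is_alt_narrative(tweet_text: str) -> bool:
    """Second-stage filter: does the tweet use alternative-narrative terms?"""
    text = tweet_text.lower()
    return any(term in text for term in ALT_NARRATIVE_TERMS)

def cited_domains(tweet_text: str) -> list:
    """Extract the domains of any URLs cited in a tweet."""
    return URL_RE.findall(tweet_text)

tweet = "This was a false flag! http://beforeitsnews.com/story"
if is_alt_narrative(tweet):
    print(cited_domains(tweet))  # ['beforeitsnews.com']
```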

Reflection/Questions:

This was an extremely interesting read for me, as conspiracy theories are my guilty pleasure, but only for entertainment reasons. 58M tweets were collected relating to gun shootings; however, only 99,474 were identified as being related to alternative news. Seeing how only an extremely small percentage (around 0.17%) were related to conspiracy theories, I would say this is not an epidemic YET. While reading this paper, I started thinking about possible solutions or tools to raise awareness among readers, without banning/blacklisting websites/pages/users who indulge in this sort of activity. I came up with the following system.

New Design:

Background: I would say there is a difference between information and news. Information includes both opinions and facts and may or may not be verified; news (in its typical sense) is only facts and is verified from multiple sources. Stemming from this difference, citizens should be allowed to freely disseminate all the information they want; however, they should not be allowed to disseminate any sort of news. Only authenticated citizens should be allowed to disseminate news; we can call them e-journalists. The same goes for websites and pages on Facebook (and other social media sites). The system I am going to outline only focuses on websites.

Assumption: User is a male, and the platform that is under discussion is Twitter (can be scaled to other platforms also).

Explanation of the system: The system has multiple facets to it.

A) Each time a user cites an authenticated website as a news source in his tweet, he gets some reward points (for being a responsible citizen). Each time a user cites an unauthenticated website as a news source, he gets penalized. If the user ends up having 0 points, he will not be allowed to cite any more unauthenticated websites unless he gains some points by citing an authenticated source first. Let’s call this point system “Karma points” (a minimal sketch of this logic follows the list).

B) When the user posts a link to an unauthenticated website as a news source, he will get a warning pop-up window when he presses the tweet button. The pop-up window will let him know that the news source is not authenticated, could include some more discouraging language, and will then have a confirm button — which will allow him to cite that website anyway. When that tweet is finally posted, it will have a warning next to it. The warning will let other users know that this specific tweet has cited an unauthenticated website as its news source. This will discourage the user from citing such websites in the first place.

C) When a different user (the reader) clicks on the unauthenticated website, he will also get a pop-up warning saying that he is “about to enter a website that is not identified as an authentic news source.” He would have to click the confirm button to move forward. The reader’s Karma points will remain unaffected.
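To make rule (A) concrete, here is a minimal sketch of the Karma point bookkeeping, with an assumed whitelist and assumed reward/penalty values (all names and numbers are hypothetical):

```python
AUTHENTICATED = {"reuters.com", "apnews.com"}  # hypothetical whitelist
REWARD, PENALTY = 1, 2                         # assumed point values

class User:
    def __init__(self):
        self.karma = 0

    def cite(self, domain: str) -> bool:
        """Apply rule (A): reward authenticated citations, penalize the rest.
        Returns False when the citation is blocked (karma exhausted)."""
        if domain in AUTHENTICATED:
            self.karma += REWARD
            return True
        if self.karma <= 0:
            return False  # must earn points via authenticated sources first
        self.karma = max(0, self.karma - PENALTY)
        return True

u = User()
print(u.cite("randomblog.net"))  # False: no karma yet
print(u.cite("reuters.com"))     # True: karma rises to 1
print(u.cite("randomblog.net"))  # True, but penalized: karma back to 0
```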


Effects of the design: I believe such a design will caution people before they enter an unauthenticated website cited as a news source. It will also dissuade people from sharing websites that tend to carry fake news/conspiracy theories. The dissuasion will come first as a caution (before they post) and then as shame (when their post is marked with a warning label and their Karma points are reduced).


Reflection 10 – [10/02] – [Prerna Juneja]

Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter

Reflection:

In this paper, the author studies the “fake news” phenomenon by generating and studying domain network graphs and qualitatively analysing tweets collected over a ten-month period. She explains the political leanings of the alternative news sources and describes how these websites propagate and shape alternative narratives.

Examples

  • “Global warming is a hoax created by the UN and politically funded scientists/environmentalists, aided by Al Gore and many others, to put cap and trade in place for monetary gain.”
  • “Osama bin Laden is not dead / has been dead for long / is an invention of the American government.”
  • “That ample evidence of alien life and civilization exists in our solar system, but is covered up by NASA.”

These are some popular conspiracy theories. Several others can be found on websites dedicated to listing and categorizing these theories.

What is the need to study conspiracy theories?

Research has shown that if you start believing in one conspiracy theory, there is a high chance that you might start believing in another. “Individuals drawn to these sites out of a concern with the safety of vaccines, for example, may come out with a belief in a Clinton-backed paedophilia ring”, stated Kate Starbird in one of her published articles.

Why do people believe in conspiracy theories and false narratives? How to change their mind?

The author of an article replicated part of an existing study. He conducted a Twitter poll asking users if the sequence 0 0 1 1 0 0 1 0 0 1 0 0 1 1 has any pattern. 56% agreed that there is indeed a pattern, even though the sequence was generated by just randomly flipping a coin.

The author cites the paper “Connecting the dots: how personal need for structure produces false consumer pattern perceptions” [I couldn’t find the pdf version, I just read the abstract] and states that

“One of the reasons why conspiracy theories spring up with such regularity is due to our desire to impose structure on the world and incredible ability to recognise patterns” and

“facts and rational arguments really aren’t very good at altering people’s beliefs”

The same article discusses several approaches and cites several studies on how to convey authentic information and make people change their minds:

  • Use stories: People engage with narratives much more strongly than with argumentative or descriptive dialogues.
  • Don’t mention the myths while making your point since it has been seen that myths are better remembered than facts.
  • While debunking the fake theories, offer explanations that resonate with people’s pre-existing beliefs. For example, conservative climate-change deniers are much more likely to shift their views if they are also presented with the pro-environment business opportunities.

Use of Bots

One of the things I noticed in the paper is the use of bots to spread conspiracy theories and misinformation. It seems that during major events, bot activity increases manifold. I found two studies analysing bot activity (not limited to the spread of conspiracy theories): “NEWS BOTS: Automating news and information dissemination on Twitter” [1] and “Broker Bots: Analysing automated activity during High Impact Events on Twitter” [2].

More diverse events

The data in the study was limited to shooting events. The analysis could be extended to other high-impact events like natural disasters, elections, and policy changes, to find out the similarities, if any, in the sources spreading the misinformation and in the misinformation itself.

Influence of multiple online communities

Certain subreddits (the_donald) and 4chan boards (/pol/) have been accused of generating and disseminating alternative narratives and conspiracy theories. What is the role of these communities in spreading the rumours? Who participates in these discussions? And how are users influenced by these communities?

Identify conspiracy theories and fake news

How do rumors originate? How do they propagate in an ecosystem? What is the lifespan of these rumors?

I feel an important step to identify conspiracy theories is to study how the language and structure of articles coming from alternative sources differ from those of mainstream ones — not just the articles but also the carriers, i.e., the posts/tweets sharing them. How is the story in these articles woven and supported? We saw an example in the paper where the Sandy Hook shootings (2012) were referenced in the Orlando shooting (2016) as evidence to support the alternative narrative. What other sources are used to substantiate their claims? The authors of the paper “How People Weave Online Information Into Pseudoknowledge” find that people draw from a wealth of information sources to substantiate and enrich their false narrative, including mainstream media, scholarly work, popular fiction, and other false narratives.
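As a starting point for such a language study, one could fit a simple bag-of-words classifier and inspect which n-grams separate the two source types (a minimal sketch on toy data; all texts and labels are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = alternative-source article, 0 = mainstream article.
texts = [
    "the official story is a false flag staged by crisis actors",
    "eyewitnesses and police reports confirm the timeline of events",
    "what the mainstream media won't tell you about the cover-up",
    "officials released a statement following the investigation",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The most positively weighted n-grams hint at "alternative" style markers.
vec, clf = model.named_steps.values()
terms = vec.get_feature_names_out()
top = sorted(zip(clf.coef_[0], terms), reverse=True)[:5]
print(top)
```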

Articles/videos related to conspiracy theories among top search results

A simple search for “vaccination” on YouTube gives at least 3–4 anti-vaccination videos in the top 30 results, where people share theories/reasons about why vaccines are not safe (link1, link2). More conspiratorial content in the top results will expose more people to fake stories, which might end up influencing them.


Reflection 10 – [10/02] – [Shruti Phadke]

Paper 1: Starbird, Kate. “Examining the Alternative Media Ecosystem Through the Production of Alternative Narratives of Mass Shooting Events on Twitter.” ICWSM. 2017.


Starbird’s paper qualitatively analyzes sources of news related to shooting events on Twitter, based on interpretive graph analysis. She analyzes how alternative sources and mainstream sources are referenced in different contexts when it comes to conspiracy theories — that is, either in support of or to dispute the claims. The paper also dwells on how such citing of alternative news sources can foil politically motivated conspiratorial thinking.

One common theme that is present throughout the paper is how all of the events are blamed on one central “untouchable” entity such as the U.S. government or a secret organization. This comes from a very common conspiratorial trait according to which “everything needs to have a reason”. It is found that conspiracists try to make sense of the event and its aftermath by “rationalizing” it [1]. Further, such theories go on to give increased importance to certain agents such as the FBI, the government, the Illuminati, etc. The paper mentions that 44 out of 80 sources were promoting a political agenda. It would be interesting to know which agents such sources frequently target and how they tie those agents to multiple events.

The paper makes another excellent point: “exposure to online media correlates with distrust of mainstream media”. Considering that the mainstream media corrects conspiratorial claims or presents the neutral narrative, it would be interesting to do a contrast study in which a network of users “bashing” mainstream media can be found. One important thing to note here is that with text analysis methods alone, it is difficult to understand the context in which a source of information is cited. This reflects in the ways mentioned in the paper by which alternative media promotes alternative narratives: they cite alternative sources to support the alternative narrative, or they cite mainstream sources in a confrontational way. This is where quantitative approaches are tricky to use, because clearly, just the presence of a link in a post doesn’t tell much about the argument made about it. Similarly, hand-coding techniques are also limiting, because analyzing the context, source, and narrative takes a long time and results in a high-quality but smaller dataset. One possible way to automate this process is to perform “entity sentiment analysis”, which combines entity analysis and sentiment analysis and attempts to determine the sentiment (positive or negative) expressed about entities within the text. Treating the cited sources as “proxy” entities, it may be possible to find out whether they are being talked about in a positive or negative light.
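A rough sketch of this idea, using VADER sentiment as a stand-in for a full entity-sentiment system and treating the cited domain as the proxy entity (the post text and domain are made up):

```python
import re
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def source_sentiment(post: str, domain: str) -> float:
    """Average sentiment of the sentences that mention a cited domain,
    treating the domain as a proxy entity for the source."""
    sentences = re.split(r"(?<=[.!?])\s+", post)
    scores = [analyzer.polarity_scores(s)["compound"]
              for s in sentences if domain in s]
    return sum(scores) / len(scores) if scores else 0.0

post = ("Even cnn.com admits the timeline is wrong. "
        "Don't trust the liars at cnn.com, they cover everything up!")
print(source_sentiment(post, "cnn.com"))  # likely negative: cited confrontationally
```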

The paper also mentions that believing in one conspiracy theory makes a person more likely to believe another. This, along with the cluster of domains supporting alternative narratives in Figure 2, can form a basis for quantitatively analyzing how communities unify [2].

Lastly, as a further research point, it is also interesting to analyze when a particular alternative narrative becomes popular. Why do some theories take hold while many more do not? Is it because of informational pressure or because of the particular event? One starting point for this kind of analysis is [3], which says that when an external event threatens to influence users directly, they explore content outside their filter bubble. This will require retrospective analysis of posting behavior before and after a specific event, considering users who are in geographical, racial, or other ideological proximity to the group affected by that event.


[1] Sunstein, C. R., & Vermeule, A. (2009). Conspiracy Theories: Causes and Cures. Journal of Political Philosophy, 17(2), 202–227. https://doi.org/10.1111/j.1467-9760.2008.00325.x

[2] Science vs Conspiracy: Collective Narratives in the Age of Misinformation. PLOS ONE, 10(2), e0118093. https://doi.org/10.1371/journal.pone.0118093

[3] Koutra, D., Bennett, P., & Horvitz, E. (2015). Events and Controversies: Influences of a Shocking News Event on Information Seeking. TAIA Workshop in SIGIR, 0–3. https://doi.org/10.1145/2736277.2741099


Reflection 10 – [10/02] – [Subil Abraham]

Starbird’s paper is an interesting examination of the connections between the many “alternative news” domains, with mass shooter conspiracies being the theme connecting them all.

The paper mentions that the author themselves is a politically left-leaning individual and points out that this may bias their perceptions when writing this paper. The author mentioning their bias made me think about my own possible left-leaning bias when consuming the news. When I see some news from an “alt-left” site that someone on the right would call a conspiracy theory, am I taking that news as the truth because it probably agrees with my perceptions — perceptions which may have been sculpted by years of consuming left-leaning news? How would I, as an individual on the left, be able to conduct a study on left-leaning alternative narratives without my bias skewing the conclusions? Scientists are humans, and you cannot eliminate bias from a human entirely. You could bring on people from the right as equal partners when conducting the study and then keep each other in check to try and cancel out each other’s bias. How well you would be able to work together and produce good, solid research, considering that this is essentially an adversarial relationship, I do not know.

It’s interesting that Infowars has such a large share of the tweets but only has one edge connecting to it. Given how prominent Infowars is, one would think that it would have a lot more edges, i.e. users that tweet out other alt-right websites would tweet out Infowars too. But it seems like the bulk of the users just tweet out Infowars and nothing else. This means that the audience of Infowars, for the most part, does not overlap with the audience of other alt-right news sites. Now, why would that be? Is it because Infowars’ audience is satisfied with the news they get from there and don’t go anywhere else? Is it because the audience of other alt-right sites think Infowars is unreliable, or maybe too nutty? Who knows. A larger examination of the reading habits of Infowars’ audience would be interesting. Since this study focuses only on mass shooter conspiracies, it would be interesting to know if and how widely Infowars’ audience reads when it comes to the wider field of topics the alt-right websites talk about.

The conclusions from this paper tie really well into the theme of Selective Exposure we talked about in the last two reflections. People see different sources all spouting the same thing in different ways, and repeated exposure reinforces their opinions. You only need to plant a seed that something is maybe true and then barrage them with sources that seemingly confirm that truth; confirmation bias will take care of the rest. It is especially problematic when it comes to exposure to alternative narratives, because the extreme nature of the opinions that form will be particularly damaging. This is how the anti-vaxxer movement grew, and now we have the problem that diseases like measles are coming back because of the loss of herd immunity, thanks to anti-vaxxers not vaccinating their children [1]. Trying to suppress alternative narratives is a lost cause, as banning one account or website will just lead to the creation of two others. How can we delegitimize false alternative narratives in people who are deep in the throes of their selective exposure? Simply pointing to the truth clearly doesn’t work; otherwise it would be easy and we wouldn’t be having this problem. People need to be deprogrammed by replacing their information diet and enforcing this change for a long time. This is basically how propaganda works, and it is fairly effective. Conducting a study of how long it takes to change someone’s entrenched mindset from one position to the opposite through information consumption alone (controlling for things like personal life events) would be a good first step to understand how we can change people (of course, we run into ethical questions of whether we should be changing people en masse at all, but that is a can of worms I don’t want to open just yet).


Video Reflection #9 – [09/27] – [Nitin Nair]

How we humans select information depends on a few known factors which make the information selection process biased. This is a well-established phenomenon. Given that such cognitive biases exist, and that we live in a democratic system in an age of information overload, how do they impact social conversation and debate?
As mentioned in the talk, this selective exposure or judgment can be used for good, for example to increase voter turnout. But this gets me thinking: is this nudging sustainable? Relating to the discussions after reading reflection #1, about different kinds of signals, is this nudge an assessment signal or a conventional signal? One could certainly imagine users exposed to a barrage of news items that bolster their positions becoming desensitized, resulting in neglect of these cues.
The portion of the talk where the speaker discusses the behaviour of people when exposed to counter-attitudinal positions is an interesting one. This portion, coupled with one of the project ideas I proposed, got me thinking about a particular news feed design.

Given that we solve the issue of mapping the positions of different news sources on the political spectrum, we could, in order to expose users to sources from outside their spectrum, design a slider whose position decides the news sources that populate the news feed. The lightning symbol next to each article allows one to shift to a feed populated by articles on the same topic. The topic tags, found through keyword extraction (Rose et al. (2010)), combined with the article’s time of publishing, could help us suggest news articles discussing the same issue from a source with a different political leaning.
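A small sketch of how those topic tags could be produced with RAKE, the keyword-extraction method of Rose et al. (2010), here via the rake-nltk package; matching articles by tag overlap and publish time is my own simplification:

```python
from datetime import timedelta
from rake_nltk import Rake  # implements RAKE (Rose et al., 2010);
                            # needs nltk's 'stopwords' and 'punkt' data

rake = Rake()

def topic_tags(article_text: str, top_n: int = 5) -> set:
    """Top-ranked RAKE keyphrases, used as the article's topic tags."""
    rake.extract_keywords_from_text(article_text)
    return set(rake.get_ranked_phrases()[:top_n])

def same_story(a: dict, b: dict, max_gap=timedelta(hours=48)) -> bool:
    """Heuristic pairing: overlapping tags plus close publish times."""
    return bool(a["tags"] & b["tags"]) and abs(a["time"] - b["time"]) < max_gap
```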
Given such a design, we could identify trends in how and when users enter the sphere of counter-attitudinal positions, which is an idea the speaker mentions in the video.
Do people linger more on comments which go against their beliefs, or on those which suit their beliefs? One could run experiments on consenting users to see which comments they spend more time reading, then pick and analyze the posts which top the list, accounting for the length of the post. My hypothesis is that comments which go against one’s beliefs would warrant more time, as one would take time to comprehend the position first, compare and contrast it with one’s own belief system, and then take action, which can be replying or reacting to the comment. If using temporal information is useful, it could pave the way to a potential method through which one can find “top comments”, uncivil comments (more time taken), and explicit content (less time taken). During the extraction of top comments, one has to have a human in the loop, along with accounting for the reader’s own position on the political spectrum.
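One quick way to test this hypothesis, assuming we have dwell-time logs from such an experiment (all field names and numbers below are made up), is to normalize reading time by comment length and compare the two groups:

```python
from scipy.stats import ttest_ind

# Hypothetical logs: dwell seconds, comment length, and whether the comment
# matched the reader's stated political leaning.
logs = [
    {"dwell": 42.0, "words": 120, "congruent": False},
    {"dwell": 15.0, "words": 110, "congruent": True},
    {"dwell": 38.0, "words": 90,  "congruent": False},
    {"dwell": 12.0, "words": 100, "congruent": True},
]

def per_word(entry):
    # Control for length by normalizing dwell time per word.
    return entry["dwell"] / entry["words"]

counter = [per_word(e) for e in logs if not e["congruent"]]
congruent = [per_word(e) for e in logs if e["congruent"]]

# H1: counter-attitudinal comments get more reading time per word.
t, p = ttest_ind(counter, congruent, equal_var=False)
print(f"t={t:.2f}, p={p:.3f}")
```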
The speaker’s discussion of “priming” users using the stereotype content model is extremely fascinating (Fiske et al. (2018)). Given that priming has a significant impact on the way users react to certain information, could it be possible to identify “priming” in news articles or news videos?
One could build an automated tool to detect and identify the kind of priming, be it “like”, “respect”, or other orthogonal primes. The orthogonal prime could be “recommend”, the one the speaker points out in her research (Stroud et al. (2017)). Given that such an automated tool exists, it would be interesting to use it on a large number of sources to identify these nudges.


References

Susan T Fiske, Amy JC Cuddy, Peter Glick, and Jun Xu. 2018. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition (2002). In Social Cognition. Routledge, 171–222.

Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory (2010), 1–20.

Natalie Jomini Stroud, Ashley Muddiman, and Joshua M Scacco. 2017. Like, recommend, or respect? Altering political behavior in news comment sections. New Media & Society 19, 11 (2017), 1727–1743.


Reflection #9 – [09/27] – [Vibhav Nanda]

Video: Partisanship and the search for engaging news

Summary: In this blog I am proposing a system which will nudge readers towards the other side — based on their current emotional and mental state.

Introduction: Natalie Stroud’s video inspired me to come up with a system which can encourage bipartisanship and burst the echo chamber effect. From the video and the previous papers that I have read, I have gathered that we need to work on and worry about people with extreme political standpoints (extreme left-leaning and extreme right-leaning); people with a more balanced standpoint already read news from disparate sources — their balance is what makes them supporters of centrist politics. Extreme political takes can usually be traced back to belief systems, and to nudge people out of their belief system is risky — sometimes leading to resentment towards others’ belief systems. However, based on an individual’s mental and emotional state, they are more or less likely to try to understand the other side of the story. I am proposing a system which will track users’ behavior online, understand how they usually behave in a given circumstance, and, if their behavior deviates from the usual, nudge them towards the other side.

Assumption: For the sake of simplicity and brevity, I am going to make the following assumptions:

  1. The system only tracks behavior in the comment section (inspiration drawn by the video)
  2. User is a male liberal and aggressive towards anyone opposing his opinion (on an online platform)

Explanation of the system through an example: Now let’s say our hypothetical user gets his daily dose of news from articles shared on Facebook (an extremely realistic situation), and because of all the filters, the news he gets is usually published by CNN and MSNBC. He reads the news, scrolls through the comments section, and responds aggressively to users whose comments are in opposition to either the article or the topic it is about (let’s say gay rights). Aggression is the user’s usual response to the top 5 opposing comments — this is our user’s online persona and has been recognized and modeled by our system. Now one day our user reads an article about gay rights and either doesn’t respond so aggressively towards opposing comments or doesn’t respond at all — an aberration that would be detected by our system and flagged as “open”, meaning this user is open today to opposing ideas. Taking advantage of this open-mindedness, our system will subtly nudge the user towards a gay rights article written by Fox News.
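At its core, the system needs a baseline-deviation check. A minimal sketch, assuming some upstream classifier already scores each reply’s aggression between 0 and 1 (the threshold and scores here are arbitrary):

```python
from statistics import mean, stdev

def is_open_today(history: list, today: list, z_threshold: float = -1.5) -> bool:
    """Flag a user as 'open' when today's mean aggression score sits well
    below their historical baseline (a simple z-score test)."""
    if len(history) < 2 or not today:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return mean(today) < baseline
    z = (mean(today) - baseline) / spread
    return z < z_threshold

# Usually aggressive (~0.8), but today's replies are mild (~0.2) -> nudge.
print(is_open_today([0.8, 0.75, 0.9, 0.85], [0.2, 0.25]))  # True
```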


Novelty: The system leverages changes in moods and emotions to nudge readers towards the other side, instead of a constant nudge. A constant nudge can lead to ignorance of the nudge’s presence, frustrate the user into switching off the feature, or, if that is not possible, push the user to a different platform. A timely nudge is important for it to succeed in prompting the user to be empathetic towards the other side and to engage in a more civil and logical conversation.


Video Reflection #9 – [09/27] – [Karim Youssef]

Partisanship is an inherently social phenomenon where people tend to form groups around different ideologies and their representatives. However, if partisans get isolated inside their groups without constructive exposure to the ideas and opinions of other groups, society may start to deviate from being a healthy and democratic one. Talia Stroud works towards promoting constructive engagement between partisans of different groups and mitigating the negative effects of partisanship in online news media.

The first study presented by Stroud leverages the stereotype content model to promote the idea of distinguishing likeness and respect. Results of this study show a significant effect of changing the names of the reaction buttons on comments. Some questions arise here, such as: what is the long-term effect of such a solution in terms of actually refining any negative partisan behavior? The results of the study partially answer this question by showing that people actually “respect” opposing ideas. But of all the people who pass by an opposing comment, how many are actually willing to positively engage with it? How do we encourage people to engage with and respect an opposing comment that deserves this respect?

From my perspective, I would suggest answering these questions by the following:

  1. extending the study within the context of selective exposure and selective judgment by studying the percentage of people who stop by opposing comments, read them, and give them a deserved respect.
  2. extending the design to include feedback to the user. For example, including a healthy engagement score that increases when a user reads and respects an opposing opinion.

The second study presented in the video analyzes the effect of incivility in online news media comments by analyzing the triggers of reward and punishment for comments. In this regard, the study compares three behavioral acts: profanity, incivility, and partisanship. It is no surprise that profanity is the act rejected by both commenters and moderators alike. However, it is a fact that conversations with incivility sometimes attract views and even engagement. Many in my generation grew up watching TV shows with political opponents fighting on air. These types of media always profess the good cause of promoting fruitful discussions between opposing mindsets; however, as she mentioned, there are business incentives behind promoting some controversial discussions.

In a perfect world, we may wish that fruitful interactions between partisans of different groups become as encouraging for engagement as those situations when partisans engage in fighter mode to defend their ideologies. The question is how to encourage news organizations to define clear thresholds for the amount of acceptable incivility in discussions about hot issues. From another perspective, is it feasible to do so? Or should researchers focus on promoting desirable engagement among users rather than moving towards stricter moderation of online comments?

From my perspective, the current model for news organizations is the best we can do in terms of having a set of rules and (human and/or automated) moderators enforcing these rules to some extent. The changes that we apply could be to the user interface design of online news organizations to promote healthier engagement (e.g., the first study with my suggestions added to it), integrated with some of the ideas surveyed in the work Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure. Another important step could be auditing (and maybe redesigning) recommendation algorithms to ensure that they do not contribute to this so-called filter bubble effect.

