Experimental Design for Evaluating Experts’ Reflection on Image Geo-location with GroundTruth

Need:

For a journalist, it is essential to verify news from several sources in as little time as possible. In particular, verifying images and videos from social media is a frequent task for modern journalists [1]. The verification process itself is tedious and time-consuming, and journalists often have very little time to verify breaking news, which must be published as soon as possible. Oftentimes, experts have to manually search across a map to find a potential match. GroundTruth [2] is a crowdsourcing-based geo-location system that allows users to enlist crowd workers to find the potential location of an image. The system provides a number of features: uploading an aerial diagram of the mystery image, drawing an investigation area on Google Maps, dividing the search area into sub-regions in which crowd workers cross-check the satellite view against the diagram, and enabling the expert user to go through the crowd's feedback to find a potential match. This novel approach to geo-location, however, has not been tested against existing tools. My research is to design an experiment that would evaluate experts' reflections on using GroundTruth.
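To make the workflow concrete, the sketch below shows one simple way the sub-region step could work: splitting a drawn investigation area into a grid of cells that crowd workers can each cross-check against the diagram. This is my own illustrative simplification, not GroundTruth's actual implementation; the function name and parameters are hypothetical.

```python
# Illustrative sketch only: dividing an investigation area (a lat/lon
# bounding box) into grid cells for crowd workers to cross-check.
# This is NOT GroundTruth's actual code; names and parameters are made up.

def grid_subregions(south, west, north, east, rows, cols):
    """Split a bounding box into rows x cols sub-regions."""
    dlat = (north - south) / rows
    dlon = (east - west) / cols
    return [
        {
            "south": south + r * dlat,
            "west": west + c * dlon,
            "north": south + (r + 1) * dlat,
            "east": west + (c + 1) * dlon,
        }
        for r in range(rows)
        for c in range(cols)
    ]

# A 3x3 grid over a small drawn area yields 9 cells to distribute.
print(len(grid_subregions(37.20, -80.45, 37.25, -80.40, 3, 3)))  # 9
```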

Approach:

For this research, I considered the scenario where experts need to verify an image from a social media post. Such images are usually associated with a location, and my high-level goal was to replicate a similar situation in our experiment. For this reason, I needed to set some ground rules for selecting the images. I considered both urban and rural images in my experimental design. Based on the levels of detail for urban images described in the paper [2], I created a comparable set of detail levels for rural images; these details are important for drawing the diagrams of the images. I chose medium levels of detail (levels 3 and 4) for drawing the diagrams of both urban and rural images, since the findings in the paper [2] suggest that medium-level detail performs best with crowd workers.

I wanted all of the images, both urban and rural, to come from the same environment. For this purpose, I selected the temperate deciduous forest as my preferred biome. The reasons for choosing this biome include its locations in the Eastern United States, Canada, Europe, China, and Japan; its moderate population; its four seasons; and the similarity of vegetation in urban and rural images. While selecting the images, I used the GeoGuessr website, which provides street-view images without any labels using the Google Maps API v3 [3].

I created a set of guidelines for selecting these images so they could be recreated later. The guidelines were based on two criteria: 1) the number of unique objects, and 2) the number of other objects. First, I identified objects based on the details mentioned in the paper [2]. Among all the objects, I marked some as "unique" based on their distinctive features with respect to their surroundings; in some initial searches, these unique features contributed significantly more than their counterparts to geo-locating the images. The guidelines categorize the images as 1) Easy, 2) Medium, or 3) Hard (a sketch of this rubric appears below). I chose four final images of medium difficulty for my experiment: two from rural areas and two from urban areas.

Finally, I set a guideline for the location information that would be provided: town-level information for urban images and county-level information for rural images. For the final images, the search area was approximately 3 to 3.5 mi² for the rural images and approximately 2 to 2.75 mi² for the urban images.
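As a concrete illustration of the guideline, here is a minimal sketch of how the two criteria could be encoded as a difficulty rubric. The threshold values are hypothetical assumptions for illustration; the actual cut-offs from my guidelines are not reproduced here.

```python
# Hypothetical sketch of the image-difficulty rubric; the thresholds
# below are illustrative assumptions, not the experiment's real cut-offs.
from dataclasses import dataclass

@dataclass
class ImageFeatures:
    unique_objects: int  # objects distinctive with respect to surroundings
    other_objects: int   # remaining identifiable objects

def categorize(img: ImageFeatures) -> str:
    """Map object counts to Easy / Medium / Hard."""
    if img.unique_objects >= 3:
        return "Easy"    # several unique anchors make matching fast
    if img.unique_objects >= 1 and img.other_objects >= 4:
        return "Medium"  # some anchors, but cross-checking is needed
    return "Hard"        # few or no distinctive anchors

print(categorize(ImageFeatures(unique_objects=1, other_objects=5)))  # Medium
```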

Benefit:

As a geo-locating system, GroundTruth [2] focuses on using the crowd's contributions to minimize the long, extensive manual tasks that experts would otherwise have to conduct themselves. The experimental design in this research can help evaluate the performance of crowd workers contributing to GroundTruth. The system can then be further modified in line with the responses received from expert journalists: focusing on the features the experts found useful, and modifying those they struggled with, could make the system more efficient. Furthermore, our experimental design can be used as a benchmark for future evaluations of other geo-locating systems.

Competition:

The competition for the GroundTruth system is the set of existing tools that experts currently use to verify image locations. Expert journalists use various tools for verification, ranging from TinEye, Acusense, etc. for image verification to Google Maps, Wikimapia, TerraServer, etc. for geo-locating images. Although the crowdsourcing-based site Panoramio has been shut down by Google, the crowdsourced website Wikimapia can still be used to investigate the location of an image. Tomnod is another website that uses volunteers to identify important objects and interesting places in satellite images. Additionally, Google's upcoming neural-network-based PlaNet can reportedly determine the location of an image with great accuracy.

Result:

For the result analysis, I chose two kinds of metrics: one based on the experts' performance in geo-locating an image, and another based on qualitative and quantitative survey questions. Performance would be analyzed by completion time and by the distance between the selected location and the actual location (see the sketch below). The survey questions were set with a focus on the experts' reflections on the process, the outcome, and their subjective experience of using the GroundTruth system.
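The writeup does not prescribe how the distance metric is computed, so as one plausible choice, the error can be measured as the great-circle (haversine) distance between the selected and actual coordinates. A minimal sketch:

```python
# Haversine great-circle distance between the selected and actual
# locations. A standard formula, shown as one plausible way to compute
# the distance metric; not taken from the GroundTruth paper.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Distance in miles between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # Earth's mean radius ~ 3958.8 mi

# Example: error between an expert's selected point and the actual point.
print(round(haversine_miles(37.2296, -80.4139, 37.2431, -80.4080), 2))
```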

Discussion:

The experimental design discussed in this research can evaluate how experts use the GroundTruth system compared with existing tools. However, there are some potential challenges for the research, such as appropriate image selection, the short time limit, training users on the GroundTruth system, and outdated satellite imagery. Nonetheless, this research is an initial step toward understanding experts' reflections on using crowdsourcing in geo-location.

References:

[1] Verification Handbook, edited by Craig Silverman

[2] Kohler, R., Purviance, J. and Luther, K., 2017. GroundTruth: Bringing Together Experts and Crowds for Image Geolocation.

[3] GeoGuessr. https://geoguessr.com/


Demo: Snopes.com


Snopes.com

Category: Fact Checking

Link: https://www.snopes.com/

Demo Leader: Ri

Summary:

Snopes.com is one of the first online fact-checking websites; it was featured on NPR in August 2005 as the "Urban Legends Reference Pages" [1]. Created by David and Barbara Mikkelson in early 1995, the site later became Snopes.com and grew popular as an early online encyclopedia focused on urban legends. It even got its own television pilot in 2002 under the name Snopes: Urban Legends.

The site aims to either debunk or confirm widely spread urban legends. It has been referenced by various news media, such as CNN, MSNBC, Fortune, Forbes, and The New York Times, on multiple occasions.

Apart from verifying or debunking urban legends, the website also features news articles on various topics, such as political news, crime, controversy, entertainment news, and conspiracy theories.

Reflection:

Currently, there are several fact-checking websites, like FactCheck.org, PolitiFact.com, etc. Among these, one of the earliest is Snopes.com. The site has even been shown by another fact-checker, FactCheck.org, to have no partisan affiliation [2].

I found the website to be richly populated with various topics and urban legends. I liked that, without even creating an account, one can browse the latest urban legends and learn about their veracity. The site also fact-checks many urban legends within a short time, often two or three days. One can check a story's verdict at a glance, as it is shown upfront beneath the story's image. The descriptions of the stories and the verification process are also quite elaborate.

I did, however, find their "Hot 50" list a little confusing. Nowhere on the site (to my knowledge) do they mention how this list is ranked. My intuition was that the ranks are generated automatically from the number of views, shares, and/or the date. However, I found contradictions to that notion. At the time of my exploration, "Did an Iranian Woman Undergo 50 Plastic Surgeries to Resemble Angelina Jolie?" [3] was the #1 post in the Hot 50, even though another post, "Did Tokyo Open the First Human Meat Restaurant?" [4], had a higher share count and was more recent. Interestingly, I also found that the latter post was fact-checked by David Mikkelson, the creator of Snopes.com himself.

How to:

  1. Go to the URL: https://www.snopes.com/.
  2. The website allows you to search the site by keywords or URLs.
  3. Searching by keywords returns the urban legends that have been fact-checked by the website. The search results can then be filtered by category (Fact Check, News), by author, and by time period (All time, Last week, Last month, Last year).
  4. The left navigation bar contains several features the website offers, like What's New, Hot 50, Fact Check, News, etc.
  5. Click the "What's New" option to get the latest fact-checked urban legends.
  6. You can also find the Most Searched urban legends on the right side of the website. Each post shows the number of shares it has received so far. There is a similar strip for the Most Shared urban legends.
  7. By clicking "Hot 50", you can see the 50 currently top-ranked posts. Inside every post there is a Claim, a Rating (True/False), and an Origin describing the urban legend. The post also shows its share count, along with sharing options for different social media, like Facebook, Twitter, Google+, Pinterest, etc.
  8. You can click the "Fact Check" option to get a list of urban legends fact-checked by Snopes associates. The list is arranged from newest to oldest. Each post contains an image, above which the category of the story is mentioned, like Fauxtography, Viral Phenomena, Technology, etc.
  9. You can click the "News" option to get a list of news stories written by Snopes associates. Sometimes they also feature articles from other online news media, like apnews.com. The list is arranged from newest to oldest. Each post contains an image, above which the category of the story is mentioned.
  10. In the "Video" option, you can find the posts containing videos.
  11. In the "Archive" option, there are many posts, all listed under different categories.
  12. By clicking the "Random" option, you will be shown a random fact-checking post from the website.
  13. There are also several tags on the top strip of the website.
  14. You can subscribe to Snopes.com by email to get daily updates by clicking the "Get the Newsletter" option.

References:

[1] "Snopes.com: Debunking Myths in Cyberspace" – NPR.org

[2] "Is Snopes.com run by 'very Democratic' proprietors?" – FactCheck.org

[3] "Did an Iranian Woman Undergo 50 Plastic Surgeries to Resemble Angelina Jolie?" – Snopes.com

[4] "Did Tokyo Open the First Human Meat Restaurant?" – Snopes.com


Demo: Social Searcher

Social Searcher

Category: Tech Visualization

Link: https://www.social-searcher.com/

Demo Leader: Ri

Summary:

Social search is the practice of retrieving and searching on a social search engine that mainly searches user-generated content, such as news, videos, and images, related to search queries on social media like Facebook, LinkedIn, Twitter, Instagram, and Flickr [1]. The site was originally created as www.facebook-search.com in June 2010 and later migrated to www.social-searcher.com in May 2011. The site itself is not affiliated with social media companies like Facebook, Twitter, or Google. It allows users to search publicly posted information on Twitter, Google+, Facebook, YouTube, Instagram, Tumblr, Reddit, Flickr, Dailymotion, and Vimeo. All of this public information can be browsed via the site without logging in or creating an account. However, a registered free user gets some benefits, such as saving searches and setting up email alerts. A premium user can avail themselves of premium features, such as saved social-mention history, data export, API integration, advanced analytics, immediate email notifications, etc. [2]

Reflection:

I think such a real-time search engine for social media can be used to great effect by journalists. It can also reflect public sentiment on a particular topic. As mentioned in an article [3] by Journalism UK, the Social Searcher Android app was featured as the "app of the week for journalists".

I really liked the visualization of the data presented on the website. The interactive interface along with rich information was a delight to use.

I was, however, confused by the way they assign sentiment to posts. In many cases, I found mismatches between a post and its assigned sentiment.
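Social Searcher does not document how it assigns sentiment, which may explain the mismatches. Many such tools use a simple lexicon-counting baseline; the sketch below is purely illustrative (the word lists are my own invention, not the site's), and it shows how sarcasm or negation can produce a wrong label.

```python
# Purely illustrative lexicon-based sentiment baseline; Social Searcher's
# actual method is undocumented, and these word lists are invented.
import string

POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def classify(post: str) -> str:
    words = [w.strip(string.punctuation) for w in post.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "Positive"  # shown in green on the site
    if score < 0:
        return "Negative"  # shown in red
    return "Neutral"       # shown in grey

# Sarcasm defeats naive word counting:
print(classify("oh great, another outage, just great"))  # "Positive" (wrong)
```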

Another thing that intrigued me is the site's "HOT Trends" tool, where articles about the latest trends are supposedly listed. However, at the time of my exploration (December 2017), I found that all of the "hot" trending articles dated back to 2015. I could not fathom how such outdated articles could be listed as "HOT Trends". It leads me to believe that these articles may have been trended manually and are not currently being monitored or updated.

Their special projects also seemed quite outdated, with the latest project article dated March 2014.

How to:

  1. Go to the URL: https://www.social-searcher.com/
  2. Type in the search box. This invokes one of the site's tools, called "Social Buzz". This tool can also be accessed from the footer of the website.
  3. Searches can be made based on Keywords, Exact Keywords, and Minus Keywords via the Keywords tab.
  4. You can specify the sources from which you want information via the Sources tab. In addition, you can specify a particular Facebook URL as the source.
  5. You can further select the types of posts, such as link, status, photo, and/or video, in the More tab.
  6. In the Filter Search, you can also set the above parameters, like post types and source selection. In addition, you can filter the results by sentiment (Positive, Negative, and/or Neutral). Positive sentiment is colored green, whereas negative and neutral are colored red and grey, respectively.
  7. Each post lets you go to the original post or share it via the three-dot option at the bottom right corner of the post.
  8. To see detailed statistics with data visualization, click the "Detailed Statistics" button. This populates the data by the criteria: general, sentiment, users, links, types, and keywords.
  9. You can also export the data from the More option.
  10. Finally, you can check out the other features in the footer of the website, such as the blog, pricing, about, plugins, API, etc., to get a better idea of their system.

References:

[1] Social search, as defined on Wikipedia

[2] Social Searcher, the official page

[3] "App of the week for journalists" – Journalism UK


Interacting with news

Article: Interacting with news: Exploring the effects of modality and perceived responsiveness and control on news source credibility and enjoyment among second screen viewers.

[Link]

Author: Michael A. Horning

Presentation Leader: Ri

Summary:

As technology has created new media of communication, traditional news outlets have sought ways to make use of them. According to one study, 46 percent of smartphone owners and 43 percent of tablet owners reported using their devices while watching TV every day. This led to the introduction of second-screen, dual-screen, or multiscreen viewing: a type of user experience where the user gets the primary content from the primary screen while simultaneously interacting via a mobile or tablet device. By extending broadcast content onto mobile devices, broadcasters get more attention from their audience, in terms of both participation and comprehension. This can be seen in ads during live shows, sports networks' second-screen interviews, CNN's QR codes on live TV that directed viewers to online content, etc.

Although a prior study showed that almost half of the smartphone- and tablet-owning population use their devices while watching TV, a later study showed that almost 80% of users use their second screen to view unrelated content. On top of that, 66% of all national journalists expressed concern about new technologies hurting coverage. To help newsrooms adapt to these innovations, the paper [1] raises two important questions: first, whether the additional second screen adds a new dimension to viewers' enjoyment; and second, whether second-screen content adds credibility to the news source.

In recent times, several studies have examined second-screen viewing. In a study conducted in 2014 [2], the authors found that second screening made it more difficult to recall and comprehend news content by increasing cognitive load, whereas a later study in 2016 [3] showed that the second screen instead strengthened users' perceptions of both news and drama. Some researchers have credited the novelty of the new technology for the success of second screening; others, the interaction and the varying levels of modality.

From these and other prior studies, the author of the article [1] established six hypotheses in total. He tested the hypotheses on 83 college-aged students (32 male, 51 female) using two original news videos. Both videos were similar except for the final part: in one, the anchor invites viewers to manually go to a website to view related stories, whereas in the other, viewers are invited to scan the TV screen using an iPad. The former was identified as the low-modal-interactivity condition and the latter as the high-modal-interactivity condition.


The six hypotheses and the corresponding findings are listed below:

H1: Second screen experiences with higher modality will be rated as more enjoyable than second screen experiences with lower modality. Result: Contradicted.

H2: Second screen experiences with higher modality will be rated as more credible than second screen experiences with lower modality. Result: Contradicted.

H3a: Second screen users who perceive the experience to be more highly interactive, as measured by perceived control, will rate news content as more enjoyable than those who perceive it to be less interactive. Result: Supported.

H3b: Second screen users who perceive the experience to be more highly interactive, as measured by perceived responsiveness, will rate news content as more enjoyable than those who perceive it to be less interactive. Result: Supported.

H4a: Second screen users who perceive the experience to be more highly interactive, as measured by perceived control, will rate news content as more credible than those who perceive it to be less interactive. Result: Supported.

H4b: Second screen users who perceive the experience to be more highly interactive, as measured by perceived responsiveness, will rate news content as more credible than those who perceive it to be less interactive. Result: Supported.

H5: Second screen experiences that have higher modality and higher perceived interactivity will be rated more positively and be perceived as more enjoyable. Result: Partially supported.

H6: Second screen experiences that have higher modality and higher perceived interactivity will be rated more positively and be perceived as more credible. Result: Partially supported.


Reflection:

Second screen viewing allows users to interact with the media and gives them the opportunity to get involved with the means of communication. It transforms the passive viewer into a somewhat active participant by providing some means of control and interaction. Second screening may also make the content on the primary screen more enjoyable and more credible, since it gives users the option of elaborating on the information. Even the novelty of the experience might play some role in making the content more enjoyable and more credible.

I liked how the author explores several characteristics of second screen viewing through related papers. The author does a good job of explaining many prior works on second screen viewing, and on communication and journalism in general. Some of the researchers held opposing views, and I liked the author's effort to bring both kinds of research into the context of second screen viewing.

In my opinion, the author also did commendable work in setting the premise of the research. I find the six hypotheses equally interesting and worth addressing. What intrigued me, though, is the way the author designed the experiment. For the second screen manipulation, the author chose two scenarios, with low modal interactivity operationalized as manually clicking links and high modal interactivity as scanning the screen with an iPad. Finally, to assess the experience, the participants were given a questionnaire. The reason it intrigued me is that 55.5% of the 83 participants indicated that they had never used QR codes prior to this research. I also find it interesting that, according to the author, the research found no gender effect.

The findings of the research were interesting, in my opinion. The first two hypotheses focus on how structural effects impact news enjoyment and news credibility. Surprisingly, the results suggest that modality did not emerge as a significant predictor of either news enjoyment or credibility. For the middle hypotheses, second screen users who perceived the experience to be more highly interactive, as measured by both perceived control and perceived responsiveness, rated news content as both more enjoyable and more credible. The final two hypotheses were only partially supported, depicting the second screen experience as more positive, enjoyable, and credible: in both cases, the interaction between modality and perceived responsiveness was not significant, while the interaction between modality and perceived control was.

Questions:

  • Prior research suggests that second screen experiences with higher modality are rated as more enjoyable and more credible. However, the findings of this study suggest otherwise. Why do you think that is?
  • Among the participants, 55.5% mentioned that they had never used QR codes in their lives. Do you reckon previous experience, or the lack thereof, might have impacted the results?
  • How do you think multiple interactions over a longer period of time might change our perception as an audience?

References:

[1] Horning, M.A., 2017. Interacting with news: Exploring the effects of modality and perceived responsiveness and control on news source credibility and enjoyment among second screen viewers. Computers in Human Behavior, 73, pp.273-283.

[2] Van Cauwenberge, A., Schaap, G. and Van Roy, R., 2014. “TV no longer commands our full attention”: Effects of second-screen viewing and task relevance on cognitive load and learning from news. Computers in Human Behavior, 38, pp.100-109.

[3] Choi, B. and Jung, Y., 2016. The effects of second-screen viewing and the goal congruency of supplementary content on user perceptions. Computers in Human Behavior, 64, pp.347-354.


The CSI Effect: The truth about forensic science

Article: The CSI Effect: The truth about forensic science

Author: Jeffrey Toobin

Presentation Leader: Ri

Summary:

The article covers several points about investigative forensic science, exploring its factual side in comparison with its fictional representation. The article references one of the then most popular CBS television series, "CSI: Crime Scene Investigation" [2], and two of its spin-offs, "CSI: Miami" [3] and "CSI: New York" [4]. In doing so, the author describes how real-life crime investigation is much more tedious and fallible than its fictional representation. The article opens with real-life criminologist Lisa Faber's statement in the courtroom, where, after analyzing hundreds of hairs and fibers, she could conclude only that the evidence might have originated from the source.

The author points out how cautious this conclusion is compared with the much more confident assertions shown in fictional TV shows. Additionally, the author mentions how the general public nowadays believes that science can, with definite certainty, identify the criminal from a limited amount of evidence.

The author then focuses on the analysis of bite marks, blood spatter, handwriting, firearm and tool marks, and voices, as well as hair and fibers: the popular forensic-science tests depicted on "CSI". Many of these forensic techniques are outdated and somewhat obsolete in present-day courts. Some of these techniques have also produced errors in follow-up examinations.

The author traces the origin of these forensic techniques to the early 19th century and their successful use in the early 20th century. The author mentions an incident in which a New York doctor, Calvin Goddard, analyzed bullets to successfully identify which submachine guns they were fired from. This may have been revolutionary at the time; however, the techniques have remained much the same to this date without much improvement.

The author explores more of Lisa Faber's journey and how she first came to choose forensic investigation as a profession. The article also briefly describes the process of analyzing hair in criminal cases. Regarding DNA testing with hair, the author found that only hairs whose roots are intact contain nuclear DNA, which is unique to each person; however, hairs collected from crime scenes often lack roots to begin with. The author then learns from Lisa Faber about the complicated and tedious process of hair analysis, from hair color to chemical composition.

The author then briefly describes the two types of DNA testing: nuclear DNA and mitochondrial (mt) DNA. Although mtDNA testing is more frequently applicable and can eliminate many suspects, it is greatly prone to error. The author explores different experts' perceptions of the use of mtDNA and its fallibility. They fall into two groups: one that believes in the credibility of using mtDNA to corroborate findings from generic hair analysis, and another that believes it should be excluded because of its flaws.

Later in the article, the author describes recent steps taken in Faber's lab to combine traditional and modern technology for hair and fiber analysis. The result is Biotracks, a burglary program that analyzes tissue, hair, and fiber samples dropped at the crime scene by the criminal, from rubber gloves, wiping tissues, soda bottles, etc.

On a final note, the author hints at how the fictional TV representation makes the forensic profession look popular and glamorous in the eyes of the public, despite being quite a stretch from real-life scenarios.


Reflection:

I found that the article has an interesting way of exploring forensic investigation while establishing a sense of dissimilarity from its fictional representations. As television series like CSI tend to serve the purpose of entertainment, they often build their fictional world on some real-life basis. One convention strongly present in the fiction is the need for a definite finding at the end of each show. This leads them to force conclusions that could not be stated with such confidence in real life.

Additionally, in real life, an investigation can be a long, complicated, and monotonous process. In contrast, the fictional episodes are usually less than an hour long. Hence, many of the criminologists in fiction display much higher confidence in the accuracy of their findings; as described in the article, "an air of glamour, an aura of infallibility". This is done mostly to make the show more interesting and entertaining to the audience. The shows, after all, are meant to serve as entertainment, not documentary.

It was very interesting for me to learn the intricate details of the investigative process and its margins of error. I was unaware of how much effort and time are spent collecting suitable evidence, analyzing it, and trying to reach a conclusion while keeping potential errors in mind. I was really intrigued to find that even follow-up examinations may at times reach the same false conclusion. Consider the case of Jimmy Ray Bromgard, convicted in the rape of an eight-year-old and later found innocent: at the initial trial, the manager of the Montana state crime lab, Arnold Melnikoff, testified that the odds against the suspect were one in ten thousand, yet later DNA testing proved that conclusion wrong. It made me realize how difficult it is to come to a definitive conclusion in forensic science. It also explains why Lisa Faber phrased her conclusion in such carefully picked words earlier in the article.

What intrigues me further is the public's misbelief in the accuracy of scientific findings. I was also intrigued by the impression the jury had of Faber's fiber analysis, as the article states: "The prosecutors liked the idea of fibre evidence… it was more 'CSI'-esque." It raises the question of whether we are mixing up real-life facts with fictional enticements. As Michael J. Saks states, "It's the individualization fallacy, and it's not real science. It's faith-based science."


Questions:

  • Is fictionalizing investigative forensic science for the general public a good approach?
  • Many experts hold different opinions on mtDNA as a follow-up test to hair analysis. What are your thoughts on using mtDNA (which has a higher error rate) in court?
  • Do you think making the general public aware of the potential error rate in forensic science might actually decrease the credibility of the whole sector?
  • With relevance to this class, can the crowd be trusted in crime investigations, given that even the experts are at times fallible?


References:

[1] Toobin, J. The CSI Effect: The truth about forensic science. The New Yorker. https://www.newyorker.com/magazine/2007/05/07/the-csi-effect

[2] CSI: Crime Scene Investigation. http://www.cbs.com/shows/csi/

[3] CSI: Miami. http://www.cbs.com/shows/csi-miami/

[4] CSI: New York. http://www.cbs.com/shows/csi-ny/


Examining Technology that Supports Community Policing

Article: Examining Technology that Supports Community Policing

Authors: Sheena Lewis and Dan A. Lewis

Leader: Ri

Summary:

Community policing is a strategy of policing that focuses on police building ties and working closely with members of the community [2]. The paper [1] analyzes how citizens use technology to support community policing through a comparative study of two websites created to help citizens address crime. One is CLEARpath, an official website created by the Chicago police to provide information to, and receive tips from, the citizens of Chicago. The other is an unofficial web forum moderated by residents of the community for problem-solving conversations.

The motivation of the paper [1] lies in:

Designing technology to best support collective action against crime.

The paper [1] discusses crime prevention through two theories, from two perspectives: i) victimization theory, the police perspective, and ii) social control theory, the community perspective. Victimization theory focuses on understanding crime as events that occur between a potential victim, an offender, and the environment, whereas social control theory suggests that social interactions influence criminal acts through informal enforcement of social norms. Victimization theory tries to prevent criminal behavior by educating potential victims; social control theory, on the contrary, suggests that criminal behavior can be curbed by strong application of social norms.

In the later sections of the paper, the authors examine a diverse north-side Chicago community and its online and offline discussions about crime. This particular community had a medium level of violent crime along with a high level of property damage.

The authors found that the Chicago police had implemented smarter technology in CLEARpath, such as identifying crime hot spots, connecting to other law enforcement agencies, and providing extensive mapping technology. The website had 15 modules in total, 12 of them providing information and 3 accepting community concerns as input. The community also had an informal community policing web forum with 221 members as of 2011. The authors examined this forum, described as a "community crime website", and found numerous online posts. Interestingly, the authors found only 3 community-concern posts in 365 days on the police website, versus 10 posts in 90 days on the community web forum; that is roughly 0.008 posts per day against 0.11 posts per day, more than a tenfold difference in participation between the official police website and the informal community forum.

The researchers also deduced from their findings that residents of the community use the forum to:

  • Build relationships and strengthen social ties,
  • Discuss ways that they can engage in collective action,
  • Share information and advice, and
  • Reinforce offline and online community norms.

Based on these findings, the authors suggest that there should be a significant change in the design of crime prevention tools. To increase active participation, designs should focus not only on citizen-police interaction but also on citizen-citizen interaction, where relationship building can occur.

Reflection:

The paper [1], in my opinion, takes an HCI approach to the crime theories and how they can be translated into design implications. The problem is important because sharing personal experiences and strengthening social ties in a community can help address local concerns and criminal activity. The existing solution, the official police website, doesn't encourage active participation; as a result, the information conveyed on the website may not achieve maximum impact. The authors instead suggest that web tools supporting community policing should be designed to support communication that allows residents to engage in collective problem-solving discussions and to informally regulate social norms.

In my opinion, community policing can increase awareness among the residents of a community. As the paper [1] suggests, community policing concerns can range from a member's improper landscaping to an elderly resident being assaulted during a home invasion. Community policing reflects the real and contemporary problems and issues faced by the community itself, and its way of addressing them.

However, what I found troubling about this platform is that site moderators have the power to ban members of the forum who don't abide by the group's rules and regulations. It got me thinking: what happens after a member is banned? Since the member is presumably still a resident, he or she remains part of the community. Is the ban temporary or permanent? Is the banned member approached in person by other members of the community to resolve the situation? Or does it create a more unsettling situation in real life?

I think the authors also raise another important topic: the legitimacy of community policing in the eyes of police officials. The article mentions that the moderators managed the legitimacy of the website by distancing it from the police. Trust and accountability, I think, are two more important challenges for community policing.

For further study, I suggest a later paper [3], "Building group capacity for problem solving and police–community partnerships through survey feedback and training: a randomized control trial within Chicago's community policing program", published in 2014, which also analyzes Chicago's community policing program and proposes strengthening police–community partnerships through survey feedback and training.


Questions:

  • Could you propose some designs that might increase the participation of community members on the official law-enforcement website?
  • Does banning members who violate group rules make the community a better place? Or does it only separate them from the virtual world while their presence in the real community remains intact?
  • Do you think it is possible to establish the legitimacy of community policing in the eyes of police officials? Can trust in police officials be increased? And can the online platform introduce accountability to community policing?
  • What do you think can be done for people who are not part of the online community? Does community policing explicitly need all members of the community to actively participate in the online web forum?


References:

[1] Lewis, S., & Lewis, D. A. (2012). Examining technology that supports community policing. In Conference Proceedings – The 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 (pp. 1371-1380). DOI: 10.1145/2207676.2208595

[2] Community policing, as defined in Wikipedia.

[3] Graziano, L.M., Rosenbaum, D.P. and Schuck, A.M., 2014. Building group capacity for problem solving and police–community partnerships through survey feedback and training: a randomized control trial within Chicago's community policing program. Journal of Experimental Criminology, 10(1), p.79. https://doi.org/10.1007/s11292-012-9171-y


Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning

Paper:

Aitamurto, T. (2015). Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning. International Journal of Communication, 9, 21. http://ijoc.org/index.php/ijoc/article/view/3481/1502

Discussion Leader:  Rifat Sabbir Mansur


Summary:

Crowdsourced journalism has recently become a more common knowledge-search method among professional journalists, in which participants contribute to the journalistic process by sharing their individual knowledge. Crowdsourcing can be used for both participatory journalism, where the crowd contributes raw material to a process run by a journalist, and citizen journalism, where ordinary people adopt the role of journalist. Unlike outsourcing, where a task is entrusted to a few known experts or sources, crowdsourcing opens the task up for anybody to participate, voluntarily or for monetary gain. This research paper tries to explain the crowd's motivation factors for crowdsourced journalism, based on social psychological theories, by asking the following two questions:

  • Why do people contribute to crowdsourced journalism?
  • What manifestations of knowledge do the motivation factors depict?

The author examines the motivation factors in crowdsourcing, commons-based peer production, and citizen journalism using self-determination theory from social psychology. According to the theory, human motivations are either intrinsic, driven by enjoyment or community-based obligation, or extrinsic, driven by direct rewards such as money. The author reviews various papers in which she found both forms of motivation for crowdsourcing, commons-based peer production, and citizen journalism. In the later part of the paper, the author introduces four journalistic processes that used crowdsourcing, conducts in-depth interviews with many of the crowd participants, and presents her findings based on them.

The cases the author uses are as follows:

  1. Inaccuracies in physics schoolbooks
  2. Quality problems in Finnish products and services
  3. Gender inequalities in Math and Science Education
  4. Home Loan Interest Map

The first three stories were published in various magazines, and the last one was run by Sweden's leading daily newspaper. The first three stories were grouped as Case A, since the same journalists worked on all three. The fourth story (Case B) used a crowdmap, in which the crowd submitted information about their mortgages and interest rates online; the information was then geographically located and visualized. It became very popular, breaking online traffic records for the newspaper.

The author conducted semi-structured interviews with 22 participants in Case A and 5 online participants in Case B. The interview data were then analyzed using Strauss and Corbin's analytical coding system. Based on these analyzed data, the author presented her findings.

The author posits that, based on her findings, the main motivation factors for participating in crowdsourced journalism are as follows:

  • Possibility of having an impact
  • Ensuring accuracy and adding diversity for a balanced view
  • Decreasing knowledge and power asymmetries
  • Peer learning
  • Deliberation

The author's findings show that the motivations above are mostly intrinsic. Only for peer learning did the participants express a desire to learn from others' knowledge and to practice their skills, making it extrinsic in nature alongside the intrinsic benefit of a better understanding of others. None of the participants expected any financial compensation for their participation; rather, they found themselves rewarded when their voices were heard. The participants also believed monetary compensation could lead to false information. Participation in crowdsourced journalism is mainly voluntary in nature, with altruistic motivations. The intrinsic factors in the motivations mostly derived from the participants' ideology, values, and social and self-enhancement drives.

The nature of crowdsourced journalism is to some extent different from commons-based peer production, citizen journalism, and other crowdsourcing contexts, as the former offers neither career advancement nor reputation enhancement, unlike the latter. Rather, the participants perceive journalism as a means of contributing to social change.

The author brings up theories of digital labor abuse and refutes them, showing results that suggest the argument does not fit the empirical reality of online participation.

The author finally discusses several limitations of the research and the scope for future work using larger samples, more cases, and empirical contexts in several countries, including both active and passive participants in the crowd.


Reflection:

The study in the paper offers a profound social psychological analysis of the motivations of crowdsourcing participants. Unlike prior research, the paper concerns itself with motivation factors in voluntary crowdsourcing, i.e., crowdsourcing without pecuniary rewards. The author also addresses the motivations in crowdsourcing, commons-based peer production, and citizen journalism separately, which allowed her to dig deeper into the intrinsic and extrinsic drives behind the motivations. The author further classifies intrinsic motivations into two factors: enjoyment-based and obligation/community-based.

The study revealed some very interesting points. As the author mentions, having an impact drives participation; this is one of the main motivations of the crowd participants. One specific comment:

“I hope my comment will end up in the story, because we have to change the conditions. Maybe I should go back to the site, to make sure my voice is heard, and comment again?”

I find this comment very interesting because it shows the nature of the intrinsic motivation and the unwritten moral obligation the participants feel toward their topic of interest. Here, the participant's motivation is clearly to bring about social change.

Another interesting factor, in my opinion, is that volunteering involves sharing one's fortune (e.g., products, knowledge, and skills) to protect oneself from feeling guilty about being fortunate. The author calls this the protective function that drives volunteer work.

In my opinion, one of the clear upsides of crowd participation is developing a more accurate picture of the topic and offering multiple perspectives. Filling the knowledge gaps in the journalists' understanding of a particular topic helps build a more accurate and complete picture. It also provides a check against yellow journalism, and it allows participants to contribute multiple perspectives, creating diverse views on controversial topics.

What I found interesting is that the participants did not expect financial compensation for their participation in crowdsourcing; on the contrary, they believed that if the effort were monetarily compensated, it could actually be dangerous and skew participation. However, pecuniary rewards draw a different group of crowd workers who are more aware of their responsibilities. This might actually encourage people to be more attentive participants and more careful about their comments and remarks.

Another interesting notion in the paper is that the participants in this study did not expect reciprocity in the form of knowledge exchange. This characteristic, in my opinion, could give rise to situations where someone firmly holds onto a false belief. Since the participants want to be part of a social change, they can be disheartened if their volunteer efforts are not appropriately addressed in the journalistic process.

I liked the author's endeavor to address the differences and similarities between motivations in crowdsourced journalism and its related forms. In crowdsourced journalism, the crowd contributes only small pieces of raw material for a journalist to consider in the story process; cumulatively, this can produce a bigger picture of an issue. In this way, participants in crowdsourcing can be a contributing part of a social change with their respective atomic inputs.

The limitations of the study, however, are of great significance. The author mentions that it is possible that only those participants who had a positive experience with crowdsourcing accepted the interview request for the study. This might have made the motivations found in the study more intrinsic and altruistic in nature. With a different and more widespread sample, the study might reveal further interesting aspects of human psychology.


Questions:

  1. What do you think about the differences between voluntary and reward-based crowdsourcing in terms of social impact?
  2. What do you think about the effects of citizen journalism on professional media journalism?
  3. Given the limitations, do you think the case studies had adequate data to back up their findings?
  4. What do you think the future holds for the moderation of crowdsourcing?
  5. The study touches on a wide variety of crowd contributions, like crowd-mapping, citizen journalism, commons-based peer production, etc. How do you think we can develop systems to better handle the crowd's contributions?
