Hollaback!

Paper:

Dimond, J. P., Dye, M., LaRose, D., & Bruckman, A. S. (2013). Hollaback!: The Role of Storytelling Online in a Social Movement Organization. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (pp. 477–490). New York, NY, USA: ACM.

Discussion Leader: Lee Lisle

Summary:

Various forms of social media have assisted social mobilization movements, from events such as the Arab Spring to rating systems for Mechanical Turk employers such as Turkopticon. In particular, these platforms have allowed people to find and help others facing similar struggles or harassment. Hollaback is an organization that brings together victims of street harassment to share their stories and promote awareness. The main platform for the organization is a website, but it has also used mobile technologies to extend its reach.

After some discussion of various background topics, the authors described their semi-structured interviews with 13 users of the platform. Each interview lasted between 30 and 90 minutes and asked users to recount the story they shared on Hollaback, along with their motivations and their feelings after sharing. The authors then analyzed the interviews using grounded theory to find how people are affected by the presence and use of the platform. They go on to evaluate how sharing stories can help other “genres” of communities.

 

Reflections:

I found this paper to be a fairly interesting take on how storytelling can assist with the creation of online communities. In particular, I found the sections on previous work to be the best part of the paper, since they were so rich in providing context for the rest of the paper.

I also found it interesting that the authors brought up “slacktivism” in the paper. While they never used the term again, the authors presented details (with quotes!) on how the work of Hollaback (and storytelling communities in general) was not slacktivism despite the relatively low demands on users. To be more specific (and not to diminish the role of these users), people in the community don’t need to put in extreme time commitments or travel to a common location in order to “rally” or perform more traditional forms of activism. In addition, users seemed to be allowed to remain as anonymous as they desired in their stories, which can lower the barrier to participation and make them feel more at ease.

I also thought the “Researcher Self Disclosure and Reflexivity” section was an interesting addition that I had not considered before this paper. Understanding one’s own bias and discussing it is something I haven’t seen in many papers. However, I do question whether this practice can reduce bias, for both the reader and the author. In this spirit, I will also disclose that I am a fan of storytelling and grounded theory, and was before I read (and volunteered to lead discussion on) this paper.

One issue I had with the paper was that over half of the participants were students, while the issue at hand had no specific relation to students. Furthermore, while I recognize that street harassment is not a U.S.-specific issue, having less than 10% of participants come from other countries seemed like an odd choice for the interviews. The authors did not establish whether UK culture is sufficiently similar to US culture to justify pooling the participants, and this should have been one of the selection criteria.

Questions:

  • Do you think that this form of storytelling is “slacktivism?”
  • Beyond the two examples in this paper, what are other forms of frame transformation and frame extension in social movements?
  • Does the shift from the researcher being a “friendly outsider” to an active participant change the way people should respond to this paper? Furthermore, how does the self-disclosure section impact this?
  • In the discussion I raised issues with 2 different selection criteria for participants. What do you think are appropriate selection criteria for interviewing participants for this kind of study?
  • Last class we discussed the pros and cons of anonymity, and it appears in this paper as well. How would you compare and contrast the ways anonymity helps with this paper and the 4chan paper?


Ad Hoc Crowd-sourced Reporters on the Rise

Paper:

Agapie, E., Teevan, J., & Monroy-Hernández, A. (2015). Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. In Third AAAI Conference on Human Computation and Crowdsourcing.

Discussion Leader: Lawrence Warren

Summary:

In this great age of social networks and digital work, it is easy to think that any job or task can or should be done online; however, there are still a few tasks that inherently require the physical presence of a real person. This paper identifies a hybrid method that allows tasks to be handled by a group of individuals in an area of interest, supervised by an offsite coordinator. There were four main insights in this study:

  1. Local workers needed to overcome physical limitations of the environment.
  2. Local workers had greater engagement with the event attendees.
  3. Local workers needed to ensure that the information collected fulfilled the requirements set by the remote coordinator.
  4. Paid workers offered more fact-based reports, while volunteers offered richer context.

In this hybrid model, tasks were divided up and assigned to one of four roles (reporter, curator, writer, and workforce manager). The model was used at 11 local events of various sizes, durations, and accessibility, most of which were publicly advertised and were not expected to receive much news coverage or blogger presence. Local reporters attended the events in question, during which they completed a set of assigned tasks that had been decomposed based on what aspect of a particular event was to be covered. The curator was the quality-control portion of the model, making sure information was provided in a timely manner and was not plagiarized. Based on the curated feed, the writers then created short articles called listicles, which were easy to write and easy to understand for anyone who was not an expert. All of this happened while the manager oversaw every part of the process, being familiar with the requirements for every step.
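To make the division of labor concrete, here is a minimal sketch (in Python, with all names invented; the paper does not describe an implementation) of how an event might be decomposed into role-tagged tasks:

```python
# Hypothetical sketch of the paper's four-role workflow as a data model:
# an event is decomposed into tasks, each assigned to a role. Names and
# structure are invented for illustration, not the authors' system.
from dataclasses import dataclass, field

ROLES = ("reporter", "curator", "writer", "manager")

@dataclass
class Task:
    description: str
    role: str          # one of ROLES
    done: bool = False

@dataclass
class Event:
    name: str
    tasks: list = field(default_factory=list)

    def assign(self, description, role):
        assert role in ROLES, f"unknown role: {role}"
        self.tasks.append(Task(description, role))

    def pending(self, role):
        """Tasks for a given role that still need doing."""
        return [t for t in self.tasks if t.role == role and not t.done]

fair = Event("Neighborhood street fair")
fair.assign("Photograph the opening ceremony", "reporter")
fair.assign("Interview three attendees", "reporter")
fair.assign("Check submissions for plagiarism and timeliness", "curator")
fair.assign("Draft listicle from curated feed", "writer")
print([t.description for t in fair.pending("reporter")])
```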

 

Reflections:

In my opinion, this model has several similarities to how news reporting could be done right. It is not feasible to have a professional reporter at every event, but it is possible to employ satellite workers for smaller events and have their work pass through a series of professionals before publication, so as not to miss anything that may seem insignificant to someone not associated with a specific community but is very important to those who have direct contact with community events. The main issue with separating work tasks, information fragmentation, was also addressed within this paper. Tasks have to be assigned in such a way that there is overlap in information collection, or else reporters with different writing styles or levels of experience will create discrepancies and missing information.

Probably the most interesting results of this paper, in my opinion, center on the quality of the articles. I am in no way doubting the effectiveness of the technique; however, the way this experiment was set up, it did not really have much to compare itself to. Small local events that had no coverage were used, and the resulting articles were compared to articles about similar events from past years, which I believe can skew the results. It would have been a better comparison to cover a more popular event and compare stories of similar context from the same year.

Questions:

  • According to this paper, there were a few challenges presented by the physical environment (mobility, preparation time, and quality assurance). Which of these do you think is the easiest to overcome? How are these problems unique to the hybrid model?
  • The workflow model in this paper describes how roles were assigned to both local and remote workers. Can you think of any possible issues with the way the workload is broken up? How would you fix them?
  • Certain limitations of this method of reporting were mentioned, mostly stemming from the lack of in-depth training. Can you think of a way in which that very training might interfere with this model of reporting?
  • Recruiting seemed to be an issue in this paper, but if this model were to be widely implemented, that could not remain the case. There are already recruiting platforms, as mentioned in the article, but how could participation in this kind of reporting be more actively improved?
  • Will this model be able to stand the test of time?


Doxing: A Conceptual Analysis

Paper:
Douglas, D. M. (2016). Doxing: A conceptual analysis. Ethics and Information Technology, 18(3), 199–210.
Discussion leader: Md Momen Bhuiyan

Summary:
In this paper the author discusses doxing, the intentional release of someone’s personal information onto the Internet by a third party, usually with the intent to harm, from a conceptual standpoint, categorizing it into three types: deanonymizing, targeting, and delegitimizing. Although doxing is a fairly old practice, the recent “Gamergate” incident has stirred public interest in it. The author also discusses how this practice differs from other privacy-violating activities. Finally, the author tries to justify some deanonymizing and delegitimizing doxing in cases where releasing personal information is necessary to reveal wrongdoing.

From Marx’s point of view, revealing any personal information removes some degree of the subject’s anonymity. The author uses Marx’s seven types of identity knowledge as a reference for the types of personal information that can be used for doxing. He distinguishes doxing from blackmail, defamation, and gossip: the first requires a demand of the subject, the second requires the information to be damaging to the subject, and the third is usually hearsay. He then uses Marx’s rationale for anonymity to discuss the value of anonymity.

Deanonymizing doxing is revealing the identity of someone who was previously anonymous. The author uses two examples to illustrate this: “Satoshi Nakamoto”, the creator of Bitcoin, and “Violentacrez”, a Reddit moderator. Targeting doxing, often paired with deanonymizing doxing, is revealing specific information about someone that can be used to physically locate that person. It makes the subject vulnerable to a wide range of harassment, from pranks to assault. Delegitimizing doxing is releasing private information about someone with the intent to undermine the subject’s credibility; sexuality is commonly used in this context. Delegitimizing doxing has the potential to create “virtual captivity”. It goes hand-in-hand with targeting doxing, where the first provides the motive for harassment and the second provides the means. This combination is illustrated in the “Gamergate” incident, in which a former boyfriend of the subject posted her personal details, resulting in prolonged harassment.

To justify doxing, the author interprets Bok’s claims about public interest: that the public has a legitimate interest in all information about matters that might affect its welfare. He puts the burden of proof on the individual who attempts doxing and claims that releasing only the specific information relevant to revealing a wrongdoing is justified. While in the case of “Satoshi Nakamoto” public interest doesn’t seem to justify doxing, in the case of “Violentacrez” doxing was justified, as it held him accountable and he stopped participating in hate speech. The author also comes to the conclusion that doxing doesn’t have to be accurate to be harmful.

The author then describes objections to this justification. The first objection is that deanonymizing doxing promotes other forms of doxing, so it should be rejected on the same grounds that targeting doxing is rejected. Another objection is that the costs and harms of deanonymizing outweigh its social benefit; for example, deanonymizing doxing can be used as a tool to intimidate dissenting views, so other forms of justice should be considered. In the case of “Violentacrez”, there was an alternative: Reddit could have deleted his comments. Although this conflicts with freedom of expression, it is justified if freedom of expression is not considered an absolute right that can’t be limited by other rights. Another response is that accountability should go both ways when deanonymizing someone. But this accountability in itself doesn’t justify doxing, as those revealing information might be able to afford other protections, such as a costly legal battle.

Reflection:
The first thing that is noticeable in the paper is that the author tries to confine doxing to individuals. Furthermore, he usually refers to the victim as female, which might seem appropriate for the recent doxing trend, but it ignores one of the top contributors to doxing, Anonymous. The author doesn’t note that delegitimizing doxing can be categorized as defamation. He also discusses gossip in a similar context, although by definition gossip doesn’t involve publicly releasing information on the Internet. He could have mentioned the “Boston bombing” as an example of the harm of misinformed doxing.

This paper did a good job of categorizing doxing using motive as the prime factor. Although the author treated each of the categories with sufficient depth, he didn’t cover many examples of them. He mentions that the burden of justification falls on the doxer but doesn’t provide any such detail when discussing the examples. Finally, the author’s explanation of his justification and its critiques was insightful.

Questions:
1. Is whistleblowing justified?
2. Is doxing in journalism justified?
3. How do you establish public interest in justification of doxing?
4. To what extent can crowdsourcing be used for doxing?
5. How do we prevent doxing?


Tech Demo: Snap Map

www.npr.org/sections/goatsandsoda/2017/07/06/535076690/can-snapchats-new-snap-map-bring-the-world-closer-together

Brief Overview

Snap Map is a feature of Snap Inc.’s Snapchat application that gives users a searchable world map and aggregates geotagged Snaps taken in the last 24 hours. Locations that are particularly popular are highlighted on the map with a heatmap gradient that ranges from sky blue to yellow to red.

Snap Map was introduced in June 2017 and received criticism for exacerbating existing privacy and security issues. However, an additional – perhaps unforeseen – use is keeping tabs on loved ones in disaster-prone areas, monitoring one’s surroundings in those areas, and investigative journalism. With Snapchat’s user base of 166 million (which is now beginning to look small in comparison to Instagram’s 250 million) posting at least 700 million photos per day – and especially with the introduction of Snap Map – Snapchat is increasingly becoming a source of information for journalists.

Snap Map is particularly useful because stories are geotagged and cannot be uploaded retroactively (unless the user goes to great lengths to upload old content, there is a relative amount of surety). The timestamped and geotagged content visible on Snap Map can be used to generate a timeline of events. The heatmap is helpful – if not crucial – in discovering events that may not yet have been covered by other news sites, for fact-checking, or for gaining additional insights into an emerging story. [As I will demonstrate in my demo.]
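Since every Snap on the map carries a timestamp and a geotag, building such a timeline is essentially filtering by location and sorting by time. A toy sketch with invented data follows; Snap Map exposes no public API, so the record format here is purely hypothetical:

```python
# Toy sketch of building an event timeline from geotagged, timestamped
# posts. Snap Map has no public API; the record format is invented.
from datetime import datetime, timedelta

snaps = [  # hypothetical records: (timestamp, latitude, longitude, caption)
    (datetime(2017, 7, 6, 14, 5), 29.76, -95.36, "water rising on Main St"),
    (datetime(2017, 7, 6, 13, 40), 29.77, -95.37, "rain started"),
    (datetime(2017, 7, 6, 15, 20), 29.76, -95.36, "street closed"),
]

def timeline(posts, lat, lon, radius_deg=0.05, window=timedelta(hours=24)):
    """Posts near (lat, lon) from the last `window`, oldest first."""
    cutoff = max(t for t, *_ in posts) - window
    nearby = [
        p for p in posts
        if abs(p[1] - lat) <= radius_deg
        and abs(p[2] - lon) <= radius_deg
        and p[0] >= cutoff
    ]
    return sorted(nearby)  # tuples sort by timestamp first

for t, _, _, caption in timeline(snaps, 29.76, -95.36):
    print(t.isoformat(), caption)  # rain started, water rising, street closed
```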

Steps to use Snap Map

  1. Make sure you have the Snapchat application installed on your Android or iOS device.
  2. Log in to Snapchat or create a new user account.
  3. From the main screen, pinch out with your fingers; this will bring up the Snap Map feature.
  4. The user is presented with a brief overview of Snap Map.
  5. The map is now displayed, zoomed-in to the user’s current location with a heatmap of the area.
  6. One can zoom out and pan to different areas, and certain hotspots are annotated with textual information.
  7. Zoom in, and long-press on a particular location.
  8. The user is presented with Snaps that were taken around that location and uploaded within the last twenty-four hours.
  9. Additionally, you can search for popular locations around the world from the search bar.
  10. To upload your own content to Snap Map, take a picture or video and make sure you geotag it with your current location. Simple as that! (Eerily simple, rather.)

Shodan (tech demo)

Tool: Shodan. www.shodan.io

Shodan is arguably the most invasive tool we’ve encountered so far. In essence, it is a search engine for Internet-connected devices. Its sources are HTTP/HTTPS, FTP (port 21), SSH (port 22), Telnet (port 23), SNMP (port 161), SIP (port 5060), and Real Time Streaming Protocol (which is where things get unambiguously creepy). To my knowledge, the ports listed are all the defaults associated with those protocols.

The types of data it gathers include information that the device sends back to the client, including IP address, type of server, and code documents associated with the device (I personally found a lot of HTML text documents). Shodan finds this by scanning the Internet for publicly open or unsecured devices and then providing a search engine interface to access this information. Users without Shodan accounts, which are free, can see up to ten search results; those with accounts get up to fifty. For further access, you need to pay a fee and provide a reason for use.

The “reason for use” is pretty key. From the vast array of online articles that have been published about Shodan since its launch in 2013, one gets two distinct pictures of Shodan: in the first, this is a tool that assists law enforcement officials, researchers (broadly construed) and business professionals interested in learning about how their products are being used. In the second, it’s a way to get unauthorized access to all sorts of information, including live webcam streams and other obviously invasive flows of information. It was very, very easy for me to use Shodan to access what I believe to be security cameras inside personal residences. Shodan also offers an open API to allow other tools to access its entire database.
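Shodan’s API has an official Python client (installable with pip install shodan). A minimal sketch of a search query, assuming you have generated an API key from your account page:

```python
# Minimal sketch of querying Shodan via its official Python client.
# Assumes `pip install shodan` and a valid API key (placeholder below).
import shodan

API_KEY = "YOUR_API_KEY"  # placeholder credential
api = shodan.Shodan(API_KEY)

try:
    # The same query string you'd type into the web search bar.
    results = api.search("webcam")
    print(f"Results found: {results['total']}")
    for match in results["matches"][:10]:  # free accounts see few results
        print(match["ip_str"], match.get("port"), match.get("org"))
        print(match["data"][:100])  # banner text the device sent back
except shodan.APIError as e:
    print(f"Error: {e}")
```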

Here’s how to get started:

  1. Sign up for an account at shodan.io. (All you need is an email address).
  2. Use the search bar at the top of the screen to input a query. Anything can go here, although for those just curious to see what Shodan can do, a geographical location or a type of device seems to make sense. Searching for “webcam” will indeed pull up live webcam streams, as well as information about the camera.
  3. (Well, 2.5). If you’re out of search query ideas, the “Explore” feature will pull up popular search terms.

That’s pretty much it!

In the space of a few minutes, I was able to spy through a Norwegian weather camera, peer into a hospital in Taiwan, watch what appeared to be an office in Russia (where two bored-looking employees had a conversation), look into a few houses, and see inside an MIT dorm room. As it was, I only got video, not audio, although Real Time Streaming Protocol appears to support audio as well. That could simply be how those cameras are configured.

The legality of this is questionable. But in the words of a tech-savvy friend I talked to about this, “if you’re not in the blackmail business, you probably won’t arouse any suspicion.”

I will reserve further commentary for now.

 


4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community

Paper:

Bernstein, M. S., Monroy-Hernández, A., Harry, D., André, P., & Panovich, K. (2011). 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media.

Discussion leader: Tianyi Li

Summary:

This article explores the concepts of ephemerality and anonymity, using the first and most popular board, “/b/”, on the imageboard website 4chan as a lens. To better understand how the design choices we make impact the kinds of social spaces that develop, the authors perform a content analysis and two data-driven studies of /b/. 4chan is perhaps best known for its role in driving Internet culture and its involvement with the “Anonymous” group; the authors believe its design plays a large role in its success, despite that design’s counter-intuitiveness. In this paper, the authors quantify 4chan’s ephemerality (there are no archives; most posts are deleted in a matter of minutes) and anonymity (there are no traditional user accounts, and most posts are fully anonymous) and discuss how the community adapts to these unusual design strategies.

The authors review the prior literature on anonymity and ephemerality. First, they review communities that occupy different points on the spectrum of anonymity, from completely unattributed to real names (e.g., Facebook). There is previous research on online communities that use pseudonymity to build user reputation, and on anonymity in small groups; the authors reconsider those results in the context of larger online communities. The authors acknowledge the mixed impact of anonymity on online communities. On the one hand, identity-based reputation systems are important in promoting pro-social behavior, and removing traditional social cues can make communication impersonal and cold, as well as undermine credibility. On the other hand, anonymity may foster a stronger communal identity as opposed to bond-based attachment with individuals, affect participation in classrooms and email lists, and produce more ideas and overall cohesion within groups. Second, they recognize the rarity of ephemerality in large-scale online communities and claim to be the first to study it directly in situ. Although data permanence has been the norm for online communities, it has downsides in some situations; in the authors’ example, archiving history in chat rooms elicited strong negative reactions. They also relate the previous academic work to practical implications for online social environments.

4chan is composed of themed boards, each containing threads of posts. The authors justified their choice of /b/, the “random” board, which is 4chan’s first and most active board, where “rowdiness and lawlessness” happen, and which has been called the “life force of the website”. After explaining the background of this forum and board, the authors described and discussed the methods and results of their two studies.

The first study focuses on ephemerality. Ephemerality on 4chan is enforced by thread expiration and by real-time ranking and removal of threads based on their replies. The authors characterized /b/’s content by the communal language used, and conducted a grounded analysis of a series of informal samples of thread-starting posts during an eight-month participant observation on the site. The authors then collected a dataset of activity on /b/ for two weeks and conducted a content analysis of 5,576,096 posts in 482,559 threads. The authors believed this sample was representative of most daily and weekly cycles. They did not capture images, due to the nature of the materials. They captured the daily activity in the two-week dataset by calculating the number of threads per hour, thread lifetime in seconds, and the amount of time (in seconds) each thread stayed on the first page. The amount of posting activity on this one board is roughly the same as in arenas like Usenet and YouTube. They identified the high-traffic times on the website, when both the lifetime and the first-page exposure of threads are lowest due to high competition. Content deletion plays a role in pushing the community to quickly iterate and generate popular memes. Users can control ephemerality by bumping threads up through replies and burying them through “sage”; unintuitively, such efforts raise community participation. The authors also found that users have developed mechanisms to keep valuable content: they preserve images on their local machines, and they donate images in return for their requests.
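The bump/sage mechanic is simple enough to model directly: a reply moves its thread back to the top of the board unless it is saged, and threads pushed past the board’s capacity expire. Here is a toy sketch of that ranking rule (a simplification; real 4chan also has bump limits and sticky threads):

```python
# Toy model of /b/-style bump ordering: the board keeps a fixed number
# of threads; a reply "bumps" its thread to the top unless it is saged;
# threads pushed off the end expire. (Simplified relative to real 4chan.)
class Board:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.threads = []  # index 0 = top of page 1

    def post_thread(self, thread_id):
        self.threads.insert(0, thread_id)
        # Threads beyond capacity fall off the board and are deleted.
        expired = self.threads[self.capacity:]
        self.threads = self.threads[:self.capacity]
        return expired

    def reply(self, thread_id, sage=False):
        if thread_id not in self.threads:
            return  # thread already expired
        if not sage:
            # A normal reply bumps the thread back to the top.
            self.threads.remove(thread_id)
            self.threads.insert(0, thread_id)
        # A saged reply adds content without bumping, letting the
        # thread sink as new threads arrive.

board = Board(capacity=3)
for t in ["a", "b", "c", "d"]:
    board.post_thread(t)       # "a" expires when "d" arrives
board.reply("b")               # bump "b" above "c" and "d"
print(board.threads)           # ['b', 'd', 'c']
```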

The second study focuses on anonymity. Anonymity on 4chan plays out by not requiring an account to post and by not enforcing unique pseudonyms. Despite the existence of “tripcodes” for password holders, the authors found that this feature, like pseudonyms, is largely eschewed. The authors used the two-week data sample to analyze the identity metadata of each post. They found that only 10% of posts use pseudonyms and less than 2% post with an email, 40% of which are not actual emails and exist mainly to trigger the “sage” feature. Tripcodes are used mainly by users to privately retain authorship of previous posts. The authors found that anonymity can be a feature of 4chan’s dynamics, despite the usual disbelief. It provides cover for more intimate and open conversations in advice and discussion threads, and it encourages experimentation with new ideas or memes by masking the sense of failure and softening the blow of being ignored or explicitly chastised. In addition, the community is able to recognize authenticity via timestamps. Furthermore, instead of building individual reputation, anonymity on /b/ gives rise to community boundaries drawn with textual, linguistic, and visual cues.

Reflections:

This article had some nice strengths. It is the first to study ephemerality in a large-scale online community directly and in situ. It provides a nice overview of an extreme opposite of commonly accepted online community norms. As the authors note, each of the opposing positions on user identity and data permanence has its own merits and advantages. The authors provided an in-depth literature review of the dominant beliefs as well as a comprehensive analysis of a representative sample of the opposite extreme.

Not having been aware of the existence of such communities, I was intrigued to read the paper, but my mind was also blown when I tried to see what 4chan looks like. The first impression I got of the /b/ board was that “this is the dark and dirty side of the cyber world”. However, after finishing the paper and some of the related discussion mentioned in the literature, I appreciated the authors’ professionalism and sharp insights into the online ecosystem. Also, checking other boards reshaped my impression and made me realize 4chan is a powerful platform where people care more about the truth itself than about judging whether things are true by who is telling them. I also learned about the real-world impact of 4chan, both in the US election and in the CNN blackmail scandal.

The results from the two studies are interesting. The most impressive is that the effect of ephemerality on content quality echoes my personal experience. As the authors quote from previous research, “social forgetfulness” has played an important role in human civilization. This reminds me of the saying that “there’s nothing new under the sun”. The richness of information is never as valuable as limited attention and human memory. Although I applaud the concept of ephemerality, I remain suspicious of anonymity. To be honest, it is challenging for me to stay unbiased toward an online community with such a high degree of autonomy through anonymity. I see the value of a certain level of anonymity given the authors’ study results and discussion, but I still doubt whether the good outweighs the bad. Unlike ephemerality, which leads to a competition for attention by producing high-quality and eye-catching content, anonymity removes from posters the burden of responsibility for the impact their posts have on the community.

I admired the methods the authors used to conduct their analysis. The statistical analysis and the daily activity graph are straightforward and self-explanatory. I had never used content analysis myself before. After researching more details, I feel that the part where the authors conducted grounded analysis using a series of informal samples of thread-starting posts on /b/ is closer to the descriptions I read of content analysis. For the two-week dataset, they mainly did a quantitative analysis of post metadata, including post timestamps, reply timestamps, usernames, and user emails.

Last but not least, despite being uncomfortable with some posts on the website, I wonder whether the decision not to capture images changes the analysis fundamentally, since, intuitively, images are highly likely to be the real “life force” of the website, and they kept recurring in my limited visits. I would have appreciated it if the authors had captured at least the metadata of some of these posts and analyzed the weight and impact of inappropriate content on the overall website.

Questions:

* What do you think of the advantages and disadvantages of anonymity and ephemerality discussed in the paper? Do you have additional perspectives?

* How do you think such online communities as 4chan impact the overall cyber ecosystem, and real world?

* Do you trust the anonymity in online communities?

* Did you know about 4chan before? What did you think of it? Does this paper influence your point of view and how?

* Which point on the user identity spectrum do you think works best? In what situations or contexts?


Check: Collaborative Fact-checking

Technology: Check – Verify breaking news online. checkmedia.org

Demo leader: Tianyi Li

Summary:

Check is a web-based tool on Meedan’s platform for collaborative verification of digital media. It was founded in 2011 as Checkdesk and adopted its new name in 2016. Meedan has worked to build online tools, support independent journalists, and develop media literacy training resources that aim to improve the investigative quality of citizen journalism and help limit the rapid spread of rumors and misinformation online. Check combines smart checklists, workflow integrations, and intuitive design to support an efficient and collaborative verification process. It was used during Electionland, a collaborative project held during the US elections to look at and report on voting access across the country on Election Day and the days leading up to it.
People can post media links to their project on Check and invite others to investigate and verify the contents. Check provides a web interface for people to add annotation notes, set the verification status, add tags (not working), and add different types of tasks for each link.

To investigate in Check, you first set up a new account and create a team. You can create multiple teams and join other people’s teams. In each team, you can set up projects for your team’s specific investigations. Each project allows you to add items, like social media posts or web sites, that you are investigating. There are four roles in Check: team owner, team editor, journalist, and contributor. A different level of access and permissions is granted to each role. Details on user roles here.
Check is an open-source project and offers its API on GitHub. The project uses Ruby on Rails (or simply Rails, a server-side web application framework written in Ruby under the MIT License). They offer both Docker-based (Docker is a software container platform) and non-Docker-based installations for deploying the project on your local machine. Other applications can communicate with this service (and test this communication) using the client library, which can be generated automatically. People can also use functions exposed by this application through the client library.
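As a rough illustration only, the sketch below shows what posting a media link to a Check-style HTTP API might look like from Python. The endpoint path, payload fields, and auth header are hypothetical placeholders rather than Check’s documented interface; consult the project’s GitHub documentation for the real calls.

```python
# Hypothetical sketch of adding a media link to a Check-style project
# over HTTP. The endpoint, payload fields, and auth header are
# placeholders, NOT Check's documented API; see the project's GitHub
# docs for the real interface.
import requests

BASE_URL = "https://check-api.example.org"   # placeholder host
TOKEN = "YOUR_API_TOKEN"                     # placeholder credential

payload = {
    "team": "investigative-tech",            # hypothetical team slug
    "project": "election-rumors",            # hypothetical project name
    "url": "https://twitter.com/example/status/123",  # link to verify
}
resp = requests.post(
    f"{BASE_URL}/api/media",                 # hypothetical endpoint
    json=payload,
    headers={"X-Check-Token": TOKEN},        # hypothetical header
    timeout=10,
)
resp.raise_for_status()
print(resp.json())                           # e.g. the new item's id and status
```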
Limitation: Check currently supports only Chrome.
Demo:
  • Create a new account
    • Visit https://checkmedia.org/ on Google Chrome only
    • Set up a new account. You can:
      • Authorize your account with an existing social media platform (currently that’s Facebook, Twitter or Slack)
      • Set up a new account with your email address
  • Create a team
    • Type in a Team Name.
    • Type in a Team URL.
  • Join a team: https://checkmedia.org/investigative-tech/join
  • Create a new project
    • From your project page, click on “Add Project”
    • Start typing the name of the new project. (Don’t worry, you can change this later)
    • Hit Enter/Return
  • Add a link for Investigation
    • Click on the project name on the left. This opens up your project workspace.
    • Click on the bottom line, where it says “Paste a Twitter, Instagram, Facebook or YouTube link”
    • Here, you can drop in a link from any of these social networks (soon, you’ll be able to add any link!)
    • Click “Post”
    • This will create a page for investigation of the link.
  • Annotating a link
    • Add a note:
      • In the bar at the bottom, type a note. For instance, type “I am looking into the exact location for this Tweet.”
      • Click Submit
      • This will add your note.
      • Others in your team can also add notes as they collaborate on the investigation.
    • Set verification status:
      • In the upper left hand corner of the link, click on the blue “Undetermined” dropdown.
      • Choose a status
      • This sets the status and adds a note to the log
    • Add a tag:
      • At the bottom of your media, click on the “…” and choose edit
      • (I don’t think this function works…)
  • Add a task to the link under investigation:
    • Go to the media page and click “Add task” link.
    • Choose a type from the list


Examining Technology that Supports Community Policing

Article: Examining Technology that Supports Community Policing

Authors: Sheena Lewis and Dan A. Lewis

Leader: Ri

Summary:

Community policing is a strategy of policing that focuses on police building ties and working closely with members of the community [2]. The paper [1] analyzes how citizens use technology to support community policing by conducting a comparative study of two websites created to help citizens address crime. One is CLEARpath, an official website created by the Chicago police to provide information to, and receive tips from, the citizens of Chicago. The other is an unofficial web forum moderated by residents of the community for problem-solving conversations.

The motivation of the paper [1] lies in:

Designing technology to best support collective action against crime.

The paper [1] discusses theory-based crime prevention from two perspectives: i) victimization theory, the police perspective, and ii) social control theory, the community perspective. Victimization theory focuses on understanding crime as events that occur between a potential victim, an offender, and the environment, and it tries to prevent criminal behavior by educating potential victims. In contrast, social control theory suggests that social interactions influence criminal acts through informal enforcement of social norms, so criminal behavior can be curbed by strong application of those norms.

In the later sections of the paper, the authors examined a diverse north-side Chicago community and its online and offline discussions about crime. This particular community had a medium level of violent crime along with a high amount of property damage.

The authors found that the Chicago police had implemented smarter technology in CLEARpath, such as finding crime hot spots, connecting to other law enforcement agencies, and providing extensive mapping technology. The website had 15 modules in total, 12 of them providing information and 3 accepting community concerns as input. The community also had an informal community policing web forum with 221 members as of 2011, described as a “community crime website”, which the authors examined and where they found numerous online posts. Interestingly, the authors found only 3 community-concern posts in 365 days on the police website, versus 10 posts in 90 days on the community web forum. This shows a significant difference in participation between the official police website and the informal community web forum.

Based on their findings, the researchers also deduce that residents of the community use the forum to:

  • Build relationships and strengthen social ties,
  • Discuss ways that they can engage in collective action,
  • Share information and advice, and
  • Reinforce offline and online community norms.

Based on these findings, the authors suggest that there should be significant changes in the design of crime prevention tools. To increase active participation, designs should focus not only on citizen-police interaction but also on citizen-citizen interaction, where relationship building can occur.

Reflection:

The paper [1], in my opinion, takes an HCI approach to the crime theories and how they can be translated into design implications. The problem is important because sharing personal experiences and strengthening social ties in a community can further address local concerns and criminal activity. The existing solution, the official police website, doesn’t encourage active participation. As a result, the information conveyed on the website may not have maximum impact. The authors instead suggest that web tools to support community policing should be designed to support communication that allows residents to engage in collective problem-solving discussions and to informally regulate social norms.

In my opinion, community policing can increase awareness among the residents of a community. As the paper [1] suggests, community policing concerns can range from a member’s improper landscaping to an elderly person being assaulted during a home invasion. Community policing reflects the real and contemporary problems faced by the community itself and its ways of addressing them.

However, what I found troubling about this platform is that the article mentions that site moderators have the power to ban members of the forum if they don’t abide by the group’s rules and regulations. It got me thinking: what happens after a member has been banned? Since the banned member is presumably still a resident, he or she remains part of the community. Is the ban temporary or permanent? Is the banned member approached in person by other members of the community to resolve the situation? Or does it create a more unsettling situation in real life?

I think the authors also mention another important topic: the legitimacy of community policing in the eyes of police officials. The article notes that the moderators managed the legitimacy of the website by distancing it from the police. I also think trust and accountability are two very important challenges for community policing.

For further study, I suggest a later paper [3], “Building group capacity for problem solving and police–community partnerships through survey feedback and training: a randomized control trial within Chicago’s community policing program”, published in 2014, which also analyzes Chicago’s community policing program and proposes building police–community partnerships through survey feedback and training.

 

Questions:

  • Could you propose some designs that may increase the participation of community members in the official law-enforcement website?
  • Does the banning of members, who violate group rules, make the community a better place? Or does it only separate the members from the virtual world as they keep their presence in the community intact?
  • Do you think it is possible to establish the legitimacy of community policing in the eyes of police officials? Can trust in police officials be increased? And can the online platform introduce accountability to community policing?
  • What do you think can be done for the people who are not part of the online community? Does community policing explicitly need all the members of the community to actively participate in the online web forum?

 

References:

[1] Lewis, S., & Lewis, D. A. (2012). Examining technology that supports community policing. In Proceedings of the 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 (pp. 1371–1380). DOI: 10.1145/2207676.2208595

[2] Community policing, as defined in Wikipedia.

[3] Graziano, L.M., Rosenbaum, D.P. & Schuck, A.M. J Exp Criminol (2014) 10: 79. https://doi.org/10.1007/s11292-012-9171-y


Police and user-led investigations on social media.

Article:

Trottier, D. (2014). Police and user-led investigations on social media. JL Inf. & Sci., 23, 75.

Leader: Leanna

Summary:

The article explores top-down and bottom-up policing, the former referring to traditional policing and the latter to crowdsourced policing. Because social media has increased both visibility and, consequently, access to personal information, its existence has facilitated a convergence of the police and the public. To demonstrate this point, the author notes that social media centralizes and stores intelligence in one place. And everyone, and their brother, can now surveil. This includes surveillance for traditional policing as well as for public scrutiny.

Continuing the discussion of everyday surveillance, Trottier discusses the domestication of social media in our lives. In particular, he points to surveillance creep, or function creep, which results when technology is used beyond its intended function. With regard to traditional policing, the author discusses the shift in Facebook’s function from a communication platform to a source of police intelligence. And with regard to crowdsourcing, the public can now more easily engage in policing activities and, consequently, does so with fewer guiding protocols.

The author then spends the rest of his article providing three examples of policing activities with social media: police adoption of social media surveillance; crowdsourcing and the 2011 Vancouver riots; and, crowdsourced surveillance-businesses.

In the first example – police adoption of social media surveillance – Trottier outlines six different ways the police can obtain information from social media, such as manual searches, requests made directly to companies, combined manual and automated searches, lawful interception of messages, and embedded software. The sixth way the author points toward, analysis, is arguably an outlier on his list and is best described separately. He simply lists various processes of analysis, such as temporal-based reconstruction and sentiment analysis.

In the next example – the 2011 Vancouver riots – he describes the crowd’s involvement in social control immediately following the Vancouver Stanley Cup riots of 2011. The mass of photos posted online provided the police with an abundance of information – often before they even knew the identities of the people involved in the riots.

Lastly, in the third example – crowdsourced surveillance businesses – Trottier discusses various such businesses, including Blueservo, Internet Eyes, and Facewatch. Each capitalizes on the crowd to provide security services. For example, Internet Eyes uses crowdsourcing to monitor CCTV feeds for registered business owners. In return, after paying a fee to sign up, viewers receive compensation for their time and effort. In his discussion of Internet Eyes, he notes the relatively recent trouble the company has gotten into, namely growing privacy concerns among shoppers.

Reflection:

In his conversation about surveillance entering the domestic sphere, Trottier mentions that “The homestead and other non-commercial spaces were locations where people were comparatively free from surveillance” (para 9). From a sociological perspective, this view of surveillance appears rather myopic. To be sure, surveillance is becoming more commonplace and domesticated. However, many groups in society have never been free from surveillance. Black Americans, for example, have been under police and state scrutiny for years, not only in public spheres but also in their private lives.

The observation that historically many people have been subject to comparatively high levels of surveillance is a non-trivial one. On one level, the increased attention paid to the domestication of surveillance makes it seem as though it was fine when Black Americans were being surveilled but that now, when White Americans are being surveilled, the encroachment of surveillance into personal spaces is overreach. And if this is the case, then the issue here is not the domestication of surveillance but that surveillance is now more indiscriminate.

In addition, Internet Eyes is fascinating not only for its application of crowdsourcing to security but also for its worker exploitation. Businesses are capitalizing on crowdsourcing, arguably like the early days of industrialization. With relatively new technologies and approaches, regulations and policies fall far behind. As on other crowdsourcing platforms, such as MTurk, worker compensation often does not come anywhere near the minimum wage. It would not be surprising if crowdsourcing union groups soon emerged, so workers aren’t left with the options of participating for less than livable wages (assuming they don’t also work elsewhere) or not participating at all.

Questions:

  1. Are we acclimatized to surveillance in our everyday lives? If so, do some people not see the threat it poses to our civil liberties?
  2. What does it mean to consent to surveillance in digital public spaces? Can we reasonably opt out of social media, search, or email?
  3. Should social media be a tool for the police?
  4. What are some of the ethical concerns with crowdsourced security?

Effects of Sensemaking Translucence on Distributed Collaborative Analysis.

Paper: Goyal, N., & Fussell, S. R. (2016). Effects of Sensemaking Translucence on Distributed Collaborative Analysis. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 288–302). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820071

Discussion Leader:  Annie Y. Patrick

Summary:

Goyal and Fussell focus on the concept of sensemaking translucence. Sensemaking is a process utilized by crime investigators in which numerous pieces of information are collected to form multiple hypotheses, which are then further examined to either confirm or disconfirm them. However, challenges in this process include investigators’ biased perceptions, confirmation bias, and groupthink. Sensemaking translucence is the process of bringing awareness of the sensemaking process to analysts.

To address the challenge of cognitive biases in sensemaking, the authors created a sensemaking translucence interface with two parts: a hypothesis window and a suspect visualization. The hypothesis window facilitates the exchange of ideas about suspects’ means, motives, and alibis. The suspect visualization provides automatic feedback about suspects drawn from the hypothesis window, the group chat window, and a digital sticky note feature. The authors predicted that pairs using the sensemaking translucence interface would perform better on a collaborative analysis task than those using a standard interface (H1), would rate the tool as more useful than the standard interface (H2a), would report a higher level of activity (H2b), and would rate their collaborative experience higher (H3).
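The paper does not publish the interface’s code, but the kind of automatic feedback the suspect visualization provides can be approximated by tallying suspect mentions across the shared artifacts. A hypothetical sketch, with all names and data invented for illustration:

```python
# Hypothetical sketch of the kind of aggregation a suspect visualization
# might do: count how often each known suspect is mentioned across the
# hypothesis window, group chat, and sticky notes. Names and data are
# invented; this is not the authors' implementation.
from collections import Counter

SUSPECTS = ["Adams", "Baker", "Clark"]  # hypothetical suspect list

def tally_mentions(sources):
    """Count suspect mentions across all text sources."""
    counts = Counter()
    for text in sources:
        for suspect in SUSPECTS:
            counts[suspect] += text.count(suspect)
    return counts

hypothesis_window = ["Baker had the means and no alibi."]
group_chat = ["I think Baker was near the scene.", "What about Clark?"]
sticky_notes = ["Adams: alibi confirmed."]

counts = tally_mentions(hypothesis_window + group_chat + sticky_notes)
# A real interface would render this as a chart; here we just print it.
for suspect, n in counts.most_common():
    print(suspect, n)  # Baker 2, Clark 1, Adams 1
```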

To conduct this study, 20 pairs of remote participants role-played as detectives to solve a crime. The pairs were randomly assigned either the standard interface or the sensemaking translucence interface. Each participant was given a set of documents about 3 cold murder cases, with information about 7 murders and 40 potential suspects hidden in approximately 20 documents divided between the pair. The pairs had to share their information to find the name of the serial killer within 50 minutes. The study was analyzed using the participants’ final reports and post-task reports.

Upon analysis, pairs using the sensemaking translucence interface identified more clues and identified the serial killer in less time than standard interface users. However, the interface was rated less helpful for providing support, generating hypotheses, and viewing multiple suspects.

Reflection:

This research article details a study of a sensemaking translucence interface that examines the challenges in collaborative sensemaking. The authors motivate their study by discussing the biased perceptions investigators hold, which at the least could delay justice and at the worst place the wrong person in prison.

Though this study provides an initial platform for comparing how a more collaborative sensemaking translucence interface can aid sensemaking in criminal cases, there are areas that could have strengthened the study. The study used 40 participants, ranging from 18 to 28 years old, who were either undergraduate or graduate students. This is a very limited sample that represents neither the general population nor the professional users of this type of tool. Also, the participants were placed in pairs; I would guess that real investigative situations draw information from multiple sources, further complicating the situation. The researchers do acknowledge the need for field research; however, changes to this study could have sufficed too.

  1. Would incorporating a more diverse sample have affected the study differently? Why or why not?
  2. How does the concept of teammate inaccuracy blindness (analysts treat all information from a partner as valid and useful, regardless of its actual quality) apply within the context of crowdsourced data and information parsed through online and social media outlets?
  3. Does this sample reflect the population that would be most likely using this type of interface? How could this sample/study have been done differently?
  4. What are other areas (other than criminal investigations) that could use the sensemaking translucence interface technology?
  5. The pairs that used the interface identified more clues and solved the case a greater proportion of the time than those using the standard interface. However, the users of the sensemaking translucence interface rated it as less helpful for providing support, generating hypotheses, and monitoring multiple suspects. Why do you think this was the case (in other words, why were the responses not positive for all of the project’s hypotheses)?
