Tech Demo: Snap Map

www.npr.org/sections/goatsandsoda/2017/07/06/535076690/can-snapchats-new-snap-map-bring-the-world-closer-together

Brief Overview
Snap Map is a feature of Snap Inc.’s Snapchat application that gives users a searchable world map, and aggregates geotagged Snaps taken in the last 24 hours. Locations that are particularly popular are highlighted on the map with a heatmap gradient that ranges from sky blue to yellow to red.

Snap Map was introduced in June 2017 and drew criticism for exacerbating existing privacy and security issues. However, additional – perhaps unforeseen – uses have emerged: keeping tabs on loved ones in disaster-prone areas, monitoring one’s surroundings in those areas, and supporting investigative journalism. With Snapchat’s user base of 166 million (which now looks small next to Instagram’s 250 million) posting at least 700 million photos per day – and especially with the introduction of Snap Map – Snapchat is increasingly becoming a source of information for journalists.

Snap Map is particularly useful because Stories are geotagged and cannot be uploaded retroactively (unless a user goes to great lengths to pass off old content, there is a reasonable degree of certainty about when and where a Snap was taken). The timestamped, geotagged content visible on Snap Map can be used to generate a timeline of events. The heatmap is helpful – if not crucial – for discovering events that may not yet have been covered by other news outlets, for fact-checking, and for gaining additional insight into an emerging story. [As I will demonstrate in my demo.]
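Since every Snap on the map carries a timestamp and a geotag, assembling such a timeline is essentially a filter-and-sort over those two fields. A minimal sketch in Python (the records and field layout here are invented for illustration; Snap Map exposes no public API):

```python
from datetime import datetime

# Hypothetical records a journalist might transcribe from Snap Map:
# (timestamp, latitude, longitude, note).
snaps = [
    (datetime(2017, 7, 6, 14, 30), 29.76, -95.36, "flooded underpass"),
    (datetime(2017, 7, 6, 12, 5), 29.75, -95.37, "rising water"),
    (datetime(2017, 7, 6, 16, 45), 29.77, -95.35, "rescue boats"),
]

def build_timeline(snaps, since):
    """Keep snaps newer than `since` and order them chronologically."""
    recent = [s for s in snaps if s[0] >= since]
    return sorted(recent, key=lambda s: s[0])

for ts, lat, lon, note in build_timeline(snaps, datetime(2017, 7, 6)):
    print(ts.isoformat(), f"({lat}, {lon})", note)
```

The 24-hour window Snap Map enforces corresponds to choosing `since` as "now minus one day".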

Steps to use Snap Map

  1. Make sure you have the Snapchat application installed on your Android or iOS device.
  2. Log in to Snapchat or create a new account.
  3. From the main screen, pinch out with your fingers; this brings up the Snap Map feature.
  4. You are presented with a brief overview of Snap Map.
  5. The map is then displayed, zoomed in to your current location with a heatmap of the area.
  6. You can zoom out and pan to different areas; certain hotspots are annotated with textual information.
  7. Zoom in, and long-press on a particular location.
  8. You are presented with Snaps that were taken around that location and uploaded within the last twenty-four hours.
  9. Additionally, you can search for popular locations around the world from the search bar.
  10. To upload your own content to Snap Map, take a picture or video and make sure you geotag it with your current location. Simple as that! (Eerily simple, rather.)


Shodan (tech demo)

Tool: Shodan. www.shodan.io

Shodan is arguably the most invasive tool we’ve encountered so far. In essence, it is a search engine for Internet-connected devices. Its sources are HTTP/HTTPS, FTP (port 21), SSH (port 22), Telnet (port 23), SNMP (port 161), SIP (port 5060), and Real Time Streaming Protocol (port 554 – which is where things get unambiguously creepy). To my knowledge, the ports listed are all the defaults associated with those protocols.

The types of data it gathers include information the device sends back to a client – including IP address, server type, and code documents associated with the device (I personally found a lot of HTML documents). Shodan finds this by scanning the Internet for publicly open or unsecured devices, then provides a search engine interface to access the information. Users without Shodan accounts can see up to ten search results; those with (free) accounts get up to fifty. For further access, you need to pay a fee and provide a reason for use.

The “reason for use” is pretty key. From the vast array of online articles published about Shodan since it gained mainstream attention around 2013, one gets two distinct pictures of it: in the first, it is a tool that assists law enforcement officials, researchers (broadly construed), and business professionals interested in learning how their products are being used. In the second, it’s a way to get unauthorized access to all sorts of information, including live webcam streams and other obviously invasive flows of information. It was very, very easy for me to use Shodan to access what I believe to be security cameras inside personal residences. Shodan also offers an open API that allows other tools to access its entire database.
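The official `shodan` Python library wraps that API. The sketch below composes a filtered query string in plain Python; the actual API call is left commented out because it requires a key, so `YOUR_API_KEY` is a placeholder, not a working credential:

```python
def build_query(term, **filters):
    """Compose a Shodan query like 'webcam country:NO port:554'
    from a free-text term plus filter key/value pairs."""
    parts = [term] + [f"{k}:{v}" for k, v in sorted(filters.items())]
    return " ".join(parts)

query = build_query("webcam", country="NO", port=554)
print(query)  # webcam country:NO port:554

# With a valid key, the official client would run the search:
# import shodan
# api = shodan.Shodan("YOUR_API_KEY")
# for match in api.search(query)["matches"]:
#     print(match["ip_str"], match.get("product"))
```

Filters like `country:` and `port:` are part of Shodan’s documented query syntax; combining them is how one narrows a search from “all webcams” to a specific region or protocol.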

Here’s how to get started:

  1. Sign up for an account at shodan.io. (All you need is an email address.)
  2. Use the search bar at the top of the screen to input a query. Anything can go here, although for those just curious to see what Shodan can do, a geographical location or a type of device seems to make sense. Searching for “webcam” will indeed pull up live webcam streams, as well as information about the camera.
  3. (Well, 2.5). If you’re out of search query ideas, the “Explore” feature will pull up popular search terms.

That’s pretty much it!

In the space of a few minutes, I was able to spy through a Norwegian weather camera, into a hospital in Taiwan, into what appeared to be an office in Russia – where I watched two bored-looking employees have a conversation – into a few houses, and into an MIT dorm room. I only got video, not audio, although Real Time Streaming Protocol appears to support audio as well; that may simply be how those cameras are configured.

The legality of this is questionable. But in the words of a tech-savvy friend I talked to about this, “if you’re not in the blackmail business, you probably won’t arouse any suspicion.”

I will reserve further commentary for now.


4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community

Paper:

Bernstein, M. S., Monroy-Hernández, A., Harry, D., André, P., & Panovich, K. (2011). 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media.

Discussion leader: Tianyi Li

Summary:

This article explores the concepts of ephemerality and anonymity, using the first and most popular board, “/b/”, on the imageboard website 4chan as a lens. To better understand how design choices impact the kinds of social spaces that develop, the authors perform a content analysis and two data-driven studies of /b/. 4chan is perhaps best known for its role in driving Internet culture and its involvement with the “Anonymous” group; the authors believe its design, despite being counter-intuitive, plays a large role in its success. In this paper, the authors quantify 4chan’s ephemerality (there are no archives; most posts are deleted in a matter of minutes) and anonymity (there are no traditional user accounts, and most posts are fully anonymous) and discuss how the community adapts to these unusual design strategies.

The authors review the prior literature on anonymity and ephemerality. First, they review communities that occupy different points on the spectrum of anonymity – from completely unattributed to real names (e.g., Facebook). There is previous research on online communities that use pseudonymity to build user reputation, and on anonymity in small groups; the authors revisit those results in the context of large online communities. They acknowledge the mixed impact of anonymity on online communities. On the one hand, identity-based reputation systems are important in promoting pro-social behavior, and removing traditional social cues can make communication impersonal and cold, as well as undermine credibility. On the other hand, anonymity may foster a stronger communal identity, as opposed to bond-based attachment to individuals, improve participation in classrooms and email lists, and produce more ideas and greater cohesion within groups. Second, they note the rarity of ephemerality in large-scale online communities and claim to be the first to study it directly in situ. Although data permanence has been the norm in online communities, it has downsides in some situations; in one example the authors cite, archiving history in chat rooms elicited strong negative reactions. They also relate the prior academic work to practical implications for online social environments.

4chan is composed of themed boards, each containing threads of posts. The authors justify their choice of /b/, the “random” board: it is 4chan’s first and most active board, the place where “rowdiness and lawlessness” happen, and the “life force of the website”. After explaining the background of the forum and the board, the authors describe and discuss the methods and results of their two studies.

The first study focuses on ephemerality. Ephemerality on 4chan is enforced by thread expiration and by real-time ranking, in which threads are promoted or removed based on their replies. The authors characterized /b/’s communal language through a grounded analysis of a series of informal samples of thread-starting posts, conducted during an eight-month participant observation on the site. They then collected a dataset of activity on /b/ over two weeks and conducted a content analysis of 5,576,096 posts in 482,559 threads, a sample they believed representative of most daily and weekly cycles. They did not capture images, due to the nature of the materials. They captured daily activity in the two-week dataset by calculating the number of threads per hour, thread lifetime in seconds, and the amount of time (in seconds) threads stay on the first page. The amount of posting activity on this one board is roughly the same as in arenas like Usenet and YouTube. They identified high-traffic periods on the website, when both the lifetime and the first-page exposure of threads are lowest due to high competition. Content deletion plays a role in pushing the community to iterate quickly and generate popular memes. Users can control ephemerality by bumping threads up through replies or burying them through “sage”; counter-intuitively, such efforts raise community participation. The authors also found that users have developed mechanisms to keep valuable content: they preserve images on their local machines, and they donate images in return for their requests.
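The activity metrics described above (thread lifetime, time on the first page) reduce to simple timestamp arithmetic once each thread is summarized by its first- and last-post times. A toy illustration in Python, with invented epoch-second numbers rather than the paper’s actual data:

```python
# Toy thread records: (first_post_epoch, last_post_epoch) in seconds.
threads = [
    (0, 240),    # lived 4 minutes
    (60, 900),   # lived 14 minutes
    (120, 300),  # lived 3 minutes
]

def median_lifetime(threads):
    """Median thread lifetime in seconds (last reply minus first post)."""
    lifetimes = sorted(last - first for first, last in threads)
    mid = len(lifetimes) // 2
    if len(lifetimes) % 2:
        return lifetimes[mid]
    return (lifetimes[mid - 1] + lifetimes[mid]) / 2

print(median_lifetime(threads))  # 240
```

The same per-thread reduction supports the other metrics: bucketing `first_post_epoch` by hour gives threads per hour, and replacing “last reply” with “time the thread left page one” gives first-page exposure.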

The second study focuses on anonymity. Anonymity on 4chan plays out by not requiring an account to post and not enforcing unique pseudonyms. Despite the existence of “tripcodes” for password holders, the authors found that this feature, like pseudonyms, is largely eschewed. Using the two-week data sample to analyze the identity metadata of each post, they found that only 10% of posts use pseudonyms and fewer than 2% include an email address – 40% of which are not actual emails but uses of the “sage” feature. Tripcodes are used mainly by users to privately establish authorship of a previous post. The authors found that, despite the usual disbelief, anonymity can be a feature of 4chan’s dynamics. It provides cover for more intimate and open conversations in advice and discussion threads, and it encourages experimentation with new ideas or memes by masking the sense of failure and softening the blow of being ignored or explicitly chastised. In addition, the community is able to recognize authenticity via timestamps. Furthermore, instead of building individual reputations, anonymity on /b/ gives rise to community boundaries marked by textual, linguistic, and visual cues.
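The identity-metadata analysis amounts to counting name and email fields across posts. A small sketch with made-up records (the field names are my own; the paper’s dataset schema isn’t specified here):

```python
# Invented post metadata; "Anonymous" is 4chan's default display name.
posts = [
    {"name": "Anonymous", "email": ""},
    {"name": "Anonymous", "email": "sage"},
    {"name": "pseudonym123", "email": ""},
    {"name": "Anonymous", "email": ""},
]

def identity_stats(posts):
    """Fraction of posts using a pseudonym, and filling the email field."""
    n = len(posts)
    pseudonymous = sum(p["name"] != "Anonymous" for p in posts) / n
    with_email = sum(bool(p["email"]) for p in posts) / n
    return pseudonymous, with_email

print(identity_stats(posts))  # (0.25, 0.25)
```

Run over the real two-week sample, counts like these yield the paper’s 10% pseudonym and sub-2% email figures.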

Reflections:

This article has some nice strengths. It is the first to study ephemerality in a large-scale online community directly and in situ. It provides a nice overview of an extreme opposite of commonly accepted online-community norms. As the authors note, the opposing positions on user identity and data permanence each have their own merits and advantages. The authors provide an in-depth literature review of the dominant beliefs as well as a comprehensive analysis of a representative sample of the opposite extreme.

Not having been aware of the existence of such communities, I was intrigued to read the paper, but my mind was also blown when I tried to see what 4chan looks like. My first impression of the /b/ board was that “this is the dark and dirty side of the cyber world”. However, after finishing the paper and some of the related discussion mentioned in the literature, I appreciated the authors’ professionalism and sharp insights into the online ecosystem. Checking other boards also reshaped my impression and made me realize that 4chan is a powerful platform where people care more about the truth itself than about judging whether things are true by who is telling them. I also learned about the real-world impact of 4chan, both in the US election and in the CNN blackmail scandal.

The results from the two studies are interesting. Most impressive to me is that the effect of ephemerality on content quality echoes my personal experience. As the authors quote from previous research, “social forgetfulness” has played an important role in human civilization. This reminds me of the saying “there’s nothing new under the sun”. The richness of information is never as valuable as our limited attention and memory. I applaud the concept of ephemerality, but I remain suspicious of anonymity. To be honest, it is challenging for me to stay unbiased about online communities with such a high degree of autonomy through anonymity. I see the value of a certain level of anonymity given the authors’ results and discussion, but I still doubt whether the good outweighs the bad. Unlike ephemerality, which leads to a competition for attention by producing high-quality, eye-catching content, anonymity removes from posters the burden of responsibility for the impact their posts have on the community.

I admired the methods the authors used to conduct their analysis. The statistical analysis and the daily activity graph are straightforward and self-explanatory. I had never used content analysis myself before; after researching it further, I feel that the part where the authors conducted a grounded analysis of a series of informal samples of thread-starting posts on /b/ is closer to the descriptions I read of content analysis. For the two-week dataset, they mainly did a quantitative analysis of the post metadata, including post timestamps, reply timestamps, usernames, and user emails.

Last but not least, although I was uncomfortable with some posts on the website, I wonder whether the decision not to capture the images in the posts changes the analysis fundamentally. Intuitively, those images may well be the real “life force” of the website; they kept recurring in my limited visits. I would have appreciated it if the authors had captured at least the metadata of some image posts and analyzed the prevalence and impact of inappropriate content on the overall website.

Questions:

* What do you think of the advantages and disadvantages of anonymity and ephemerality discussed in the paper? Do you have additional perspectives?

* How do you think such online communities as 4chan impact the overall cyber ecosystem, and real world?

* Do you trust the anonymity in online communities?

* Did you know about 4chan before? What did you think of it? Does this paper influence your point of view and how?

* Where on the user-identity spectrum do you think a community works best? In what situations or contexts?


Check: Collaborative Fact-checking

Technology: Check – verify breaking news online. checkmedia.org

Demo leader: Tianyi Li

Summary:

Check is a web-based tool on Meedan’s platform for collaborative verification of digital media. It was founded in 2011 as Checkdesk and adopted its new name in 2016. Meedan has worked to build online tools, support independent journalists, and develop media literacy training resources that aim to improve the investigative quality of citizen journalism and help limit the rapid spread of rumors and misinformation online. Check combines smart checklists, workflow integrations, and intuitive design to support an efficient and collaborative process. It was used during Electionland, a collaborative project during the US elections that looked at and reported on voting access across the country on Election Day and in the days leading up to it.

People can post media links to their project on Check and invite others to investigate and verify the content. Check provides a web interface for people to add annotation notes, set verification status, add tags (not working), and add different types of tasks for each link. To investigate in Check, you first set up a new account and create a team. You can create multiple teams and join other people’s teams. In each team, you can set up projects for your team’s specific investigations. Each project allows you to add items, such as social media posts or web sites, that you are investigating. There are four different roles in Check: team owner, team editor, journalist, and contributor. Different levels of access and permissions are granted to each role. Details on user roles here.

Check is an open-source project and offers its API on GitHub. The project uses Ruby on Rails (or simply Rails, a server-side web application framework written in Ruby under the MIT License). They offer both Docker-based (Docker is a software container platform) and non-Docker-based installation for deploying the project on your local machine. Other applications can communicate with this service (and test this communication) using the client library, which can be generated automatically. People can also use functions exposed by this application through the client library.

Limitation: Check currently supports only Chrome.
Demo:
  • Create a new account
    • Visit https://checkmedia.org/ on Google Chrome only
    • Set up a new account. You can:
      • Authorize your account with an existing social media platform (currently that’s Facebook, Twitter or Slack)
      • Set up a new account with your email address
  • Create a team
    • Type in a Team Name.
    • Type in a Team URL.
  • Join a team: https://checkmedia.org/investigative-tech/join
  • Create a new project
    • From your project page, click on “Add Project”
    • Start typing the name of the new project. (Don’t worry, you can change this later)
    • Hit Enter/Return
  • Add a link for Investigation
    • Click on the project name on the left. This opens up your project workspace.
    • Click on the bottom line, where it says “Paste a Twitter, Instagram, Facebook or YouTube link”
    • Here, you can drop in a link from any of these social networks (soon, you’ll be able to add any link!)
    • Click “Post”
    • This will create a page for investigation of the link.
  • Annotating a link
    • Add a note:
      • In the bar at the bottom, type a note. For instance, type “I am looking into the exact location for this Tweet.”
      • Click Submit
      • This will add your note.
      • Others in your team can also add notes as they collaborate on the investigation.
    • Set verification status:
      • In the upper left hand corner of the link, click on the blue “Undetermined” dropdown.
      • Choose a status
      • This sets the status and adds a note to the log
    • Add a tag:
      • At the bottom of your media, click on the “…” and choose edit
      • (I don’t think this function works…)
  • Add a task to the link under investigation:
    • Go to the media page and click the “Add task” link.
    • Choose a type from the list


Examining Technology that Supports Community Policing

Article: Examining Technology that Supports Community Policing

Authors: Sheena Lewis and Dan A. Lewis

Leader: Ri

Summary:

Community policing is a strategy of policing that focuses on police building ties and working closely with members of the community [2]. The paper [1] analyzes how citizens use technology to support community policing by conducting a comparative study of two websites created to help citizens address crime. One is CLEARpath, an official website created by the Chicago police to provide information to, and receive tips from, the citizens of Chicago. The other is an unofficial web forum moderated by residents of the community for problem-solving conversations.

The motivation of the paper [1] lies in:

Designing technology to best support collective action against crime.

The paper [1] discusses two theory-based approaches to crime prevention from two perspectives: i) Victimization Theory, the police perspective, and ii) Social Control Theory, the community perspective. Victimization theory understands crime as events that occur between a potential victim, an offender, and the environment, whereas social control theory suggests that social interactions influence criminal acts through informal enforcement of social norms. Victimization theory tries to prevent criminal behavior by educating potential victims. By contrast, social control theory suggests that criminal behavior can be influenced by the strong application of social norms.

In the later sections of the paper, the authors examine a diverse north-side Chicago community and its online and offline discussions about crime. This particular community had a medium level of violent crime along with a high level of property damage.

The authors found that the Chicago police had implemented smarter technology in CLEARpath, such as identifying crime hot spots, connecting to other law enforcement agencies, and providing extensive mapping technology. The website had 15 modules in total: 12 for providing information and 3 for accepting community concerns as input. The community also had an informal community-policing web forum, described as a “community crime website”, with 221 members as of 2011, where the authors found numerous online posts. Interestingly, the authors found only 3 community-concern posts in 365 days on the police website, versus 10 posts in 90 days on the community web forum – a significant difference in participation between the official police website and the informal community forum.
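Normalizing those counts to posts per day makes the participation gap concrete (a quick back-of-the-envelope check of the figures above):

```python
# Community-concern posts per day on each platform,
# using the counts reported in the study.
police_rate = 3 / 365   # official CLEARpath website
forum_rate = 10 / 90    # informal community web forum

# The forum saw roughly 13.5 times more concern posts per day.
print(round(forum_rate / police_rate, 1))  # 13.5
```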

Based on their findings, the researchers also deduce that residents of the community use the forum to:

  • Build relationships and strengthen social ties,
  • Discuss ways that they can engage in collective action,
  • Share information and advice, and
  • Reinforce offline and online community norms.

Based on these findings, the authors suggest that there should be significant changes in the design of crime-prevention tools. To increase active participation, designs should focus not only on citizen-police interaction but also on citizen-citizen interaction, where relationship building can occur.

Reflection:

The paper [1], in my opinion, takes an HCI approach to the crime theories and to how those theories can be translated into design implications. The problem is important: sharing personal experiences and strengthening social ties in a community can further address local concerns and criminal activity. The existing solution, the official police website, doesn’t encourage active participation; as a result, the information conveyed on the website may not achieve maximum impact. The authors instead suggest that web tools to support community policing should be designed to support communication that allows residents to engage in collective problem-solving discussions and to informally regulate social norms.

In my opinion, community policing can increase awareness among the residents of a community. As the paper [1] suggests, community-policing concerns can range from a member’s improper landscaping to an elderly resident being assaulted during a home invasion. Community policing reflects the real and contemporary problems faced by the community itself and its ways of addressing them.

However, what I found troubling about this platform is that site moderators have the power to ban members of the forum if they don’t abide by the group’s rules. It got me thinking: what happens after a member has been banned? Since the member is presumably still a resident, he or she remains part of the community. Is the ban temporary or permanent? Is the banned member approached in person by other members of the community to resolve the situation? Or does the ban create a more unsettling situation in real life?

The authors also mention another important topic: the legitimacy of community policing in the eyes of police officials. The article notes that the moderators managed the legitimacy of the website by distancing it from the police. I also think trust and accountability are two very important challenges for community policing.

For further study, I suggest a later paper [3], “Building group capacity for problem solving and police–community partnerships through survey feedback and training: a randomized control trial within Chicago’s community policing program”, published in 2014, which also analyzes Chicago’s community policing program and proposes a solution for police–community partnerships based on survey feedback and training.


Questions:

  • Could you propose some designs that may increase the participation of community members in the official law-enforcement website?
  • Does the banning of members who violate group rules make the community a better place? Or does it only separate those members from the virtual world while their presence in the physical community remains intact?
  • Do you think it is possible to establish the legitimacy of community policing in the eyes of police officials? Can trust in police officials be increased? And can the online platform introduce accountability to community policing?
  • What do you think can be done for the people who are not part of the online community? Does community policing explicitly need all the members of the community to actively participate in the online web forum?


References:

[1] Lewis, S., & Lewis, D. A. (2012). Examining technology that supports community policing. In Conference Proceedings – The 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 (pp. 1371-1380). DOI: 10.1145/2207676.2208595

[2] Community policing, as defined in Wikipedia.

[3] Graziano, L.M., Rosenbaum, D.P. & Schuck, A.M. J Exp Criminol (2014) 10: 79. https://doi.org/10.1007/s11292-012-9171-y


Police and user-led investigations on social media.

Article:

Trottier, D. (2014). Police and user-led investigations on social media. Journal of Law, Information and Science, 23, 75.

Leader: Leanna

Summary:

The article explores top-down and bottom-up policing, the former referring to traditional policing and the latter to crowdsourced policing. Because social media has increased both visibility and, consequently, access to personal information, its existence has facilitated a convergence of the police and the public. To demonstrate this point, the author notes that social media centralizes and stores intelligence in one place. And everyone, and their brother, can now surveil.  This includes surveillance for traditional policing as well as for public scrutinizing.

Continuing the discussion of everyday surveillance, Trottier discusses the domestication of social media in our lives. In particular, he points to surveillance creep, or function creep, which results when technology is used beyond its intended function. With regard to traditional policing, he discusses Facebook’s shift in function from a communication platform to a source of police intelligence. And with regard to crowdsourcing, the public can now engage with policing activities more easily and, consequently, with fewer guiding protocols.

The author then spends the rest of his article providing three examples of policing activities with social media: police adoption of social media surveillance; crowdsourcing and the 2011 Vancouver riots; and, crowdsourced surveillance-businesses.

In the first example – police adoption of social media surveillance – Trottier outlines six different ways police can obtain information from social media, such as manual searches, requests made directly to companies, combined manual and automated searches, lawful interception of messages, and embedded software. The sixth way the author points to – analysis – is arguably an outlier on his list and is best described separately: he simply lists various processes of analysis, such as temporal-based reconstruction and sentiment analysis.

In the next example – the 2011 Vancouver riots – he describes the crowd’s involvement in social control immediately following the Vancouver Stanley Cup riots of 2011. The mass of photos online provided the police with an abundance of information – often before they even knew the identities of the people involved in the riots.

Lastly, in the third example – crowdsourced surveillance businesses – Trottier discusses companies such as Blueservo, Internet Eyes, and Facewatch. Each capitalizes on the crowd to provide security services. For example, Internet Eyes uses crowdsourcing to monitor CCTV feeds for registered business owners: after paying a fee to sign up, viewers receive compensation for their time and effort. In his discussion of Internet Eyes, Trottier notes the relatively recent trouble the company has gotten into, namely growing privacy concerns among shoppers.

Reflection:

In his conversation about surveillance entering the domestic sphere, Trottier mentions that “The homestead and other non-commercial spaces were locations where people were comparatively free from surveillance” (para 9). From a sociological perspective, this view of surveillance appears rather myopic. For sure, surveillance is becoming more commonplace and domesticated. However, many groups in society have never been free from surveillance. Black Americans, for example, have been under police and state scrutiny for years, not only in public spheres but also in their private lives.

The observation that, historically, many people have been subject to comparatively high levels of surveillance is non-trivial. On one level, the increased attention being paid to the domestication of surveillance makes it seem that it was fine when Black Americans were being surveilled, but that now, when White Americans are being surveilled, the encroachment of surveillance into personal spaces is overreach. If this is the case, then the issue is not the domestication of surveillance but that surveillance is now more indiscriminate.

In addition, Internet Eyes is fascinating not only for its application of crowdsourcing to security but also for its worker exploitation. Businesses are capitalizing on crowdsourcing – arguably as in the early days of industrialization, with regulations and policies falling far behind relatively new technologies and approaches. As on other crowdsourcing platforms, such as MTurk, worker compensation often comes nowhere near minimum wage. It would not be surprising if crowdsource union groups soon emerged, so that workers aren’t left with the choice of participating for less than livable wages (assuming they don’t also work elsewhere) or not participating at all.

Questions:

  1. Are we acclimatized to surveillance in our everyday lives? If so, do some people not see the threat it poses to our civil liberties?
  2. What does it mean to consent to surveillance in digital public spaces? Can we reasonably opt out of social media, search, or email?
  3. Should social media be a tool for the police?
  4. What are some of the ethical concerns with crowdsourced security?


Effects of Sensemaking Translucence on Distributed Collaborative Analysis.

Paper:  Goyal, N., & Fussell, S. R. (2016). Effects of Sensemaking Translucence on Distributed Collaborative Analysis. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 288–302). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820071

Discussion Leader:  Annie Y. Patrick

Summary:

Goyal and Fussell focus on the concept of sensemaking translucence.  Sensemaking is a process used by crime investigators in which numerous pieces of information are collected to form multiple hypotheses, which are then further examined to either confirm or disconfirm them.  Challenges in this process include the investigators’ perceptual biases, confirmation bias, and groupthink.  Sensemaking translucence is the process of making the sensemaking process visible to analysts.

To address the challenge of cognitive biases in sensemaking, the authors created a sensemaking translucence interface with two parts: a hypothesis window and a suspect visualization.  The hypothesis window facilitates the exchange of ideas about suspects’ means, motives, and alibis.  The suspect visualization provides automatic feedback about suspects based on activity in the hypothesis window, the group chat window, and a digital sticky note feature. The authors predicted that pairs using the sensemaking translucence interface would perform better on a collaborative analysis task than those using a standard interface (H1), would rate the tool as more useful than the standard interface (H2a), would report a higher level of activity (H2b), and would rate their collaborative experience higher (H3).

To conduct the study, 20 pairs of remote participants role-played as detectives solving a crime.  The pairs were randomly assigned to either the standard interface or the sensemaking translucence interface. Each pair was given a set of documents about three cold murder cases, with information about seven murders and 40 potential suspects hidden in approximately 20 documents divided between the partners. The pairs were to share their information and find the name of the serial killer within 50 minutes.  The study was analyzed using the participants’ final reports and post-task reports.

The analysis revealed that pairs using the sensemaking translucence interface identified more clues and named the serial killer in less time than standard-interface users.  However, the interface was rated less helpful for providing support, generating hypotheses, and viewing multiple suspects.

Reflection:

This is a research article detailing a study of a sensemaking translucence interface designed to examine the challenges of collaborative sensemaking.  The authors motivate their study by discussing the biased perceptions investigators hold, which at the least could delay justice and at the worst place the wrong person in prison.

Though this study provides an initial platform for comparing how a more collaborative sensemaking translucence interface can aid sensemaking in criminal cases, there are areas that could have strengthened the study.  It used 40 participants, aged 18–28, all of whom were either undergraduate or graduate students.  This is a very limited sample that represents neither the general public nor the professional users of this type of data. Also, the participants were placed in pairs, whereas real investigative situations would likely involve information from multiple sources, complicating matters further.  The researchers do acknowledge the need for field research; however, changes to this study could have sufficed too.

  1. Would incorporating a more diverse sample have affected the study differently? Why or why not?
  2. How does the concept of teammate inaccuracy blindness (analysts treat all information from a partner as valid and useful, regardless of its actual quality) apply within the context of crowdsourced data and information parsed through online and social media outlets?
  3. Does this sample reflect the population that would be most likely using this type of interface? How could this sample/study have been done differently?
  4. What are other areas (other than criminal investigations) that could use the sensemaking translucence interface technology?
  5. The pairs that used the interface identified more clues and solved the case in less time than those using the standard interface. However, the users of the sensemaking translucence interface rated it as less helpful in providing support, generating hypotheses, and monitoring multiple suspects.  Why do you think this was the case (in other words, why were there not positive responses for all the hypotheses of the project)?


Standing on the Schemas of Giants: Socially Augmented Information Foraging

Paper:

Kittur, A., Peters, A. M., Diriye, A., & Bove, M. (2014). Standing on the Schemas of Giants: Socially Augmented Information Foraging. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 999–1010). New York, NY, USA: ACM.

Leader: Emma

Summary

In this article, Aniket Kittur, Andrew M. Peters, Abdigani Diriye and Michael R. Bove describe new methods for usefully collating the “mental schemata” developed by Internet users as they work to make sense of information they gather online. They suggest that it may be useful to integrate these sense-making facilities to the extent that they can be meaningfully articulated and shared. Toward this end, they provide a number of related hypotheses that endorse a social dynamic in the production of frameworks that assist individuals in understanding web content. The authors start from the presumption that individuals “acquire” and “develop” frameworks (which they usually refer to as “mental schemas”) as they surf the ‘net. They ask: “how can schema acquisition for novices be augmented?,” and to some degree, the rest of the article is a response to this question.

Much of this article is a technical whitepaper of sorts: the authors propose a supplement to the web tool Clipper (several variations of which I found through a Google search — this one seems exemplary: https://chrome.google.com/webstore/detail/clipper/offehabbjkpgdgkfgcmhabkepmoaednl?hl=en ) that incorporates their suspicions about the benefits of the social integration of mental schemas.  As they explain, Clipper is a web add-on (specifically, I think it’s a browser add-on) that appears as an addition to the browser interface. Displayed as a text-input box, Clipper encourages users to share their mental schemas by asking for specific types of information about the content users encounter: “item,” “valence,” “dimension” (p. 1000). Here, “item” refers to the object users are researching — the authors use the example of a Canon camera — “dimension” is a feature of the item — the example is picture quality — and “valence” is a sentiment that describes the user’s experience with or opinion of the dimension (like “good” or “bad”). So the phrase “the Canon T2i [item] was good [valence] in terms of picture quality [dimension]” would be a typical Clipper input.
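To make the clip format concrete, here is a minimal sketch of how one of these structured annotations might be represented in code (the class and field names are my own illustration, not Clipper's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One structured annotation in the item/dimension/valence format
    described above (hypothetical representation, not Clipper's API)."""
    item: str       # the object being researched, e.g. a camera model
    dimension: str  # a feature of the item, e.g. "picture quality"
    valence: str    # the user's sentiment about that dimension

# "The Canon T2i [item] was good [valence] in terms of picture quality [dimension]":
clip = Clip(item="Canon T2i", dimension="picture quality", valence="good")
print(clip)
```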

As the authors point out, Clipper initially worked only on an individual user → framework basis. “Users foraged for information completely independently from others,” they note (p. 1000). Their addition to Clipper is “asynchronous social aggregation,” a feature that incorporates dimensions from other users to bolster the usefulness of such a tool. With social aggregation, dimensions can be auto-suggested, and users can have access to a pool of knowledge about the “mental schemas” of so many others as they have similar experiences online. The authors offer that more frequently-input dimensions are generally more valuable in terms of sensemaking, and the augmentation to Clipper that they propose would display and collate information on dimensions according to their popularity.
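A toy sketch of what this popularity-based aggregation could look like (my own illustration of the idea, not the authors' implementation; the data is invented):

```python
from collections import Counter

# Dimensions clipped by different users for the same item (invented data)
clips_by_user = {
    "user_a": ["picture quality", "battery life", "price"],
    "user_b": ["picture quality", "price", "weight"],
    "user_c": ["picture quality", "battery life"],
}

# Count how often each dimension was clipped; overlap across users is
# what the paper treats as a signal of a dimension's usefulness.
counts = Counter(dim for dims in clips_by_user.values() for dim in dims)

# Auto-suggest dimensions to a new user, most popular first
suggestions = [dim for dim, _ in counts.most_common()]
print(suggestions)  # ['picture quality', 'battery life', 'price', 'weight']
```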

After this, the authors give contextual background to their perspectives on socially augmented online sensemaking. They review relevant  contemporary research on information seeking, social data, and social and collaborative sensemaking (p. 1001) to support their hypotheses about the usefulness of socially augmenting Clipper. Then, the article moves to a discussion of the interface design and features, which include autocomplete, dimension hints, a workspace pane that hovers over web pages, and a review table where users can see a final view of the clips the user has produced during their web searching activities.

The next part of the article fully describes the multiple hypotheses that underscore the rationale of socially augmenting Clipper. The hypotheses fall into three basic categories: the first is about how the social aggregation of dimensions should lead to overlaps; the second is about the social use and virality of overlapping dimensions; the third is about the objective usefulness and timeliness of this information. The authors then describe the conditions of their experiments with the tool (p. 1004), and provide an assessment of their hypotheses based on this experiment. Overall, their hypotheses proved to be accurate while leaving some room for further research: “our results indicated that the dimensions generated by users showed significant overlap, and that dimensions with more overlap across users were rated as more useful,” they tell us (p. 1008), a prelude to this self-judgment: “our results provide an important step towards a future of distributed sensemaking.” At the end, they acknowledge a number of potential drawbacks, most of which emanate from conditions of variability and subjectivity among users. 

(This is a good place for me to begin my reflection…)

Reflection

This article is very rote and straightforward. (As I mentioned, parts of it read like a technical whitepaper). With that in mind, it’s not the kind of piece that lends itself to strong opinion. If I have any, it’s a mildly negative feeling that is not so much based on the authors’ intentions or the tool’s efficacy as on the presumptions at the core of their method. The notion of a “mental schema” in particular is an under-investigated concept. I’m not sure with what authority they make statements like “users build up mental models and rich knowledge representations that capture the structure of a domain in ways that serve their goals” (p. 999). Obviously they provide citations, but they’re now squarely in the field of psychology, where falsifiable knowledge is elusive and (I’d argue) it is unethical to present this information as fact, at least without further commentary on this. How a “rich knowledge representation” is different from that which simply goes by the name “knowledge” escapes me — honestly, I think it’s just a convenient conflation. That type of unusual language (and a lot of vaguely-explained jargon) pervades their writing. I dislike it because 1) it offers an air of scientific dignity to some of their claims about the way humans make sense of information, whereas what’s really needed is further exploration of the psychological literature on which it’s based and 2) it’s bad writing. It sounds unnatural and confusing.

Moving away from a basic critique of writing style and language choice — I would have appreciated this more if the authors had gone into further detail about the types of information for which this is useful. I immediately took umbrage at the idea that social data necessarily means improved user experience when making sense of online content. The ethos of “social” and “sharing” underscores the business model of the web, which encourages people to constantly give their (highly profitable) data over to platforms that have a monopoly, and which function largely on network effects. Facebook and Google are as profitable as they are because they emphasize a social dynamic to user interaction, the feeling that the internet is always a community, and to not use these tools would mean being left out of the web experience. So I’m immediately suspicious of tools that simply reproduce this mindset rather than articulating and commenting on it (although I understand that social web use is now so naturalized that my take may be too erudite to be useful in a broad critique). Having said this, on a less penetrating level, I understand where this could be useful. For instance, I appreciate sites like Yelp and user product ratings when shopping online. It’s just that not everything that users do online can be analogized with wanting to make a purchase.

Questions

  1. Based on the part on p. 1003 where they discuss motivational factors in “noticing and using social data:”  why would users want to contribute to this project? Is it the same reason for working on websleuthing projects, Wikipedia, and free/open source software? If not, what are the key differences between all these tools that rely on crowdsourcing knowledge?
  2. For what types of items would this be most appropriate? The authors make frequent reference to a camera, but what about less concrete objects? Are there items that challenge hypotheses such as “dimensions that are shared across more people will be more useful,” and can we theorize why that might be?
  3. What if this leads to a winnowing effect where majority rule effectively pushes people away from domains that they may have been interested in?
  4. What is the relationship between socially augmented information foraging via the Clipper add-on and a) upvoting (à la Reddit and Metafilter, if anyone remembers what that is!) and b) algorithmic social media timeline prioritization (à la Twitter and Facebook)?
  5. Hypothesis 3.2 (p. 1006) states that “The social condition will generate more prototypical and more useful dimensions earlier than the non-social condition.” But what if this usefulness is partially a function of user suggestibility? As an appendage to this point, and as a more general meta-comment on this paper — the authors are clearly addressing psychological matters when they discuss “mental schema.” What assumptions are they making about the way “mental schemas” are created and used, and does this embed a priori bias into the tool?


Mapillary Summary

Mapillary is a startup founded in Sweden in 2013 by Jan Erik Solem, the founder of the facial recognition company Polar Rose, which was eventually acquired by Apple (Lunden, 2016).  The vision behind Mapillary was to create a more open and fluid version of Google’s Street View.  To remedy the limitations of a solo team with a camera rigged to a vehicle, Mapillary created an open platform that uses crowdsourcing to build a better, more accurate, and more personal map.  In addition to creating a better map, the company analyzes the photos to collect geospatial data.  Mapillary uses a technology called Structure from Motion to reconstruct places in 3D.  By using semantic segmentation on the images, the company seeks to understand what is in each image, such as buildings, pedestrians, and cars, and build that into AI systems.  As of May 2017, their database held over 130 million images, all gathered through crowdsourcing, which are also being used to train automotive AI systems (Lunden, 2017).

Mapillary is determined not to use advertising; instead it will focus on a B2B platform providing information for governments, businesses, and researchers.  Though researchers may use the data at no charge, commercial entities can purchase Mapillary’s services for $200 to $1,000 a month, depending on the amount of data used.  The site lists several institutions that have successfully used Mapillary.  The World Bank Transportation and ICT group utilized Mapillary to capture images for a rural accessibility project, evaluating the environment and road conditions remotely.  Westchester County in New York State has used the service to capture its trails and create interactive hikes within its park system.

To date, Mapillary has mapped over 3 million kilometers with over 170 million images across all seven continents.

To explore Mapillary:

  1. Go to mapillary.com
  2. Create an account by clicking on the “Create Account” button on the lower left side of the page.
  3. Choose to create a Mapillary login or use a Google, Facebook, or OpenStreetMap login.
  4. Once signed in you may explore maps or create maps.
    1. To explore maps:
      1. Zoom in on an area of the map; a green line indicates that the area has been mapped.
      2. Alternatively, go to the magnifying glass on the upper left side of the screen and enter a location.
      3. When you have located your area, place your cursor on the line and a photo will pop up in the lower left of the screen. Click the forward or back arrows, or the play arrow, to move through the images.

To contribute to Mapillary:

You may upload an image to the webpage:

  1. Click on the menu arrow by your login name on the upper right screen
  2. Click on Uploads
  3. Click on “Upload Images”
  4. Upload your image according to the options and instructions
  5. Click “Review”
  6. You may click on the dot to see the image.
  7. Zoom into the location and place the dot on the map

Or use your smartphone:

  1. Download the Mapillary app on your smartphone from either Google Play or the App Store.
  2. Sign-in or create an account.
  3. Tap the camera icon
  4. Position the camera so that it is level with the horizon and nothing is obstructing the view
  5. Choose your capture option: the automatic capture option will capture an image every 5 meters as you move, OR use the manual option to capture panoramas, objects, and intersections
  6. Tap the Red record button and move either by walking, driving, biking, or whichever means of movement and transportation you prefer.
  7. When done, tap on the exit arrow
  8. Tap the upload icon (the cloud icon)
  9. Upload your images; they will be uploaded and then deleted from your device
  10. Images are then processed by Mapillary
  11. You will receive a notification when your images have been uploaded, or when there are accepted edits, comments, or mentions
  12. You’re done!!
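As an aside, the automatic capture option in step 5 presumably triggers on GPS displacement; here is a back-of-the-envelope sketch of that logic using the standard haversine great-circle distance (my own guess at the mechanism, not Mapillary's actual code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_capture(last_fix, current_fix, threshold_m=5.0):
    """Trigger a new photo once the device has moved ~5 m since the last capture."""
    return haversine_m(*last_fix, *current_fix) >= threshold_m

# Moving 0.0001 degrees of latitude is roughly 11 m, so this triggers:
print(should_capture((40.0, -80.0), (40.0001, -80.0)))  # True
```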

 

https://techcrunch.com/2016/03/03/mapillary-raises-8m-to-take-on-googles-street-view-with-crowsourced-photos/

https://techcrunch.com/2017/05/03/mapillary-open-sources-25k-street-level-images-to-train-automotive-ai-systems/


Digital Vigilantism as Weaponisation of Visibility

Paper:

Trottier, D. (2017). Digital vigilantism as weaponisation of visibility. Philosophy & Technology, 30(1), 55–72.

Discussion leader: Lee Lisle

Summary:

This paper explores a new era in vigilantism, in which “criminals” are shamed and harassed through digital platforms. These offenders may have parked poorly or planted a bomb, but there is no real verification process. They are harassed through the process known as “doxing,” in which their personal information is shared publicly. The author terms this “weaponised visibility,” and it can lead other users on the Internet to harass or threaten the accused in person.

The author defines digital vigilantism and compares it to more traditional vigilantism from before the Internet lowered thresholds. In particular, he uses Les Johnston’s six elements of vigilantism and defines how digital vigilantism embodies each element. These elements and how they are enacted are shown in Table 1.

With the link to more traditional vigilantism established, the author then argues that the lowered thresholds of the Internet amplify the response to an offender’s acts. Once an idea or movement is released on the Internet, the person who started it is no longer in full control. This lack of a singular leader means the response to the offense is uncontrolled, which in turn means that a digital campaign can vastly exceed boundaries and respond disproportionately to the offense. As a corollary, the author points out that the people who start these campaigns may not be aware of how far the response will go. In the early stages of the Internet, it was considered a separate place from the real world; as time has gone on, the barriers between the digital and real worlds have decreased in scope and context. The author draws parallels between cyber-bullying and digital vigilantism, but makes the distinction that digital vigilantism occurs when citizens are collectively offended by other citizens.

The author then points out the differences between state actors and these digital vigilantes. He states that lowered confidence in state actors such as police is responsible for these coordinated efforts online, which, in turn, results in less cooperation with state actors. Cyber-bullying and revenge porn are used as examples where the vigilantes take action because law-enforcement agencies do not.

Next, the author compares how state actors and these vigilantes perform surveillance. Digital tools have made surveillance significantly easier, and the public has seen various results of this, such as the Snowden revelations about government actions. Furthermore, digital vigilantism can increase state surveillance of private citizens, as state actors may look at citizens precisely because there is a DV campaign against them. Users also over-share their daily lives on social media, detailing their exercise routines or other forms of life-logging; the author makes the point that this too can be used against them in a DV campaign, since the visibility can lead to more doxing. The author also writes about the concept of “sousveillance,” where a less powerful actor or citizen monitors more powerful actors, such as the state. This can be seen in recordings of police responses. Lastly, the author points out that pop culture is likely encouraging occurrences of DV. Reality-TV shows often encourage contestants to try to catch each other engaging in “dishonest or immoral behavior.” This form of entertainment normalizes the concept of surveillance and leads to further efforts in digital vigilantism.

Reflections:

This article makes some interesting points about how digital vigilantism is an extension of traditional vigilante efforts. Since the Internet lowers the bar for the creation of what is essentially a mob armed with either facts or pseudo-facts, retaliation happens more easily and is less controlled. However, as this kind of reaction happens more and more frequently, the creators of these mobs should understand their actions more. The statement that DV participants “may not be aware of the actual impact of their actions” seems like less of an excuse as more of these examples come out.

Digital Vigilantism doesn’t always create poor outcomes. In some of their examples, the people targeted by the vigilantes were performing actions that should be illegal. There are now cases where cyber-bullying is a criminal act. Revenge porn is now illegal in 26 states. The digital vigilantism against these actions may have helped create the laws to make them illegal.

Questions:

  • This article, written in 2015, makes the point that white nationalism and the KKK are linked to digital vigilantism. Considering recent events, do you agree that DV has caused (or helped cause) the resurgence of these groups?
  • How do you think reality-TV shows influence the public? Do you agree with the author’s statement that they encourage digital vigilantism?
  • In this class, we have gone over several cases where DV’s response has been extremely disproportionate. Are there examples where DV has helped society?
  • The author points out that law-enforcement can easily see DV campaigns against individuals. Should state actors ignore DV campaigns?  Should they try to contain them?
  • The author points out the concept of “sousveillance,” where less powerful actors monitor more powerful actors. This can explicitly be seen in the movements to monitor police officers and their interactions with people. What do you think about this kind of DV?
