The Unfortunate Tale of the Lavender Panthers: Injustice, Revenge and the Algorithmic Source of Judicial Errors

Rev. Ray Broshears and his band of 21 homosexuals, cleverly named the Lavender Panthers, patrolled the streets of San Francisco nightly in the early 1970s with chains, batons, and an extensive knowledge of martial arts to protect others in their community from further injustices. Their “flailing ass” approach is just one of many responses to a judicial system that fails. Like the Lavender Panthers, when justice is foreclosed, people may resort to self-help, or measures external to the judicial system, such as expressions of disapproval, beatings, and riots [1].

Much has changed since the days of the Lavender Panthers. Beyond a shifting social context, the source of judicial error is becoming increasingly algorithmic. Many facets of the judicial system now incorporate predictive risk assessments into their decision-making processes, classifying offenders into risk levels for recidivism. Nominally, the resulting scores counteract some of the errors made by humans [2, 3]. Yet these predictions have problems of their own [4-7]. If past human judicial injustices lead people to self-help, will algorithmic injustices similarly motivate the Ray Broshears and Lavender Panthers of the 21st century?

Most likely not just similarly; algorithmic injustice might actually motivate them even more. Psychology and human-computer interaction research finds that people prefer human over algorithmic forecasting after seeing errors occur [8-12]. This preference holds even when the human forecaster produced 90-97% more errors than the algorithmic forecaster [9]. But does this algorithm aversion apply to the criminal justice system? And will people exposed to algorithmic error support circumventing the system even more than those faced with human error?

To answer these questions, I turned to MTurk and randomly assigned 701 respondents to two groups, asking each to read a scenario in which offenders classified as low risk committed more crime after release into the community. The only difference between the groups was the source of the decision: an algorithm or a human forecaster. Everyone then indicated their support for extrajudicial activities (see figures 1-3) as well as their attitudes towards the judicial system.

The findings suggest that the source of judicial error does matter. People who read the algorithm-error scenario had greater odds of believing that revenge, and that naming and shaming, were extremely right. The opposite held true for protesting laws or policies one thinks are unjust.
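The “greater odds” language comes from logistic-regression-style models. As a purely illustrative sketch (the data file and column names here are invented, not the study’s actual materials), such an estimate could look like this in Python:

```python
# Illustrative only: estimating the odds of rating revenge "extremely
# right" by experimental group. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # one row per respondent (hypothetical)

# Outcome: 1 = rated revenge "extremely right", 0 = otherwise
df["revenge_extreme"] = (df["revenge_rating"] == "extremely right").astype(int)
# Predictor: 1 = read the algorithm-error scenario, 0 = human-error scenario
df["algorithm_group"] = (df["condition"] == "algorithm").astype(int)

fit = smf.logit("revenge_extreme ~ algorithm_group", data=df).fit()
print(fit.summary())
print("Odds ratio:", np.exp(fit.params["algorithm_group"]))
```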

At first glance, it appears that people bypass the system only some of the time. However, a closer look at the motivations behind these self-help behaviors suggests that people support circumventing the system more often when algorithmic error is involved.

Remember the Lavender Panthers. They did not see the usefulness of protesting an unwilling system; physical violence was their approach instead. People protest when they believe the system can still be changed: they must believe that their actions can influence wider social structures [13, 14]. Picking up a baton or chain and hitting the streets to “flail ass” requires no such belief. Coupled with this efficacy argument, people in the algorithm group could hold the erroneous assumption that algorithms cannot learn from past mistakes [9, 15, 16]. If people hold this belief, then protesting becomes even less efficacious. In other words, why protest to a system that cannot change?

To answer the earlier questions: yes and yes. Algorithm aversion does apply to the judicial system. And people exposed to algorithmic error support circumventing the system more than those exposed to human error, perhaps because the system appears inaccessible to change. These findings are concerning given that more and more algorithms enter the system each year, increasing the potential for algorithmic judicial error and, with it, greater support for revenge and for naming and shaming.

References

1          ‘The Sexes: The Lavender Panthers’, Time, 8 October 1973

2          Baird, C., Healy, T., Johnson, K., Bogie, A., Dankert, E.W., and Scharenbroch, C.: ‘A comparison of risk assessment instruments in juvenile justice’, Madison, WI: National Council on Crime and Delinquency, 2013

3          Berk, R.: ‘Criminal justice forecasts of risk: A machine learning approach’ (Springer Science & Business Media, 2012)

4          Starr, S.B.: ‘The New Profiling’, Federal Sentencing Reporter, 2015, 27, (4), pp. 229-236

5          O’Neil, C.: ‘Weapons of math destruction: How big data increases inequality and threatens democracy’ (Broadway Books, 2017)

6          Johndrow, J.E., and Lum, K.: ‘An algorithm for removing sensitive information: application to race-independent recidivism prediction’, arXiv preprint arXiv:1703.04957, 2017

7          Angwin, J., Larson, J., Mattu, S., and Kirchner, L.: ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’, ProPublica, 23 May 2016

8          Dietvorst, B.J., Simmons, J.P., and Massey, C.: ‘Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them’, Management Science, 2016

9          Dietvorst, B.J., Simmons, J.P., and Massey, C.: ‘Algorithm aversion: People erroneously avoid algorithms after seeing them err’, Journal of Experimental Psychology: General, 2015, 144, (1), pp. 114-126

10        Dzindolet, M.T., Pierce, L.G., Beck, H.P., and Dawe, L.A.: ‘The perceived utility of human and automated aids in a visual detection task’, Human Factors, 2002, 44, (1), pp. 79-94

11        Önkal, D., Goodwin, P., Thomson, M., Gönül, S., and Pollock, A.: ‘The relative influence of advice from human experts and statistical methods on forecast adjustments’, Journal of Behavioral Decision Making, 2009, 22, (4), pp. 390-409

12        Prahl, A., and Van Swol, L.: ‘Understanding algorithm aversion: When is advice from automation discounted?’, Journal of Forecasting, 2017

13        Van Stekelenburg, J., and Klandermans, B.: ‘The social psychology of protest’, Current Sociology, 2013, 61, (5-6), pp. 886-905

14        Gamson, W.A.: ‘Talking politics’ (Cambridge University Press, 1992)

15        Highhouse, S.: ‘Stubborn reliance on intuition and subjectivity in employee selection’, Industrial and Organizational Psychology, 2008, 1, (3), pp. 333-342

16        Dawes, R.M.: ‘The robust beauty of improper linear models in decision making’, American Psychologist, 1979, 34, (7), pp. 571-582


Global Database of Events, Language, and Tone

Demo: Global Database of Events, Language, and Tone (GDELT)

Summary:

The Global Database of Events, Language, and Tone (GDELT) is an open data service supported by Google Ideas. It is a massive index of news media from 1979 to today, including real-time data from across the world. Articles are machine-translated into English (if not already in English), and algorithms are then applied to identify events, sentiment, people, locations, themes, and much more. The coded metadata is streamed and updated every 15 minutes.

To work with the data, the GDELT Analysis Service provides various ways to visualize and explore it, such as EVENT Exporter, EVENT Geographic Network, EVENT Heatmapper, EVENT Timeline, EVENT TimeMapper, GKG Network, GKG Country Timeline, GKG Exporter, GKG Geographic Network, and many others. Datasets can also be moved into Google BigQuery, a cloud data warehouse, to run SQL queries, or downloaded as raw data files in CSV format.
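As a minimal sketch of the BigQuery route (assuming a Google Cloud project with the google-cloud-bigquery client installed; the table and field names below come from GDELT’s public event dataset, but check them against the current schema):

```python
# Sketch: count protest events (CAMEO root code 14) located in Canada
# from GDELT's public BigQuery dataset. Requires a Google Cloud project;
# table/field names should be verified against the current GDELT schema.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT ActionGeo_FullName, COUNT(*) AS n_events
    FROM `gdelt-bq.gdeltv2.events`
    WHERE EventRootCode = '14'            -- CAMEO 14 = protest
      AND ActionGeo_CountryCode = 'CA'    -- event located in Canada
      AND SQLDATE BETWEEN 20161024 AND 20171024
    GROUP BY ActionGeo_FullName
    ORDER BY n_events DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.ActionGeo_FullName, row.n_events)
```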

One of the main advantages of GDELT is its collection of real-time data from around the world. This data is coded and openly available for all to use. Not only that, but the GDELT Analysis Service provides easy-to-use visualization tools for people less familiar with programming. However, GDELT, like many other applications, can be used in nefarious ways. For example, regimes could track and record political protests and potentially use GDELT’s data to predict future protests. This would be particularly problematic in countries that would otherwise lack the data collection capacity to do such monitoring on their own.

Demo:

As mentioned above, GDELT is capable of many different things. The following demonstration covers only one of its services. Let’s explore a geographic heat map of protests that happened in Canada in the past year …

  1. Head to GDELT Analysis Service
  2. Click EVENT HeatMapper
  3. At this stage, you need to fill in your email address (so the results can be forwarded to you) as well as the information of interest. For this example, let’s choose a start date of 10/24/2016 and an end date of 10/24/2017. Then we choose civilian for ‘recipient/victim (Actor2) Type’. Set the event location to Canada and the event code to ‘protest’. We also want the number of events as the location weighting because we are interested in the number of unique events rather than the number of news articles. Lastly, let’s choose ‘Interactive Heatmap Visualization’ and ‘.CSV File’ for the output. Then click submit.
  4. Now you wait until the results show up on your metaphorical doorstep…
  5. And, magic! It appears. Now you can see the results as a CSV or as the Heatmap by clicking either link. Let’s look at the HeatMap first. The slider bar adjusts display thresholds. If we zoom in, we can see protests occurred in Southern Ontario, Toronto, Ottawa, Montreal, outside Quebec City, and even in Newfoundland and Labrador.
  6. The CSV provides us with the longitude, latitude, and place name. In addition, it provides the number of events (see the short sketch after this list for loading it programmatically). For example, Marystown, Newfoundland had four protests. A simple Google search suggests that fishermen were protesting at the union office.
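For anyone who prefers to poke at the export programmatically, a minimal pandas sketch follows; the file name and column order are assumptions based on step 6, so check the actual header first.

```python
# Sketch: load the HeatMapper CSV export described in step 6. The file
# name and column order are assumptions; check the actual header.
import pandas as pd

df = pd.read_csv("gdelt_heatmap.csv")
df.columns = ["lat", "lon", "count", "place"]  # assumed order

# Locations with the most protest events
print(df.sort_values("count", ascending=False).head(10))
# e.g., look up Marystown's count
print(df[df["place"].str.contains("Marystown", na=False)])
```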

Again, this is only one of the many tools available on GDELT.


‘I never forget a face!’

Article: Davis, J. P., Jansari, A., & Lander, K. (2013). ‘I never forget a face!’. The Psychologist, 26(10), 726-729.

Summary:

The authors summarize the existing literature on super-recognisers, those individuals scoring high on face-recognition tests, in three broad areas: general information, police officers, and the general population. In the first section – general information – the authors provide background on super-recognisers and prosopagnosia. Prosopagnosia, or face blindness, is the loss of the ability to recognize familiar faces (including one’s own). The disorder can stem from either brain damage or genetic inheritance, and about two percent of the population has it.

At the other end of the face-recognition spectrum – or Gaussian distribution – are those with an uncanny ability to recognize faces. Davis and colleagues comment that the super-recognisers’ ability appears limited to face recognition, which supports the idea of a face-recognition spectrum (prosopagnosia at one end and super-recognisers at the other). They also note that human facial recognition can be superior to machines because faces are dynamic, which can trip machines up.

In the next section, Davis and colleagues focus more closely on the police and super-recognisers. The bulk of the research in this section stems from the Annual Conference of the British Psychological Society presentation “Facial identification from CCTV: Investigating predictors of exceptional face recognition performance amongst police officers” (2013) by Davis, Lander, & Evans. These authors asked police officers deemed super-recognisers various questions and tested their recall ability. Among the findings: the officers’ families do not share similar super-recogniser skills, and the officers based their identifications on distinctive facial features. In addition, officers who reported broad strategies for facial recognition did worse than those with narrower strategies. Lastly, the super-recogniser officers did well on celebrity recognition tests.

In the section on the general population, the authors draw on an unpublished study by Ashok Jansari that deployed the Cambridge Face Memory Test at London’s Science Museum. The preliminary findings indicate that face recognition follows a Gaussian distribution, with fewer than 10 people scoring in the super-recogniser range. Davis and colleagues then present individual differences that influence facial-recognition tasks, such as introvert/extrovert personality, as well as processing difficulties some people face (holistic processing, inverted faces, the whole-part effect).

Reflection:

I recommend taking the Cambridge Face Memory Test to see where you fall on the face-recognition distribution. This is the same test given to the officers and the general population in some of the studies mentioned above.

Interestingly, since the article was published, the Metropolitan Police actually formed a team of super-recognisers. This team complements the millions of CCTV cameras scattered across London. Although these recognisers are human, they add another dimension of surveillance to the streets of London, something some are trying to pull back on.

The authors did mention the uniqueness of humans versus technology in facial recognition. In particular, humans might be better than machines at identifying dynamic changes in human faces. However, with increasing advancements in technology (and specifically biometrics), I wonder: will more advanced AI facial recognition software make this new unit obsolete in the next ten years or so? Or do humans have some unique ability to notice dynamic changes that machines will not be able to mimic?

Lastly, it was interesting that, in the research presented by Davis, Lander, & Evans (2013), mostly white male officers thought they identified more black than white offenders. The authors attempted to attribute this finding to extensive contact with certain minority groups based on policing jurisdiction. However, it makes me wonder about the role racial discrimination plays in these identifications, and whether some in the super-recogniser units simply reinforce discriminatory policing. I am curious about their verification methods. How do they ensure relatively accurate recognitions and eliminate any potential for bias and discrimination?

Questions:

  • How does facial recognition surveillance by humans differ from facial recognition surveillance by machines?
    • And, is there one that is preferable?
    • Is one better at deterrence (both general and specific)?
    • What are the privacy/ethical implications with humans vs. machines?
  • Is there potential for reinforcing discriminatory policing practices with super-recogniser policing?
    • What verification methods would need to be put into place?
  • How can the general population’s super-recogniser skills play into crowd-sourcing?


Doing Something Good

Article:

Chapter Six: Doing Something Good in So You’ve Been Publicly Shamed by Jon Ronson

Summary:

In the sixth chapter of his book, Ronson opens with an account of developers Hank and Adria at PyCon. During a conference presentation, Hank and his colleague were joking together about ‘dongles’ and ‘forking,’ terms with a clear sexual connotation. Adria, in the row directly in front of them, turned around and snapped a photo of the two developers. Not much time passed before Hank and his colleague were pulled out of the room by the conference organizers to discuss their sexual comments. After they explained themselves, not much more came of the situation during the conference.

However, when leaving the event, the two developers soon discovered that Adria had posted their photo and the subject of their jokes on Twitter: “Not cool. Jokes about forking repo’s in a sexual way and ‘big’ dongles. Right behind me #pycon” (pg. 114). To further explain her tweet, she wrote a more extensive blog post. In conversation with Ronson, Adria further detailed her feelings of danger after hearing the comments from the men behind her.

The repercussion for Hank was termination from his job. Nothing was mentioned about consequences for his colleague. After being terminated, Hank turned to Hacker News to publicly apologize for his lewd remarks. In his statement, he mentioned the outcome of his actions: his termination. Adria asked him to remove this portion of the apology.

The public jumped into the conversation to both defend and further shame Hank and Adria. Adria received rape and death threats while her employer’s site was hit by a DDoS attack. The attackers threatened to continue until Adria was fired, and she was fired shortly after. Hank was defended and then later insulted by men’s rights bloggers, who focused on Hank’s masculinity, or lack thereof. Both the shamer and the shamed were harmed by the actions of the crowd.

In the latter half of the chapter, Ronson switches gears slightly by writing about his interview with a 4chan user accused of DDoSing PayPal. According to the author, her motivation for shaming was “the desire to do good” (pg. 123) and stems from a place of powerlessness: “on the internet we have power in situations where we would otherwise be powerless” (pg. 123). This powerlessness is apparently rooted in violations of others’ constitutional rights, namely stop and frisk.

Later, in their conversation about Adria, the 4chan user defends Hank, claiming that Adria infringed on his and his colleague’s freedom of speech. And in the case of Sacco (another victim of shaming mentioned in Chapter Four), she became the symbolic enemy: a rich, white person. The 4chan user claims that some “crimes” like these cannot be handled by the courts, only by shaming.

Reflection

As we learned, 4chan’s /b/ is ephemeral, with threads lasting no more than a few minutes and most disappearing from the front page within hours. Because of this, Ronson’s comment that “somebody inside 4chan was silently erasing me whenever I tried to make contact” (pg. 121) seems like a misunderstanding of /b/. If so, a bit more research would have been beneficial instead of misleading readers about the nature of /b/ users or the level of administration and moderation that occurs on the forum.

In addition, the author’s connection between stop and frisks and online activism seems relatively weak. Ronson states that “one by-product of [stop and frisks] was that some repeatedly frisked young people sought revenge in online activism – by joining 4chan” (pg. 126). This statement is based on conversations with only two individuals from New York City. And in his conversation with Troy, there is no mention that Troy even engages in online activism; his activity on 4chan could simply be about having a free space without interference rather than seeking revenge. Although the evidence supporting the association between NYC stop and frisks and online activism isn’t particularly strong in this book, the notion that powerlessness can translate into bullying – or shaming more broadly – does make sense.

The conversation about Hank and Adria could have been bolstered with a discussion of guilt. Shame and guilt are different. The former leads to painful feelings about our identities (we feel bad about ourselves); the latter leads to empathic recognition of how we behaved poorly and of the harm that resulted. A discussion of guilt versus shame could better pull out the issues with shaming and demonstrate why guilting someone might be a better alternative.

A shameful rather than guilty response can be seen in both Hank and Adria. In Adria’s response, she states: “no one would have known he got fired until he complained” (pg. 129) … and it was “his own actions that resulted in his own firing, yet he framed it in a way to blame me” (pg. 130). Her response to being shamed appears defensive and angry rather than empathic and remorseful. Something similar can be said of Hank. In his response on Hacker News, he might not have appeared furiously angry, but his description of distancing himself from female co-workers shows some alteration to his self-worth. He notes that with female developers, “I’m not as friendly. There’s humor, but it’s very mundane. You just don’t know. I can’t afford another Donglegate” (pg. 130).

For both Hank and Adria, shame seems counterproductive. I have not finished the book yet, but I hope there is a discussion of shame versus guilt and, in particular, of how the public can elicit guilt rather than shame to change people’s behaviors for the better.

Questions:

  1. Is shaming inherently bullying?
  2. If not, when does shaming become bullying?
  3. Is shaming justifiable for the greater good?


Police and user-led investigations on social media.

Article:

Trottier, D. (2014). Police and user-led investigations on social media. Journal of Law, Information and Science, 23, 75.

Leader: Leanna

Summary:

The article explores top-down and bottom-up policing, the former referring to traditional policing and the latter to crowdsourced policing. Because social media has increased both visibility and, consequently, access to personal information, it has facilitated a convergence of the police and the public. To demonstrate this point, the author notes that social media centralizes and stores intelligence in one place. And everyone, and their brother, can now surveil. This includes surveillance for traditional policing as well as for public scrutiny.

Continuing the discussion of everyday surveillance, Trottier discusses the domestication of social media in our lives. In particular, he points to surveillance creep, or function creep, which occurs when technology is used beyond its intended function. With regard to traditional policing, the author discusses the shift in the function of Facebook from a communication platform to a source of police intelligence. And with regard to crowdsourcing, the public can now more easily engage in policing activities, and with fewer guiding protocols.

The author then spends the rest of his article providing three examples of policing activities with social media: police adoption of social media surveillance; crowdsourcing and the 2011 Vancouver riots; and, crowdsourced surveillance-businesses.

In the first example – police adoption of social media surveillance – Trottier outlines six different ways the police can obtain information from social media, such as manual searches, requests made directly to companies, combined manual and automated searches, lawful interception of messages, and embedded software. The sixth way the author points toward – analysis – is arguably an outlier on this list and is best described separately. He simply lists various processes of analysis, such as temporal-based reconstruction and sentiment analysis.

In the next example – the 2011 Vancouver riots – he describes the crowd’s involvement in social control immediately following the Vancouver Stanley Cup riots of 2011. The mass of photos online provided the police with an abundance of information – often before they even knew the identities of the people involved in the riots.

Lastly, in the third example – crowdsourced surveillance businesses – Trottier discusses companies such as Blueservo, Internet Eyes, and Facewatch. Each capitalizes on the crowd to provide security services. For example, Internet Eyes uses crowdsourcing to monitor CCTV feeds for registered business owners. In return, after paying a fee to sign up, viewers receive compensation for their time and effort. In his discussion of Internet Eyes, Trottier notes the trouble the company has recently gotten into, namely growing privacy concerns among shoppers.

Reflection:

In his discussion of surveillance entering the domestic sphere, Trottier mentions that “The homestead and other non-commercial spaces were locations where people were comparatively free from surveillance” (para 9). From a sociological perspective, this view of surveillance appears rather myopic. To be sure, surveillance is becoming more commonplace and domesticated. However, many groups in society have never been free from surveillance. Black Americans, for example, have been under police and state scrutiny for years, not only in public spheres but also in their private lives.

The observation that historically many people have been subject to comparatively high levels of surveillance is a non-trivial one. On one level, the increased attention being paid to the domestication of surveillance makes it seem that it was fine when Black Americans were being surveilled but that now, when White Americans are being surveilled, the encroachment of surveillance into personal spaces is overreach. If this is the case, then the issue is not the domestication of surveillance but that surveillance is now more indiscriminate.

In addition, Internet Eyes is fascinating not only for its application of crowdsourcing to security but also for its worker exploitation. Businesses are capitalizing on crowdsourcing, arguably as in the early days of industrialization. With relatively new technologies and approaches, regulations and policies fall far behind. As on other crowdsourcing platforms, such as MTurk, worker compensation often comes nowhere near the minimum wage. It would not be surprising if crowdsource union groups soon emerged, so workers aren’t left with the options of participating for less than livable wages (assuming they don’t also work elsewhere) or not participating at all.

Questions:

  1. Are we acclimatized to surveillance in our everyday lives? If so, do some people not see the threat it poses to our civil liberties?
  2. What does it mean to consent to surveillance in digital public spaces? Can we reasonably opt out of social media, search, or email?
  3. Should social media be a tool for the police?
  4. What are some of the ethical concerns with crowdsourced security?


Digilantism: An analysis of crowdsourcing and the Boston marathon bombings

Paper:

Nhan, J., Huey, L., & Broll, R. (2017). Digilantism: An analysis of crowdsourcing and the Boston marathon bombings. The British Journal of Criminology, 57(2), 341-361.

Discussion leader: Leanna

Summary:

The article explores digilantism, or crowdsourced web-sleuthing, in the wake of the Boston marathon bombing. The authors focus on police-citizen collaboration, highlighting various crowdsourcing efforts by the police as well as some successes and “failures” of online vigilantism.

The authors theoretically frame their paper around nodal governance, a theoretical marriage between security studies and social network analysis. In this framing, the authors combine various works. Overall, following the logic of network analysis, the theory treats organizations or security actors as nodes in a decentralized structure. The nodes (or actors) potentially have associations and work with each other (edges) within the network, such as police corresponding with private security forces or a Reddit community sharing information with the police. Each node can hold varying degrees (weights) of capital (i.e., economic, political, social, cultural, or symbolic) that can be shared between nodes.
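To make this framing concrete, here is a minimal sketch of a nodal security network in Python; the actors, ties, and capital scores are invented for illustration, not taken from the paper.

```python
# Sketch of nodal governance as a weighted graph. Actors, ties, and
# capital scores are invented for illustration, not from the paper.
import networkx as nx

G = nx.Graph()
# Nodes = security actors, each holding some mix of capital
G.add_node("police", capital={"political": 0.9, "symbolic": 0.8})
G.add_node("private_security", capital={"economic": 0.7})
G.add_node("reddit_community", capital={"social": 0.8, "cultural": 0.6})

# Edges = working associations; weight = strength of the tie
G.add_edge("police", "private_security", weight=0.6)
G.add_edge("reddit_community", "police", weight=0.3)  # info shared with police

for u, v, w in G.edges(data="weight"):
    print(f"{u} -- {v} (tie strength {w})")
```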

The authors use threaded-discussion and thematic analysis to examine various Reddit threads about the Boston marathon bombing, arriving at 20 thematic categories. For this paper, the authors are mainly interested in the theme of “investigation-related information” (pg. 346). In the results, the authors note that most comments were general in nature. Sub-themes within the investigation category included 1) public security assets, 2) civilian investigations online, 3) mishandling of clues, and 4) police work online.

The first subcategory—public security assets—covers the vast professional backgrounds of Reddit users and their ability to contribute based on that experience and knowledge (e.g., military forensics). In this section, the authors raise the point about parallel investigations and a general lack of communication between the police and web-sleuths (mainly on the part of the police). They speculate this disconnection could stem from police subculture or from legal concerns about incorporating web-sleuths into investigations.

In the next sub-theme—civilian investigations—the authors take note of the unofficial role Reddit users played in the investigation of the Boston marathon bombing. This included identifying photographs of suspects and blast areas, as well as conducting background checks on suspects. Nhan and colleagues refer to this as “virtual crime scene investigation” (pg. 350). Here the authors expand on the silo effect of parallel investigations, noting that the relationship between the police and web-sleuths was unidirectional, with users encouraging each other to report information to the police.

In the third sub-theme—mishandling of clues—the authors focus on two consequences of web-sleuthing: 1) being suspicious of innocent acts; and 2) misidentifying potential suspects. In particular, the authors highlight the fixation of users on people carrying backpacks and the misidentification of Sunil Tripathi as a potential suspect in the bombing.

In the final sub-theme—police work online—the authors highlight police efforts to harness web-sleuths either by providing correct information or by asking people to provide police with videos from the event. The authors noted that this integration of police into the Reddit community was a way to regain control of the situation and the information being spread.

The authors conclude with various policy recommendations, such as assigning police officers to be moderators on sites such as Reddit or 4chan. In addition, they acknowledge the geographical and potential cultural differences between their two examples of police crowdsource use (Boston vs. Vancouver). Lastly, the authors again note that the police have not made use of the expertise of the crowd.

Reflection:

When reading the paper, numerous things came to my mind. Below is a list of some of them:

  1. In the background section, the authors mentioned work by Ericson and Haggerty (1997) that classifies the four eras of policing: political, reform, community, and information. Other authors have defined this fourth era as the national security era (Willard, 2006) or the militarized era (Hawdon, 2016). Hawdon (2016) argues in an ASC conference presentation, for example, that a pattern recurs across the eras (see the first five rows of the table below). In particular, the organizational structure, approach to citizenry, functional purpose, and tactical approach of law enforcement flip-flop from one era to the next. Thinking forward, I foresee a coming era of crowdsourced policing as a continuation of the pattern Hawdon identifies (see the last row). This style would be decentralized (dispersed among various actors), clearly integrated into the community, focused on more than law enforcement, and would intervene informally in community members’ lives (via open communication online), fitting neatly into the cyclical pattern we see in policing (Hawdon, 2016).


Era | Organizational Structure | Approach to Citizenry | Functional Purpose | Tactical Approach
Political (1860-1940) | Decentralized | Integrated into community | Broad | Service
Reform (1940-1980) | Centralized | Distant from community | Narrow | Legalistic
Community (1980-2000) | Decentralized | Integrated into community | Broad | Service
Militarized (1990-today) | Centralized | Distant from community | Narrow | Legalistic
Crowdsourced (??-??) | Decentralized | Integrated into community | Broad | Service

Note: functional purpose: “a narrow function (i.e., law enforcement) or serving the community by fulfilling numerous and broad functions” (Hawdon, 2016, pg. 5); tactical approach: legalistic = “stresses the law-enforcement function of policing” (pg. 5); service = “intervenes frequently in the lives of residents, but officers do so informally” (pg. 5).

  2. Nhan and colleagues highlight various “police-citizen collaborations” (pg. 344) with regard to social media, such as crowdsourcing face identification after the 2011 Stanley Cup riots and disseminating information via Twitter. But in many ways, these police engagements with social media lack innovation. The former is like posting wanted photos on telephone poles, and the latter like disseminating information via a popular newspaper. Yes, the media has changed and therefore the scale of the impact has shifted, but the traditional structure has not. The other “police-citizen collaboration” (pg. 344) mentioned was collecting information. This is not collaboration. In the example of Facebook, this is simply using the largest repository of available biometric data that people are willing to give away for free. It is becoming the new and improved governmental surveillance dataset, but there is nothing formally collaborative about citizen use of Facebook (even if Facebook might collaborate with law enforcement at times).
  3. The paper is missing crucial details needed to fully understand the authors’ numerical figures. For example, the authors note that only a small number of individuals (n=16) appear to be experts. It would have been helpful to put this figure into context: how many distinct users posted within the set of posts analyzed? Without a sense of the total n the authors are dealing with, assessing the external validity (generalizability) of the findings becomes difficult.
  4. The authors frame their analysis around nodal governance and the large-scale nodal security network. The guiding theory itself needs to be expanded on. The authors allude to this need but do not make the full connection. In the paper, only the police and Reddit are considered as nodes. Instead, the network needs to acknowledge both the organizations (e.g., police or Reddit) and the individual users. This, if my memory serves me correctly, is called a multilevel network (see the sketch after this list). In this model, users (nodes in one group) are connected to organizations (nodes in another group), and relationships (edges) exist both within and between the groups. The authors allude to this need when mentioning the wide breadth of knowledge and expertise that posters bring to web-sleuthing on Reddit, but they stop there. Reddit users can be connected to the military (as mentioned) and have access to the capital that that institution brings. Such users are then connected to two organizational structures within the security network.
  5. Lastly, it was not surprising that the authors noted a “mislabelling of innocent actions as suspicious activities” (pg. 353); however, it was surprising to find it under the label of “the mishandling of clues” (pg. 353). In addition, the mislabelling of activities is not unique to web-sleuths. I was expecting a conversation about mislabelling and its connection to a fearful/risk society. This mislabelling is all around us. It happens in schools, for example, when nursery staff think a 4-year-old boy’s drawing of a cucumber is a “cooker bomb”, when police think tourists taking photos are terrorists, or when police arrest a man thinking his kitty litter is meth.
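As promised in point 4 above, here is a small sketch of a multilevel network: individual users on one level, organizations on the other, with edges both within and between levels. All names are invented for illustration.

```python
# Sketch of a two-level (multilevel) security network: individual users
# on one level, organizations on the other. All names are invented.
import networkx as nx

M = nx.Graph()
M.add_nodes_from(["user_a", "user_b", "user_c"], level="individual")
M.add_nodes_from(["reddit", "police", "military"], level="organization")

# Between-level ties: affiliation gives users access to org capital
M.add_edges_from([("user_a", "reddit"), ("user_a", "military"),
                  ("user_b", "reddit"), ("user_c", "police")])
# Within-level ties: users interacting; organizations cooperating
M.add_edges_from([("user_a", "user_b"), ("reddit", "police")])

# user_a sits on two organizational structures at once
print(sorted(n for n in M["user_a"] if M.nodes[n]["level"] == "organization"))
```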

Questions

  1. Is crowdsourcing the next policing era?
  2. What drives police hesitation toward police-citizen collaboration?
  3. Is police reluctance to engage in crowdsourcing harming future innovative methods of crime-fighting or police-community engagement?
  4. What are some ways police can better integrate into the community for investigation?
  5. Does the nodal governance theory fit with the crowdsourcing analysis?


Amazon Mechanical Turk

Technology: Amazon Mechanical Turk (MTurk)

Demo Leader: Leanna Ireland

Summary:

Amazon Mechanical Turk (MTurk) is a crowdsourcing platform that connects requesters (researchers, etc.) to a human workforce that completes tasks in exchange for money. Requesters as well as workers can be located all over the world.

Requesters provide tasks and compensation for the workers. The tasks, or human intelligence tasks (HITs), can range from identifying photos and transcribing interviews to writing reviews and taking surveys. When creating a task, requesters can specify worker requirements, such as the number of HITs undertaken, the percentage of HITs approved, or a worker’s location. Other qualifications can be specified for a fee, including US political affiliation, education status, gender, and even left-handedness.

Requesters set the monetary reward. Many HITs on MTurk are set to a relatively low reward (e.g., US $0.10). Some workers choose to pass over work with low payment; others will complete low-paying HITs to increase their HIT approval rates. Requesters pay workers based on the quality of their work: they approve or reject the work completed, and if a worker’s submission is rejected, the monetary reward is not paid.
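These settings can also be supplied through the MTurk API instead of the web interface. A minimal boto3 sketch follows; the survey URL is hypothetical, and the qualification type ID shown is MTurk’s built-in Locale qualification (worth verifying against the current API documentation).

```python
# Sketch: creating a HIT with a reward and worker requirements via the
# MTurk API (boto3). Assumes configured AWS credentials; the survey URL
# is hypothetical, and the qualification ID is the built-in Locale type.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.qualtrics.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Short opinion survey",
    Description="Answer one question about online misinformation.",
    Reward="0.50",                      # dollars, passed as a string
    MaxAssignments=50,                  # number of workers
    LifetimeInSeconds=7 * 24 * 3600,    # HIT expiration
    AssignmentDurationInSeconds=20 * 60,
    QualificationRequirements=[{
        "QualificationTypeId": "00000000000000000071",  # built-in Locale
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    }],
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```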

Overall, MTurk is an inexpensive, rapid form of data collection, often yielding participants more representative of the general population than other Internet and student samples (Buhrmester et al., 2011). However, MTurk participants can differ from the general population. Goodman and colleagues (2013) found, for example, that compared to community samples, MTurk participants pay less attention to experimental materials. In addition, MTurk raises ethical issues given the often low rewards for workers. Completing three twenty-minute tasks for $1.50 apiece, for example, does not allow workers to meet many mandated hourly minimum wages.
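The arithmetic behind that last claim is simple enough to verify:

```python
# The wage math for the example above: three 20-minute HITs at $1.50
# each is one hour of work for $4.50.
reward_per_hit = 1.50      # dollars
minutes_per_hit = 20
n_hits = 3

hourly_wage = (reward_per_hit * n_hits) / ((minutes_per_hit * n_hits) / 60)
print(f"${hourly_wage:.2f}/hour")  # $4.50/hour, below the US federal minimum of $7.25
```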

Just as MTurk is a great source for data collection, it can also be used in nefarious ways. This could include being asked to take a geotagged photo of the front counter of your local pharmacy: an innocent-enough task that could help determine local tobacco prices, or could reveal the location and front-counter security measures of a store. In addition, requesters could crowdsource their own paid work for a lower price, or even crowdsource class assignments to the US population, such as the demo below…

Research about MTurk:

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.

Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), 213-224.

Demo:

  1. Go to the MTurk website.
  2. You will be presented with two options: to either become a worker or a requester. Click ‘Get Started’ under the ‘Get Results from Mechanical Turk Workers’ to become a requester.
  3. To become a requester, for the purposes of the demo, click the ‘get started’ button under the ‘Work Distribution Made Easy’ option.
  4. You will be asked to sign in to create a project on this page. Browse through the various options in the left column. Requesters can launch surveys, ask workers to choose an image they prefer, or ask for their sentiments about a tweet. Simply click ‘sign in to create project’.
  5. You will then be asked to sign in or set up an Amazon account.
  6. After signing into your account, you will be brought to a page with the following tabs: home, create, manage, developer, and help.
  7. To create a new project, click ‘create’ and then ‘new project’ directly below. You will now need to select the type of project you wish to design from the left column. I chose ‘survey link’, as I will be embedding a link from Qualtrics (a web-based survey tool) for this demonstration, so the following instructions are for the survey link option. The survey asks a question from our previous week’s discussion: “What do you think is a bigger problem — incorrect analysis by amateurs, or amplifying of false information by professionals?”
  8. After you have indicated your choice of project, click the ‘create project’ button.
  9. Under the ‘Enter properties’ tab, provide a project name and a description of your HIT for the workers. You will also need to set up your HITs: the reward per assignment, the number of assignments (how many people you want to complete your task), the time allotted per assignment, the HIT expiration, and the time window before payments are auto-approved. Lastly, indicate worker requirements (e.g., location, HIT approval rate, number of HITs approved).
  10. Under the design layout tab, you can use the HTML editor to lay out the HIT (e.g., write up the survey instructions and provide the survey link).
  11. You then can preview the instructions. After you have completed setting up your HIT, you will be taken back to the create tab where your new HIT is listed. To publish the batch, simply click ‘Publish Batch’. You then need to confirm payment and publish.
  12. To view the results and allocate payment, click ‘Manage’ and download the CSV. To approve payment, place an X in the ‘Approve’ column; to reject payment, place an X in the ‘Reject’ column (see the sketch after this list for doing this programmatically). The CSV file is then uploaded back to MTurk, where approvals and rejections are processed and payment is disbursed to the workers.
  13. To download results from MTurk, under the ‘manage’ tab, click ‘Results’ and download the CSV. Or you can download the results from the platform you are using (e.g., Qualtrics).
  14. Lastly, there is an entire community forum for MTurk workers entitled Turker Nation. Workers share tips and techniques and discuss all things MTurk and more (e.g., what HITs to complete but also which HITs or requesters to avoid). This can be a useful site to further advertise your HITs.
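As referenced in step 12, the review step can also be scripted. A pandas sketch is below; batch files name their answer columns after your form fields, so the column used here is a hypothetical example.

```python
# Sketch: bulk review of a downloaded MTurk batch results file. The
# answer column name is hypothetical; batch CSVs name answer columns
# after your form fields ("Answer.<field>").
import pandas as pd

batch = pd.read_csv("Batch_results.csv")  # file name from your download

has_answer = batch["Answer.surveycode"].notna()  # hypothetical field
batch.loc[has_answer, "Approve"] = "x"
batch.loc[~has_answer, "Reject"] = "x"  # per step 12; some flows expect a reason here

batch.to_csv("Batch_results_reviewed.csv", index=False)  # re-upload to MTurk
```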
