Integrating On-demand Fact-checking with Public Dialogue

Paper:

Kriplean, T., Bonnar, C., Borning, A., Kinney, B., & Gill, B. (2014). Integrating On-demand Fact-checking with Public Dialogue. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 1188–1199). New York, NY, USA: ACM. https://doi.org/10.1145/2531602.2531677

Discussion Leader: Sukrit V

Summary:

This article explores the design space for inserting accurate information into public discourse in a non-confrontational manner. The authors integrate a request-based fact-checking service into an existing communication interface, ConsiderIt – a crowd-sourced voters guide. This integration reintroduces professionals and institutions – namely, librarians and public library systems – into crowdsourcing systems, where they have until now been largely overlooked.

The authors note that existing communication interfaces for public discourse often fail to help participants identify which claims are factual. The article first examines the different sources of factually correct information and the format in which it should be conveyed to participants. The authors then discuss who performs the work of identifying, collating, and presenting this information: professionals, or the crowd itself. Lastly, where this information is presented is crucial: through single-purpose sites such as Snopes or PolitiFact, embedded responses, or overlays in chat interfaces.

Their system was deployed in the field during the course of a real election with voluntary users – the Living Voters Guide (LVG) – and used librarians from the Seattle Public Library (SPL) as the fact-checkers. Initial results indicated that participants were not opposed to the role played by these librarians. One key point to note is the post-verification labeling of points as accurate, unverifiable, or questionable. The term “questionable” was deliberately chosen because it is more considerate of users’ feelings than the negative connotation of “wrong” or a red X.

The rest of the article discusses how the authors balanced informing LVG users which pro/con points were factual against remaining non-confrontational. The decision to request a fact-check rested with the LVG participants, and the check was performed only on the factual component of a claim and presented in an easy-to-assess manner. The SPL librarians, for their part, played a crucial role in determining the underlying features of the fact-checking mechanism.

In the results, the authors establish that there was a demand for a request-based fact-checking service, and that the SPL librarians were viewed and welcomed as trustworthy participants, which simultaneously improved the credibility of the LVG interface. Using Monte Carlo simulations that account for temporal effects, the authors also demonstrate an observable decrease in commenting rates after fact-checking.

In closing, the authors note that the journalistic fact-checking framework did not interface well with librarian referencing methods. Their implementation also provided no facility for direct communication among the librarians, the user whose point was being checked, and the requester. The manner in which fact-checks were displayed tended to dominate the discussion section and possibly caused a drop in comment rates. Some librarians felt they were exceeding their professional boundaries when determining the authenticity of certain claims – especially those pertaining to legal matters.

Reflections:

The article makes good headway in creating an interface that nudges people toward finding common ground. It does so by bringing unbiased professionals and institutions – namely, librarians and the Seattle Public Library – into a communication interface.

The involvement of librarians – who are still highly trusted and respected by the public – is notable. These librarians help LVG participants find verified information on claims amidst a deluge of conflicting information presented by other users and by the internet at large. One caveat – rectifiable only through changes in existing laws – is that librarians cannot perform legal research; they are only allowed to provide links to related information.

On one hand, I commend the authors' efforts to introduce a professional, unbiased fact-checker into a communication system filled with (possibly) misinformed and uninformed participants. On the other, I question the scalability of such efforts. The librarians set a 48-hour deadline for responding to requests, and in some cases it took up to two hours of research to verify a claim. Perhaps this system would benefit from a slightly tweaked learnersourcing approach that aggregates crowd responses and escalates to expert evaluation only when needed, as sketched below.
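
To make that suggestion concrete, here is a minimal sketch of such an aggregate-then-escalate scheme. The verdict labels follow the paper's categories, but the function, threshold, and aggregation logic are my own invention, not anything the authors built:

```python
from collections import Counter

def aggregate_verdicts(crowd_labels, expert_check, threshold=0.8):
    """Accept a crowd verdict when agreement is strong; otherwise
    escalate to an expert (e.g., a librarian). The labels follow the
    paper's categories, but this aggregation scheme is hypothetical."""
    counts = Counter(crowd_labels)
    label, votes = counts.most_common(1)[0]
    if votes / len(crowd_labels) >= threshold:
        return label          # strong consensus: accept the crowd verdict
    return expert_check()     # weak consensus: defer to the expert

# Two of three crowd responses agree (67% < 80%), so this request
# is escalated to the librarian's own judgment.
verdict = aggregate_verdicts(
    ["accurate", "questionable", "accurate"],
    expert_check=lambda: "questionable",
)
```

Under a scheme like this, librarian time would be spent only on contested claims, which could ease the hours-per-request research burden.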

Their Monte Carlo analysis was particularly useful in determining whether the fact-checking had any effect on comment frequency, as opposed to temporal effects alone (a toy version of such a test is sketched below). I also appreciate the Value Sensitive Design approach the authors use to evaluate the fact-checking service from the viewpoints of the direct and indirect stakeholders. The five-point Likert scale the authors employ also allows some flexibility in gauging stakeholder opinion, as opposed to binary responses.
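
For readers unfamiliar with the technique, a permutation-style version of such a test might look like the following. This is my own sketch, not the authors' actual procedure; timestamps are assumed to be in hours, and all names and window sizes are hypothetical:

```python
import random

def rate(timestamps, start, end):
    """Comments per hour within the half-open window [start, end)."""
    return sum(1 for t in timestamps if start <= t < end) / (end - start)

def drop(timestamps, t_check, window=48.0):
    """Decrease in commenting rate across a fact-check at time t_check."""
    return (rate(timestamps, t_check - window, t_check)
            - rate(timestamps, t_check, t_check + window))

def monte_carlo_p(timestamps, t_check, horizon, sims=10_000, window=48.0):
    """Share of randomly placed pseudo-fact-check times whose rate drop
    is at least as large as the observed one. Because comment volume
    decays naturally over a deployment, this null distribution absorbs
    the background temporal trend."""
    observed = drop(timestamps, t_check, window)
    hits = sum(
        drop(timestamps, random.uniform(window, horizon - window), window)
        >= observed
        for _ in range(sims)
    )
    return hits / sims
```

The point of the random placements is that any natural decay in commenting shows up in the null distribution too, so only a drop beyond that background trend counts as evidence of a fact-check effect.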

Particularly noteworthy is how ConsiderIt, their communication interface, uses a PointRank algorithm to highlight points that have been more highly scrutinized. Additionally, the system's structure inherently disincentivizes gaming of the fact-checking service. The authors mention triaging requests to handle malicious users and pranksters; perhaps this initial triage could be automated instead of relying on human input, as in the sketch below.
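
As a rough illustration of what automated triage might look like, here is a minimal rule-based filter. Every rule and threshold is invented for illustration; in the paper, triage was performed by people:

```python
def triage(request_text, prior_requests_fulfilled):
    """Route an incoming fact-check request before it reaches a librarian.
    All rules and thresholds here are hypothetical, for illustration only."""
    text = request_text.strip()
    if len(text) < 15:                      # too short to contain a claim
        return "auto-reject"
    if text.isupper() or "!!!" in text:     # shouting: likely a prank
        return "human-review"
    if prior_requests_fulfilled == 0:       # unknown requester: human look
        return "human-review"
    return "librarian-queue"                # looks legitimate: queue it
```

Even a filter this crude would let librarians spend their limited time on requests that at least resemble genuine claims, with borderline cases still routed to a human.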

I believe this on-demand fact-checking system shows promise, but it will only truly function at a large scale if certain processes are automated and handled by software. Further, a messaging interface in which the librarian, the requester of the fact-check, and the original poster could converse directly would be useful. Then again, perhaps that would defeat the purpose of a transparent fact-checking system and undermine the whole point of a public dialogue system. Additionally, the authors note that there is little evidence that participants' short-term opinions changed, and I am unsure how to evaluate whether opinions change in the long term.

Overall, ConsiderIt’s new fact-checking feature considerably improves the LVG user experience and integrates the work of professionals and institutions into a “commons-based peer production.”

Questions:

  • How, if possible, would one evaluate long-term change in opinion?
  • Would it be possible to introduce experts in the field of legal studies to aid librarians in the area of legal research? How would they interact with the librarians? What responsibilities do they have to the public to provide “accurate, unbiased, and courteous responses to all requests”?
  • How could this system be scaled to accommodate a much larger user base, while still allowing for accurate and timely fact-checking?
  • Are there certain types of public dialogues in which professionals/institutions should not/are unable to lend a hand?