Can History Be Open Source?

Paper: Rosenzweig, R. (2006). Can History Be Open Source? Wikipedia and the Future of the Past. Journal of American History, 93(1), 117–146.

Discussion leader: Md Momen Bhuiyan

Summary:
This article summarizes the history of Wikipedia and assesses its importance as a source for historical reference. The author first points out how Wikipedia differs from traditional reference works, which largely consist of singly authored pieces; Wikipedia articles, by contrast, are written by the general public with very few restrictions. Wikipedia also differs from traditional work in being completely open source, with the sole restriction that no further restrictions may be imposed on copied material. The article covers four topics: the history of Wikipedia’s development, how it works, how good its historical writing is, and the potential implications for the professional community.

Wikipedia was founded by Jimmy Wales and Larry Sanger in January 2001 as an open encyclopedia. In March 2000 they had built another encyclopedia, Nupedia, moderated by experts, which had little success. They started Wikipedia as a new approach, and in the hope that its contributors would also contribute to Nupedia. The number of articles in Wikipedia grew quickly, but Sanger did not stay to see this success: he left over his concern about the project’s tolerance of trolls, whom the author characterizes as ‘difficult people’.

Initially, Wikipedia started with no rules. Over time it had to adopt rules to minimize bad outcomes, and it now has a large rule set that can be summarized in four key policies. The first is that Wikipedia is an encyclopedia and nothing more, so it excludes personal essays, criticism, and original research. This goal suits what a large group can accomplish, but it also gives the work of experts and non-experts equal weight, which led to Sanger’s departure from the organization. The second policy is that articles should be written from a neutral point of view (NPOV): Wikipedia positions itself as a third party that takes no side, though neutrality is not always achieved even after extensive discussion. The third policy is “don’t infringe copyrights,” which comes with the licensing terms for Wikipedia content, the GNU Free Documentation License (GFDL). Some scholars have argued that an imperfect resource that is “free” to be used in any way can be more valuable than a gated resource of higher quality. The final policy is “respect other contributors.” Wikipedia initially got by with a minimal set of rules, but it gradually added rules for banning difficult contributors and set up an administrative structure. Considering Wikipedia’s growth, all of this has worked quite well.

The history articles in Wikipedia have various nuances. From a historian’s point of view, they can be incomplete and inaccurate, with a bland prose style, structural issues, and inconsistent attention to detail. The author thought part of the problem was that people write only about things that interest them. To compare contributions to popular Wikipedia articles with other encyclopedias, the author analyzed 25 biographical articles from Wikipedia alongside Encarta and American National Biography Online. Overall, Wikipedia lags behind American National Biography Online but is comparable to Encarta. It was surprising that Wikipedia got people to write long documents with reliable information. The author also notes that “geek culture” has shaped Wikipedia’s coverage, so there are many articles about games or science but comparatively few about art, history, or literature. The author found only four errors, all minor matters of detail, in the 25 articles. Because contributors’ writing styles vary, so do the articles’. Due to the NPOV policy it is hard to find any specific stance in Wikipedia; generally, the bias in an article favors its subject, while collective contribution avoids controversial stands of all kinds. Vandalism of Wikipedia articles can be erased quite easily and quickly compared to other sites; still, some vandalism controversies led Wikipedia to impose a rule requiring registration before editing an article.

Due to its open access, students regularly use Wikipedia as an information source, and Wikipedia results come up at the top of most search engines. Given the large volume of content, Wikipedia is bound to contain wrong information. To address this, teachers can teach their students not to rely heavily on Wikipedia as a source. Another solution is to emulate Wikipedia-like democracy in content sharing and provide free resources from high-quality sources. Many of Wikipedia’s rules are quite conventional, much like academic norms, so it is easy for academics to fit in; this suggests that more historians should contribute there, though they would still have to navigate the ban on original research and collaborate with difficult people. A general problem with history on Wikipedia is that it is popular history rather than professional history. Finally, the author points to the law of large numbers: a group of people can be as effective as an expert, which makes collaborative history books feasible.
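To make the law-of-large-numbers point concrete, here is a minimal simulation (my own illustration, not from the article): each contributor makes an independent, noisy guess at a fact, and the average of many guesses lands close to the truth.

```python
import random

# Hypothetical illustration: each contributor makes an independent, noisy
# estimate of a true value; averaging many such estimates converges toward
# the truth, per the law of large numbers.

TRUE_VALUE = 100.0  # the "correct" fact, e.g., a date or quantity
NOISE = 25.0        # spread of individual contributors' errors

def crowd_estimate(n_contributors: int) -> float:
    """Average n independent noisy guesses around TRUE_VALUE."""
    guesses = [random.gauss(TRUE_VALUE, NOISE) for _ in range(n_contributors)]
    return sum(guesses) / len(guesses)

if __name__ == "__main__":
    random.seed(42)
    for n in (1, 10, 100, 1000):
        print(f"{n:>5} contributors -> estimate {crowd_estimate(n):.2f}")
```

As the crowd grows, the estimate tightens around the true value, which is the intuition behind the author’s claim that a large group can rival a single expert.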

Reflection:
This article gives a good brief on Wikipedia from a historical point of view. Wikipedia does not seem attractive for professional contributions, but it can always be used as an initial reference point. The author’s suggestion about creating history through collaborative work seemed interesting; I have not heard of any such effort yet. While there is merit in such an effort, it also disregards an individual author’s point of view, which might be useful for some readers. It will be interesting to see how the policies have changed since 2006. The author’s use of biographies for comparison was interesting, but I would have wanted judgments from several people for those comparisons. Finally, IP rights and business models have not changed even as the amount of free resources has increased, so there may be some use in having both free and commercial resources.

Questions:
1. How would you design a collaborative history project on a particular topic?
2. Is it possible to design micro-tasks for this type of work? How do you apply the law of large numbers to those tasks?
3. How do you make history interesting?
4. Do you think misinformation in Wikipedia has any real repercussions for students?
5. Do you think giving extra privileges to experts could be useful?


Montage: Collaborative video tagging

URL: https://montage.storyful.com

Demo Leader: Md Momen Bhuiyan

Summary:
Montage is a collaborative site for tagging publicly available videos. The homepage has a login feature using a Google account and a GitHub link to the source code for the website, so anyone can host a similar site. After logging in, users are shown an interface with their existing projects, and they can create new ones. In each project, users can add publicly available YouTube videos to their collection, either by searching or by YouTube URL, and they can add as many videos as they want. The search option supports filtering by date and location, and a project can likewise be filtered by keyword, date, location, etc. A user can add another user to a project by inviting them. Within each video, users can attach a comment at any point in time, tag any segment, and set the video’s location. A video can be starred, marked as a duplicate, archived, or exported as a CSV file, KML file, or YouTube playlist. There is also a tab for sharing project updates among collaborators.
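Montage’s video search presumably sits on top of the public YouTube Data API. The sketch below shows what a keyword-plus-date search might look like against that API; the endpoint and parameters are from YouTube Data API v3, but whether Montage actually implements search this way is an assumption (its real implementation is in the GitHub source), and YOUR_API_KEY is a placeholder.

```python
import requests  # third-party: pip install requests

# Hypothetical sketch of a keyword + date video search using the public
# YouTube Data API v3. Montage's actual implementation may differ; see its
# open-source code on GitHub. YOUR_API_KEY is a placeholder.
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def search_videos(query: str, published_after: str, api_key: str) -> list[dict]:
    """Return basic metadata for public videos matching the query."""
    params = {
        "part": "snippet",
        "type": "video",
        "q": query,
        "publishedAfter": published_after,  # RFC 3339, e.g. "2017-01-01T00:00:00Z"
        "maxResults": 10,
        "key": api_key,
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=10)
    resp.raise_for_status()
    return [
        {"videoId": item["id"]["videoId"], "title": item["snippet"]["title"]}
        for item in resp.json()["items"]
    ]

# Example: search_videos("storyful", "2017-01-01T00:00:00Z", "YOUR_API_KEY")
```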

Reflection:
The site has many features, such as searching, adding, and filtering videos, as well as exporting them. But it is also lacking in many ways. It allows login only through a Google account. It does not let users chat directly during collaboration, which is an essential feature for a collaborative task. Project updates allow only text messages rather than references to specific modifications, and there is no modification history to inspect. Also, users have to go to “My Projects” to log out. It was interesting that one can export video locations as a KML file. Overall, this is a good project that can be extended for other purposes since the code is open source.

How to:
1. First go to https://montage.storyful.com and log in.
2. Initially, users are shown a page with a list of projects.
3. There is a button for creating a new project.
4. Users can add a title, a description, and an image for a project.
5. After clicking on a project, the user is shown the project interface.
6. The top menu has options for searching videos, inviting users, seeing project updates, etc. The side drawer has other options such as all videos, favorites, unwatched, and settings.
7. To add a video to the project, click the search button.
8. Here you can search YouTube videos and add them to the project.
9. To tag or comment on a video, click on it in the project interface. The browser will go to the video interface.
10. Below the video there are buttons labeled “Comment” and “Add tag”.
11. There is a slider to select the time at which the comment/tag should be added.
12. To add an update on the project, click the “Project updates” button in the top menu.
13. To add a collaborator to the project, click the “Invite a collaborator” button in the top menu. It will show a popup asking for the name or email address of the user.
14. Finally, users can export details about a project by marking videos in it. A drawer will emerge from the bottom with an “Export to” option; when clicked, it shows a choice of file format, e.g., KML or CSV. A sketch of reading such a KML export follows this list.
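Since KML is a standard XML format, the video locations in an export can be pulled out with a few lines of Python. This is only a sketch: the assumption that Montage writes each video as a Placemark with a Point coordinate is mine and is not verified against the actual export.

```python
import xml.etree.ElementTree as ET

# Minimal sketch for reading video locations out of a Montage KML export.
# KML is standard XML; the assumption that each video appears as a
# <Placemark> with a <Point><coordinates> element is mine, not verified
# against Montage's actual output.
KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def read_placemarks(path: str) -> list[tuple[str, float, float]]:
    """Return (name, longitude, latitude) for each Placemark in the file."""
    root = ET.parse(path).getroot()
    results = []
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", default="(unnamed)", namespaces=KML_NS)
        coords = pm.findtext("kml:Point/kml:coordinates", namespaces=KML_NS)
        if coords:
            # KML coordinates are "lon,lat[,altitude]"
            lon, lat = coords.strip().split(",")[:2]
            results.append((name, float(lon), float(lat)))
    return results

# Example: read_placemarks("montage_export.kml")
```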


Demo: Truthfinder

Technology: Truthfinder.com
Demo leader: Md Momen Bhuiyan

Summary:
Truthfinder is a commercial people-search website for the US, marketed especially for finding information about a “long lost friend” or “scammers”. It is a fairly new website, launched in March 2015. Anyone can search it using a person’s name or a phone number. The site claims to have millions of public records from local, state, and federal databases as well as independent sources. Under the Freedom of Information Act these public records are available to anyone, but collecting the information from multiple sources is hard, so websites like Truthfinder make it easy to search. The site has two membership tiers, regular and premium. Premium users get more information about a person, such as educational history, current and former roommates, businesses and business associates, voter registration details, possible neighbors, traffic accidents, and weapons permits. Most of this information is only a possible match, with no guarantee of correctness.

Reflection:

The site lists the purposes for which it can and cannot be used, but anyone can misuse it for screening job candidates, stalking, etc. During a search it hints that the records may contain various types of information, which could be a way to scam people into registering on the site. The average review of the website is below 3 [1]. Some reviewers suggested that the information was inaccurate, several years old, or obtainable from a Google search. The only thing that stood out positively was the opt-out option, which is mandated by the FTC [2]. Still, the site could be used by journalists for verification purposes.

How to use:
1. The website has 5 options for searching: People Search, Criminal Records, Court Records, Reverse Phone Search, Deep Web Search.
2. Other than the reverse phone lookup, all of them require the person’s name along with optional location information.
3. After the search button is clicked, the site asks for the gender of the individual.
4. It then shows some graphics with a list of things it is searching and asks for the relevant person’s location information.
5. After that, a list of possible matches is shown along with each match’s location and age.
6. If “open report” is clicked for a matched name, the site shows some further processing, an alert that the information might be embarrassing, and an agreement on the usage policy.
7. After that it goes to the user registration page, where you have to give your name, email address, and payment information.

The whole process takes longer than five minutes.

[1] https://www.highya.com/truthfinder-reviews
[2] https://www.ftc.gov/news-events/media-resources/protecting-consumer-privacy/enforcing-privacy-promises


Looking to the Sky: Monitoring Human Rights through Remote Sensing

Article:

Edwards, S., & Koettl, C. (2011). Looking to the Sky: Monitoring Human Rights through Remote Sensing. Harvard International Review, 32(4), 66–71.

Discussion leader: Md Momen Bhuiyan

Summary:
This article reviews the use of remote sensing tools, especially space-based platforms, for human rights research in active conflict zones. The two main challenges in monitoring human rights in a conflict zone are that observers usually cannot get access to these places, and that the evidence collected consists mostly of limited eyewitness accounts, which are not powerful enough to have significant impact. The authors use examples from Darfur, Sri Lanka, and South Ossetia to persuade readers of the necessity of remote sensing tools in such cases.

The authors start by describing the impunity armed actors enjoy when a conflict zone is in a remote area. In such places, external observers like human rights NGOs have very limited mobility, which prevents them from gaining direct access to information about human rights violations. As a result they have to rely on second-hand testimony. Although these testimonies are corroborated and cross-checked, they often fail to make an impact because the responsible actors have ways of frustrating the claims; the perpetrators’ standard responses range from denial to deferral.

To overcome this situation in Sudan, Amnesty International launched a remote sensing project named “Eyes on Darfur” in June 2007, in partnership with the American Association for the Advancement of Science (AAAS). The project pursued two goals at the same time. The first was to gather irrefutable evidence of the destruction of villages by presenting before-and-after satellite images of the attacked villages. The second was to act as a deterrent by regularly monitoring high-risk villages. Amnesty International has also used remote sensing tools in Sri Lanka and South Ossetia to find evidence of war crimes.
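The before-and-after comparison at the heart of “Eyes on Darfur” can be sketched, in a very crude form, as pixel-level change detection. The snippet below is only illustrative: AAAS’s actual analyses rely on expert image interpretation, and the file names and threshold here are assumptions of mine.

```python
import numpy as np
from PIL import Image  # third-party: pip install pillow numpy

# Illustrative only: a crude change-detection pass over before/after
# satellite images, reporting the fraction of pixels whose brightness
# changed sharply. Assumes the two images are co-registered and the same
# size. Real analyses (e.g., AAAS's work for "Eyes on Darfur") rely on
# expert interpretation; file names and threshold are assumptions.

def change_fraction(before_path: str, after_path: str, threshold: int = 60) -> float:
    """Fraction of pixels whose grayscale intensity shifts by more than threshold."""
    before = np.asarray(Image.open(before_path).convert("L"), dtype=np.int16)
    after = np.asarray(Image.open(after_path).convert("L"), dtype=np.int16)
    diff = np.abs(after - before)            # per-pixel intensity change
    return float((diff > threshold).mean())  # share of strongly changed pixels

# Example (hypothetical files): a large value from
# change_fraction("village_2006.png", "village_2007.png")
# could flag large-scale destruction between the two acquisition dates.
```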

Although remote sensing technology like satellite imagery can serve many purposes, such as detecting massacres, secret detention facilities, housing demolitions, and troop gatherings, it has the limitation that the crime must have a clear physical effect in space. It cannot document atrocities like torture, systematic oppression, or genocidal intent. Satellite images can therefore only be used as a complement to traditional fieldwork.

Reflection:
In this article the authors highlight the application of remote sensing technologies to human rights research up to 2010. Although remote sensing has now been in use for a decade, it has not had the impact the authors were hoping for. For example, one paper [1] suggests that the government of Sudan increased violence in Darfur in retaliation for the constant monitoring. The number of conflict zones around the world has increased, and human rights advocacy groups still have to rely on individual stories to raise awareness, for example, the little boy in Aleppo [2].

Question:
1. Does constant monitoring provide any benefit in a conflict zone?
2. Given that satellite images need careful analysis by experts, can crowdsourcing be used in this context? What are the ethical issues in that case?
3. Can drones be used as an alternative monitoring tool? What would the benefits be?
4. Can remote sensing tools be used to predict future conflicts?

 

[1] Gordon, Grant. “Monitoring Conflict to Reduce Violence: Evidence from a Satellite Intervention in Darfur.” (2016).
[2] http://www.cnn.com/2016/08/17/world/syria-little-boy-airstrike-victim/index.html


Doxing: A Conceptual Analysis

Paper:
Douglas, D. M. (2016). Doxing: a conceptual analysis. Ethics and Information Technology, 18(3), 199–210.
Discussion leader: Md Momen Bhuiyan

Summary:
In this paper the author discusses doxing, the intentional release onto the Internet of someone’s personal information by a third party, usually with the intent to harm, from a conceptual standpoint, categorizing it into three types: deanonymizing, targeting, and delegitimizing. Although doxing is a fairly old practice, the recent “Gamergate” incident has stirred public interest in it. The author also discusses how the practice differs from other privacy-violating activities. Finally, he tries to justify some deanonymizing and delegitimizing doxing in cases where releasing personal information is necessary to reveal wrongdoing.

From Marx’s point of view, revealing any personal information removes some degree of the subject’s anonymity. The author uses Marx’s seven types of identity knowledge as a reference for the kinds of personal information that can be used in doxing. He distinguishes doxing from blackmail, defamation, and gossip: the first requires a demand of the subject, the second requires the information to be damaging to the subject, and the third is usually hearsay. He then uses Marx’s rationale for anonymity to discuss its value.

Deanonymizing doxing is revealing the identity of someone who was previously anonymous. The author uses two examples to illustrate this: “Satoshi Nakamoto”, the creator of Bitcoin, and “Violentacrez”, a Reddit moderator. Targeting doxing, which usually follows deanonymizing doxing, is revealing specific information about someone that can be used to physically locate that person; it makes the subject vulnerable to a wide range of harassment, from pranks to assault. Delegitimizing doxing is releasing private information about someone with the intent to undermine the subject’s credibility; sexuality is commonly used in this context, and this form of doxing has the potential to create “virtual captivity”. Delegitimizing doxing goes hand in hand with targeting doxing, where the first provides the motive for harassment and the second provides the means. This combination is illustrated by the “Gamergate” incident, in which a former boyfriend of the subject posted her personal details, resulting in prolonged harassment.

To justify some doxing, the author interprets Bok’s claim about public interest: the public has a legitimate interest in all information about matters that might affect its welfare. He puts the burden of proof on the individual who attempts doxing and argues that only releasing the specific information relevant to revealing a wrongdoing is justified. While in the case of “Satoshi Nakamoto” the public interest does not seem to justify doxing, in the case of “Violentacrez” it was justified, since it held him accountable and he stopped participating in hate speech. The author also concludes that doxing does not have to be accurate to be harmful.

The author then describes objections to this justification. The first objection is that deanonymizing doxing promotes other forms of doxing, so it should be rejected on the same grounds as targeting doxing. Another objection is that the costs and harms of deanonymizing outweigh its social benefits; for example, deanonymizing doxing can be used as a tool to intimidate dissenting views, so other forms of justice should be considered. In the case of “Violentacrez”, there was an alternative: Reddit could have deleted his comments. Although that conflicts with freedom of expression, it is justified if freedom of expression is not treated as an absolute right that cannot be limited by other rights. Another response is that accountability should go both ways when deanonymizing someone, but this accountability does not by itself justify doxing, since those revealing information may be able to afford other protections, such as a costly legal battle.

Reflection:
The first thing noticeable in the paper is that the author confines doxing to individual targets. Furthermore, he usually refers to the victim as female, which might seem appropriate given the recent doxing trend, but it ignores one of the top perpetrators of doxing, Anonymous. The author does not note that delegitimizing doxing can be categorized as defamation. He also discusses gossip in a similar context even though, by definition, gossip does not involve publicly releasing information on the Internet. He could have mentioned the “Boston bombing” as an example of the harm of misinformed doxing.

The paper does a good job of categorizing doxing using motive as the prime factor. Although the author treats each category in sufficient depth, he does not cover many examples of them. He mentions that the burden of justification falls on the doxxer but does not provide any details from the doxxers’ side when discussing the examples. Finally, the author’s explanation of his justification and its criticisms was insightful.

Questions:
1. Is whistleblowing justified?
2. Is doxing in journalism justified?
3. How do you establish public interest in justification of doxing?
4. To what extent can crowdsourcing be used for doxing?
5. How do we prevent doxing?


Emerging Journalistic Verification Practices Concerning Social Media

Paper:
Brandtzaeg, P. B., Lüders, M., Spangenberg, J., Rath-Wiggins, L., & Følstad, A. (2016). Emerging Journalistic Verification Practices Concerning Social Media. Journalism Practice, 10(3), 323–342.
https://doi.org/10.1080/17512786.2015.1020331

Discussion Leader: Md Momen Bhuiyan

Summary:
Social media content has recently come into wide use as a primary source of news. In the United States, 49 percent of people get breaking news from social media, and one study found that 96 percent of UK journalists use social media every day. This paper characterizes journalistic values, needs, and practices concerning the verification of social media content and sources. Its major contribution is a requirements analysis, from a user perspective, for the verification of social media content.

The authors use a qualitative approach to answer several questions, such as how journalists identify contributors, how they verify content, and what the obstacles to verification are. By interviewing 24 journalists working with social media at major news organizations in Europe, they divided verification practices into five categories. First, if content is published by a trusted source, such as a popular news organization, the police, a fire department, a politician, or a celebrity, it is usually considered reliable. Second, journalists use social media to get in touch with eyewitnesses; an eyewitness’s reliability is checked by whether a trusted organization follows them and by their previous record, and journalists also have to check for conflicting stories. Third, journalists still prefer traditional methods like direct contact with people. Fourth, for multimodal content such as text, pictures, audio, and video, they use tools like Google, NameChecker, Google Reverse Image Search, TinEye, Google Maps, and Street View, though they have large gaps in knowledge about these tools. Finally, if they cannot verify content, they use workarounds like disclaimers.
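Reverse image search tools like TinEye rest on image fingerprinting. A toy version of that idea, a perceptual “average hash,” fits in a few lines; this is a generic textbook technique, not the actual algorithm TinEye or Google uses, and the file names are placeholders.

```python
from PIL import Image  # third-party: pip install pillow

# Toy "average hash" fingerprint, the basic idea behind reverse image
# search: near-duplicate images produce nearly identical hashes even after
# resizing or recompression. Generic technique, not the actual algorithm
# TinEye or Google uses; file names below are placeholders.

def average_hash(path: str, size: int = 8) -> int:
    """64-bit fingerprint: 1 where a pixel is brighter than the image mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small values suggest the same source image."""
    return bin(a ^ b).count("1")

# Example: hamming(average_hash("claimed_photo.jpg"), average_hash("archive.jpg"))
# close to 0 suggests the same image reused in a new, possibly false, context.
```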

By looking into the journalists’ user-group characteristics and their context, the authors identify several potential user requirements for verification tools. Journalists need an efficient, easy-to-use tool that can organize huge amounts of data and make sense of them. They also need it integrated into their current workflow. The tool needs to offer high-speed verification and publication, as well as accessibility from different types of devices. Another requirement is that journalists need to understand how the verification takes place. Furthermore, the tool needs to support verification of multimodal content.

Finally, the authors discuss the limitations of both the study sample and the findings. In spite of these limitations, the study provides a valuable basis for requirements for the verification of social media content.

Reflection:
Although the study makes a good contribution regarding requirements for verification tools in news organizations, it has several shortcomings. The study sample was drawn from several countries and organizations, but it does not include any major organization, which begs the question: how do major organizations like the BBC, CNN, AP, and Reuters verify social media content? How do they define trusted sources? How do they follow private citizens?

The study also does not compare younger and older journalists or how their verification processes differ. It notes that young and female journalists have better experience with technologies, but it does not examine whether their respective verification processes differ. All in all, further research is necessary to address these questions.

Questions:
1. Can verification tools help gain public trust in news media?
2. What are the limitations of verification tools for multimodal content?
3. Can AI automate the verification process?
4. Can journalism be replaced by AI?
