A way to quantify media bias?

Paper:

Ceren Budak, Sharad Goel, and Justin M. Rao. Fair and Balanced? Quantifying Media Bias through Crowdsourced Content Analysis. Public Opinion Quarterly, Volume 80, Issue S1, 1 January 2016, Pages 250–271.

Discussion Leader:

Lawrence

Summary:

We all believe that certain news organizations have an agenda they try to push, but there has been no reliable way to quantify it, since any attempt invites the evaluator's own bias. According to this paper, however, a combination of machine learning and crowdsourced work can bring selection and framing characteristics to light. The authors studied fifteen media outlets using 749 human judges, classifying over 110,000 of more than 800,000 articles as political. They found that, with the exception of certain types of news reports (mostly political scandals), most major news operations provide largely unbiased coverage, and that in the event of a political scandal, organizations tend to criticize the opposing ideology rather than directly advocate for whatever they believe.

News organizations were graded on a scale of -1 to 1 according to how much they slanted to the left or right, respectively. News stories turned out to be surprisingly close in slant compared with opinion-based stories. Outside of the blog sites, results were as little as 0.16 points apart (on a 2-point scale).
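The outlet-level scoring can be sketched as a simple average of judges' ratings. This is an illustrative toy, not the paper's actual data or pipeline; the outlet names and ratings below are invented:

```python
# Toy sketch: average crowdworker slant ratings per outlet on the
# -1 (left) to +1 (right) scale, then measure the spread between outlets.
from statistics import mean

# Hypothetical ratings from several judges per outlet.
ratings = {
    "outlet_a": [-0.2, -0.1, -0.15],
    "outlet_b": [0.0, 0.05, -0.05],
    "outlet_c": [0.1, 0.05, 0.15],
}

slant = {outlet: mean(rs) for outlet, rs in ratings.items()}
spread = max(slant.values()) - min(slant.values())
print({o: round(s, 2) for o, s in slant.items()}, round(spread, 2))
```

With real data, a spread this small across mainstream outlets is what the paper's "closely slanted" finding would look like.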

Reflections:

For the first time, it has been shown with numbers that there is a measurable difference between straight news reports and articles of an opinionated nature. This study helps point out that there are outlets which are not giving pure information to viewers, and it is a good starting point for recognizing and categorizing media bias on either side of the political spectrum. The problem of not being well informed about both sides of an issue, however, becomes much more apparent now that there is a way to clearly draw the line between the news you hear and the news that is available. My biggest criticism is that the authors complained about how much more time was needed to properly run the study; since no one else was running a similar study, I think they could have taken the time they needed. They went through over 800,000 articles and then complained about the amount of time they had left, yet I was unable to find any sort of follow-up.

There was also a great deal of room for error, as workers were at times given only three answer choices. That means a random guess had a 33% chance of landing on the expected answer, which is still a sizable chance of guessing correctly.

Questions:

  • Since there is an actual line which divides left and right news feeds, would it benefit a viewer to watch a news feed which opposes their own views?
  • Was it a surprise that non-opinion-based news fell closer to neutral on both sides?
  • Is it the responsibility of a news network to make sure they are being neutral in the information they give to their viewers?


Demo: Kali Linux

Technology: www.kali.org

Demo Leader: Lawrence

Disclaimer: Though technically possible, it is illegal to perform hacking, cracking, or penetration testing on any network or system which you do not own. The purpose of this OS is to assist in personal testing so as to protect against adversaries.

Summary:

Kali Linux is a Debian-based Linux distribution aimed at advanced penetration testing and security auditing. Kali contains several hundred tools geared towards various information security tasks, such as penetration testing, security research, computer forensics, and reverse engineering. Released in March 2013, Kali Linux comes complete with several hundred pre-installed tools, including but not limited to injection testing, password cracking, GPS packages, vulnerability analysis, and sniffing and spoofing. There is a detailed list on the website if you would like to browse them.

Reflection:

Kali Linux was created as an offensive toolkit to allow users to effectively test security in their homes and on their own devices. It is specifically designed to meet the requirements of professional penetration testing and security auditing, which is why it is built somewhat differently than the average OS. It is not recommended for anyone looking for a general-purpose OS, or even expected to be functional outside of penetration testing, as there is a very limited number of repositories trusted to work on Kali. Kali Linux is made with a high level of customization in mind, but you will not be able to add random unrelated packages and repositories and have them work without a fight. In particular, there is absolutely no support for the apt-add-repository command, Launchpad, or PPAs; trying to install popular programs such as Steam on your Kali desktop will more than likely not succeed. Even for experienced Linux users, Kali can pose challenges. Although Kali is an open source project, it is not a wide-open source project, for reasons of security. The rule of this OS is that not knowing what you are doing is no excuse for doing irreversible damage to a system, so use it at your own risk.

How To Use:

As Kali is natively compatible with ARMEL and ARMHF boards, I will be demonstrating the process of installing it onto a Raspberry Pi 3 Model B.

  1. Obtain a Raspberry Pi kit. There are several options depending on your needs.
  2. Obtain a fast microSD card of at least 8 GB capacity.
  3. Download the special Kali Raspberry Pi 2 image from the downloads area.
  4. Write this image to the SD card (be extremely careful, as this step can erase your hard drive if you select the wrong drive to flash).
  5. Voilà! You may now purchase a wireless adapter capable of wireless injection for testing, or just run the tools using the Pi's built-in wireless card.
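Step 4 is the dangerous one, so here is a dry-run sketch of how the imaging might be done on Linux with `dd`. The image filename and the `/dev/sdX` device name are placeholders (check `lsblk` for your card's real device node); the script only prints the commands instead of executing them:

```shell
# Placeholder names -- substitute the actual image you downloaded and the
# actual device node for your SD card (verify with `lsblk` first!).
IMG=kali-linux-rpi.img.xz
DEV=/dev/sdX

# Dry run: print the commands instead of running them, since dd will
# happily overwrite whatever device you point it at.
echo "xz -d ${IMG}"
echo "sudo dd if=${IMG%.xz} of=${DEV} bs=4M status=progress conv=fsync"
```

When you run the real thing, `conv=fsync` forces the data to be flushed to the card before `dd` exits, so it is safe to eject afterwards.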


Demo: WiiBrew

Technology: Wiibrew.org

Demo Leader: Lawrence

Disclaimer: Doing things like this can void the warranty on your system and allow you to do things which are not legal under the Digital Millennium Copyright Act (DMCA). I do not endorse or encourage taking these actions, and I would advise against any of it if you are not sure what you are doing.

Summary:

WiiBrew is a hack you can perform on a Nintendo Wii game system. First you run one of several available exploits; the exploit crashes the Wii and runs unsigned code in the form of a .dol/.elf file on the root of your SD card. All you need is an SD card larger than 256MB and a game title from the list provided on the wiki page, or, if you do not have one of the listed games, you can run an exploit called LetterBomb instead. The website makes sure to discourage piracy and will not troubleshoot issues when someone tries to run an exploit with a pirated game. The act of installing homebrew and using the intended apps is not inherently illegal; the issue comes when people use the unlocked systems to play ROMs. It is currently illegal to play non-authentic copies of games, even if you own the original copy and obtained it legally.

Reflections:

WiiBrew is a way to unlock the full potential of your Wii system; it even allows you to play DVD movies (not possible on a standard system). The catch is that once this is done, your Wii is technically a computer and has, for the most part, the same capabilities as any other home computer. You can even load a special version of Linux which is compatible with all Wii peripherals as well as a standard mouse and keyboard, while also maintaining the DVD drive functionality.

A quick comparison with other super-portable computers:

                  Nintendo Wii    Intel Compute Stick   Acer Chromebook
Price             $79.99          $258.99               $249.99
CPU               PowerPC         Intel Core m3         Intel Celeron
# of Cores        1               2                     4
CPU Speed         729 MHz         1.6 GHz               1.6 GHz
GPU               ATI Hollywood   Integrated            Integrated
GPU Speed         243 MHz         N/A                   N/A
RAM               2GB             4GB                   4GB
Internal Storage  512MB           64GB                  32GB
Optical Drive     Yes             No                    No
Card Reader       Yes (SDXC)      Yes (SDXC)            Yes
LAN               No              No                    No
WLAN              Yes             Yes                   Yes
USB 2.0           2               1                     1
USB 3.0           0               1                     1

How to install using LetterBomb:

  1. Go to please.hackmii.com.
  2. Select your region.
  3. Enter your Wii's MAC address into the boxes.
  4. Make sure the HackMii installer is bundled with your download (check the box).
  5. Click "Cut the red wire" to download the .zip file, then unzip it.
  6. Copy the "private" folder and "boot.elf" to your SD card (a 4GB non-SDHC card is recommended).
  7. Insert the SD card into your Wii and go to the Wii Message Board.
  8. You should see a red envelope; opening it launches the exploit.
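Step 6 amounts to copying two items to the root of the card. The sketch below simulates it with temporary directories standing in for the unzipped download and the mounted SD card, since the real paths vary by system:

```shell
# Stand-ins for the unzipped LetterBomb download and the SD card mount
# point; on a real system these would be your download folder and the
# card's actual mount path.
SRC=$(mktemp -d)/LetterBomb
SD=$(mktemp -d)/sdcard
mkdir -p "$SRC/private" "$SD"
touch "$SRC/boot.elf"

# The actual step: both items go on the root of the card.
cp -r "$SRC/private" "$SD/"
cp "$SRC/boot.elf" "$SD/"
ls "$SD"
```

The Wii looks for "private" and "boot.elf" at the top level of the card, so do not nest them inside another folder.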


It's Not Steak and Lobster, But Maybe It Can Become That!

Paper:

Gang Wang, Christo Wilson, Xiaohan Zhao, Yibo Zhu, Manish Mohanlal, Haitao Zheng, and Ben Y. Zhao. 2012. Serf and turf: crowdturfing for fun and profit

Discussion Leader:

Lawrence Warren

Summary:

Remarkable things can be done with the internet at your side, but of course with great power comes great criminal activity. Crowdsourcing systems pose a unique threat to security mechanisms due to the nature of how security is approached, and this paper points out the existence of malicious crowdsourcing systems. Because these systems engage in astroturfing (information dissemination campaigns that are sponsored by an organization but obfuscated so as to appear spontaneous), they are referred to as crowdturfing systems: systems where a customer can initiate a campaign and users receive money for completing tasks which violate accepted user policies. The paper describes two types of crowdturfing structures, distributed and centralized, and both need three key actors in order to operate: customers, agents, and workers.


The distributed structure is organized around small groups hosted by a group leader. It is resistant to external threats and easy to dissolve and redeploy, but it is not popular due to its fragmented nature and lack of accountability.

The centralized structure is more like Mechanical Turk: more streamlined, and more popular because of it. However, that same open availability is what allows for infiltration.

Well-known crowdturfing sites are running strong, and this is a global issue; several have adopted PayPal, which extends their reach to more countries than the one they operate within.

Reflections:

This paper shows the darker side of crowdsourced work, and the results are astonishing. Millions of dollars have been spent by companies to bypass spam filters and gain more exposure, and crowdturfing seems to be the new way to spread such information. Sites like this are not afraid of legal backlash and have increased in popularity despite threats from law enforcement. Large information cascades have been pushed past most filters designed to catch automated spam, while clicks from real users remain the end goal.

Questions:

  • Will machine learning need to advance in order to filter out spam from crowdturfing?
  • Is there any fault on the part of users when it comes to the success of such systems?
  • Why do you think Weibo and ZBJ were used as opposed to Twitter and Mechanical Turk (other than research location)?
  • In this growing industry, are there any negative results which can be seen from a company's point of view?


Crowd Powered Threat

Paper:

Lasecki, W. S., Teevan, J., & Kamar, E. (2014). Information Extraction and Manipulation Threats in Crowd-powered Systems

Discussion Leader:

Lawrence Warren

Summary:

In automated systems there is sometimes a gap which machine learning, at current technology standards, has not been able to overcome. Crowdsourcing seems to be the popular solution in some of these cases, since such tasks can become cumbersome and overwhelming for a single individual to handle. Systems such as Legion:AR or VizWiz use human intelligence to solve problems and can potentially share sensitive information. This could lead to several issues if even a single malicious person is present in a session: workers can have access to addresses, phone numbers, and birthdays, and in some cases can possibly extract credit card numbers. There is also the possibility of a worker attempting to sway results in a specific direction in the event the session is not incognito. This paper describes experiments, run as Mechanical Turk surveys, to see how likely it is that a worker has malicious intent, how likely it is that a person will pass on information they should not, and how likely a worker is to manipulate test results.

Reflections:

This paper brought up a few good issues regarding information security in crowdsourced systems. My biggest criticism is that no innovative mitigations were created, or even proposed, to protect against the attacks. Machine learning was mentioned as a method to blank out possibly sensitive information, but beyond that the paper makes it seem as if there is no way to stop these attacks other than removing the information from the worker's view. Finding reliable workers was mentioned as a solution, but that entails interviewing and vetting people, which removes the benefits of crowdsourcing the work. Though informative, this paper in my opinion did not make any headway toward an answer, nor did it dig up any new threats; it listed ones we were already aware of and gave generic solutions which are in no way innovative.

Questions:

  • This paper describes two different types of vulnerabilities to which a crowd-powered system is exposed. Can you think of any other possible threats?
  • Are there any directions crowdsourced work can take to better protect individuals' information?
  • Crowdsourced work is becoming increasingly popular in many situations. Is there a way to completely remove either of the two potential attack scenarios listed in this paper, aside from automation?


Ad Hoc Crowd-sourced Reporters on the Rise

Paper:

Agapie, E., Teevan, J., & Monroy-Hernández, A. (2015). Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. In Third AAAI Conference on Human Computation and Crowdsourcing.

Discussion Leader: Lawrence Warren

Summary:

In this great age of social networks and digital work, it is easy to think that any job or task can or should be done online; however, there are still a few tasks which inherently require the physical presence of a real person. This paper identifies a hybrid method in which tasks are handled by a group of individuals in an area of interest, supervised by an offsite coordinator. There were four main insights in this study:

  1. Local workers needed to overcome physical limitations of the environment
  2. Local workers have greater engagement with the event attendees
  3. Local workers needed to assure that information collected fulfilled the requirements set by the remote coordinator
  4. Paid workers offer more fact-based reports, while volunteers offer richer context

In this hybrid model, tasks were divided up and assigned to one of four roles (reporter, curator, writer, and workforce manager). The model was used on 11 local events of various sizes, durations, and accessibility, most of which were publicly advertised and not expected to receive much news coverage or blogger presence. Local reporters attended the events in question, during which they completed a set of assigned tasks that had been decomposed based on what aspect of a particular event was to be covered. The curator was the quality-control portion of the model, making sure information was provided in a timely manner and was not plagiarized. Based on the curator's feed, the writers then created short articles called listicles, which were easy to write and easy to understand for anyone who was not an expert. All of this happened while the manager oversaw every part of the process, being familiar with the requirements of every step.

Reflections:

This model seems, in my opinion, to have several similarities to how news can be done correctly. It is not feasible to have a professional reporter at every event, but it is possible to employ satellite workers for smaller events and have their work pass through a series of professionals before being published, so as not to miss anything which may be insignificant to someone outside a specific community but very important to those in direct contact with community events. The main issue with dividing up work tasks, information fragmentation, was also addressed within this paper. Tasks have to be assigned in such a way that there is overlap in information collection, or else reporters with different writing styles or levels of experience will create discrepancies and missing information.

Probably the most interesting results of this paper, in my opinion, center on the quality of the articles. I am in no way doubting the effectiveness of the technique; however, the way this experiment was set up, it did not have much to compare itself to. Small local events with no existing coverage were used, and the resulting articles were compared to articles about similar events from past years, which I believe can skew the results. It would have been a better comparison to instead cover a more popular event and compare stories of similar context from the same year.

Questions:

  • According to this paper there were a few challenges which were presented by the physical environment (mobility, preparation time, and quality assurance). Which of these do you think is the easiest to overcome? How are these problems unique to the hybrid model?
  • The workflow model in this paper describes how roles were assigned to both local and remote workers. Can you think of any possible issues with the way they have the workload broken up? How would you fix these problems?
  • Certain limitations were mentioned with this method of reporting, mostly based on the lack of in-depth training. Can you think of a way in which that very training might interfere with this model of reporting?
  • Recruiting seemed to be an issue in this paper, but if this model were to be widely implemented, that could not remain the case. There are already recruiting platforms, as mentioned within the article, but how could you more actively improve participation in this kind of reporting?
  • Will this model be able to stand the test of time?


The Verification Handbook

Paper:

Chapters 1, 3, 6, and 7 of:

Silverman, C. (Ed.). (2014). The Verification Handbook: A Definitive Guide to Verifying Digital Content for Emergency Coverage. Retrieved from The Verification Handbook.

Discussion Leader: Lawrence Warren

Summary:

"This book is a guide to help everyone gain the skills and knowledge necessary to work together during critical events to separate news from noise, and ultimately to improve the quality of information available in our society, when it matters most."

Chapter 1: When Emergency News Breaks

This section of the book dealt with the perpetuation of rumors whenever a disaster strikes. According to the 8 1/2 Laws of Rumor Spread, it is easy to get a good rumor going when we are already anxious about a situation. This problem existed long before the current world of high-speed networks and social media, and it has become a serious thorn in the side of those who verify information. People sometimes spread false rumors intentionally, to be a part of the hot topic and to draw attention to a social media account or cause, which adds yet another layer of problems for verification. The problem is intensified during actual times of crisis, when lives hang in the balance of having the correct information. One would think the easiest way to verify data is for professionals to be the ones dispersing information, but an eyewitness will often observe a situation long before an actual journalist, and at times a journalist may not have access to the things which are seen first hand. People rely on official sources to provide accurate information in a timely fashion, while those agencies simultaneously rely on ordinary people to help source information and put it in context.

Chapter 3: Verifying User Generated Content

The art of gathering news has been transformed by two significant developments: mobile technology and the ever-developing social network. In 2013 it was reported that over half of phones sold were smartphones, which meant many ordinary people had the capability of recording incidents and taking them to any number of media outlets to be shared with the world. People normally post things to social media instead, as many do not understand the process of handing something off to a news station and feel more comfortable within their own network of chosen friends. It is for this same feeling of security that people normally tune in to social media during breaking news, which is where some are fed fake reports by malicious users who intentionally create fake pages and sites to build a buzz around false facts. There are also people who find content and claim it as their own, which makes it harder to find the original source at times of inspection. Verification is a skill which all professionals must have in order to help prevent fake news from circulating, and it involves four items to check and confirm:

  1. Provenance: Is this the original piece of content?
  2. Source: Who uploaded the content?
  3. Date: When was the content created?
  4. Location: Where was the content created?
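The four checks above can be captured as a simple checklist structure. This sketch and its field names are mine, not the handbook's:

```python
# Illustrative sketch of the four verification checks as a checklist.
from dataclasses import dataclass

@dataclass
class ContentCheck:
    provenance: bool  # Is this the original piece of content?
    source: bool      # Has the uploader been identified?
    date: bool        # Has the creation date been confirmed?
    location: bool    # Has the creation location been confirmed?

    def verified(self) -> bool:
        # All four items must be confirmed before the content is trusted.
        return all((self.provenance, self.source, self.date, self.location))

check = ContentCheck(provenance=True, source=True, date=True, location=False)
print(check.verified())  # location is unconfirmed, so not fully verified
```

The point of the structure is that verification is conjunctive: three confirmed items out of four is still "unverified."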

Chapter 6: Putting the Human Crowd to Work

Crowdsourcing is by no means a new concept and has always been a part of information gathering, but with the rise of social media dynamos, we can now do it on a much larger scale than before. This section of the book lists a few of the best practices for crowdsourced verification.

Chapter 7: Adding the Computer Crowd to the Human Crowd

This section of the book is about the possibility of automating the verification of information. Advanced computing (human computing and machine computing) is on the rise as machine learning becomes more advanced. Human computing has not yet been used for verifying social media information, but with the direction technology is heading, that is not far away. Machine computing could be used to create verification plug-ins which would help determine whether an entry is likely to be credible.
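To make the plug-in idea concrete, here is a toy sketch of the kind of heuristic such a tool might apply. The features and weights are invented for illustration; they are not taken from the book or any real plug-in:

```python
# Toy credibility heuristic: score a post on a few hypothetical signals.
def credibility_score(post: dict) -> float:
    score = 0.0
    if post.get("has_source_link"):
        score += 0.4  # links out to a primary source
    if post.get("account_verified"):
        score += 0.3  # established, identifiable account
    if post.get("location_consistent"):
        score += 0.3  # metadata matches the claimed location
    return score

post = {"has_source_link": True, "account_verified": False,
        "location_consistent": True}
print(round(credibility_score(post), 2))  # -> 0.7
```

A real system would learn such weights from labeled data rather than hard-coding them, which is where the chapter's machine learning angle comes in.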

Reflections:

The book does a good job of trying to be a centralized guideline for information verification in all aspects of the professional world. If all people and agencies used these guidelines, I believe it would remove a great deal of misinformation and save time for any emergency efforts trying to assist. Decreasing the number of fake reports would increase the productivity of people who are actually trying to help.

This collection of ideals and practices rests on the assumption that professionals do not purposely spread false rumors because they are ethically bound not to do so. Yet we have seen very extreme views pushed by several news anchors and show hosts, mostly built on personal opinion, with no backlash or repercussions for what they say. It is my belief that as long as there are people involved in information distribution, there is no real way to stop misinformation from being spread. Ultimately, as long as there is a person with an opinion behind information gathering or distribution, it will be impossible to eradicate fake news reports, or even embellished stories.

Questions:

  • What can we do as individuals to prevent the spread of false reports within our social networks?
  • There is a debate on the effectiveness of algorithms and automated searches against the human element. Will machines ever completely replace humans?
  • Should there be a standard punishment for creating false reports, or are the culprits protected by their First Amendment rights? Are there any exceptions to your position?
  • Verification is a difficult job in which many people work together to get accurate information. Can you imagine a way (other than automation) to streamline information verification?
