Combining crowdsourcing and learning to improve engagement and performance.

Dontcheva, Mira, et al. “Combining crowdsourcing and learning to improve engagement and performance.” Proceedings of the 32nd annual ACM conference on Human factors in computing systems. ACM, 2014.

Discussion Leader (Pro): Sanchit

Summary

This paper presented a crowdsourcing platform called LevelUp for Photoshop. The tool helps workers learn Photoshop skills and tools through a series of tutorials and then lets them apply those skills to real-world images from several non-profit organizations that need photos retouched before publishing them.

This sort of crowdsourcing platform is different in that it aims to complete creative tasks through the crowd while also letting the crowd learn a valuable skill they can apply to fields and scenarios outside the platform. The platform starts every user on a series of highly interactive, step-by-step tutorials. These tutorials are implemented as an extension for Adobe Photoshop, which allows the extension to monitor what tools and actions users have taken. This makes for a tutorial system that is easy to use and learn from, because every action has some form of feedback associated with it. The one thing the tool cannot do is judge the quality of the image transformations. That task is instead delegated to Amazon MTurk workers, who look at a before/after pair of images to determine the quality and usefulness of the editing job done by a crowd worker in LevelUp.
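The paper doesn't spell out the extension's internals, but the step-checking behavior described above is easy to sketch. Below is a minimal sketch in Python, assuming a hypothetical stream of editor events; the step prompts and event names are illustrative inventions, not Adobe's or LevelUp's actual API:

```python
from dataclasses import dataclass

@dataclass
class TutorialStep:
    prompt: str          # instruction shown to the learner
    expected_event: str  # editor event this step waits for (illustrative)

# Hypothetical tutorial steps, not LevelUp's real ones.
STEPS = [
    TutorialStep("Select the Spot Healing Brush", "spot_healing_brush"),
    TutorialStep("Paint over the blemish", "brush_stroke"),
    TutorialStep("Open Levels and brighten the image", "levels_adjustment"),
]

def run_tutorial(events):
    """Advance through the steps as matching editor events arrive,
    giving feedback on every action, as the paper describes."""
    i = 0
    for event in events:
        if i >= len(STEPS):
            break
        if event == STEPS[i].expected_event:
            print(f"Step {i + 1} complete: {STEPS[i].prompt}")
            i += 1
        else:
            print(f"Not quite -- try: {STEPS[i].prompt}")
    return i == len(STEPS)

# Simulated event stream standing in for the extension's editor hooks.
run_tutorial(["zoom", "spot_healing_brush", "brush_stroke", "levels_adjustment"])
```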

This paper presented a very thorough and detailed evaluation of the project. It involved three deployments, with each contribution of the approach added to the plugin in turn for user testing. The first deployment included only the interactive tutorial; the authors measured the number of levels players completed and gathered helpful feedback and opinions about the tutorial system. The second deployment added the challenge mode and evaluated the results with logs, MTurk worker quality checks, and expert quality examination. The photo edits were scored on a 1–3 point scale for usefulness and novelty. The last deployment added real images from non-profit organizations, testing whether different organizations have different effects on a user's editing motivation and skills. The results weren't as spectacular, but they were still positive in that the skills users learned proved helpful.
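To make the 1–3 point system concrete, ratings like these are typically aggregated per edit. Here is a minimal sketch of one plausible aggregation; the three-rater redundancy and the acceptance threshold are my assumptions, not numbers from the paper:

```python
from statistics import mean

def score_edit(usefulness, novelty, threshold=2.0):
    """Average independent 1-3 ratings per dimension and flag the edit
    as acceptable if both dimensions clear the (assumed) threshold."""
    u, n = mean(usefulness), mean(novelty)
    return {"usefulness": u, "novelty": n,
            "acceptable": u >= threshold and n >= threshold}

# Three hypothetical raters judging one before/after pair.
print(score_edit(usefulness=[3, 2, 3], novelty=[2, 2, 1]))
# -> {'usefulness': 2.67, 'novelty': 1.67, 'acceptable': False} (approx.)
```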

Reflection

Usually crowdsourcing involves menial tasks that have little to no value outside the platform, but the authors of this paper designed a unique and impressive methodology in which users both learn a new and useful skill like photo editing and then apply that skill to complete existing real-world photo editing tasks. They took advantage of people's need to learn Photoshop and, while teaching them, also accomplished real-life photo editing tasks, killing two birds with one stone. Crowdsourcing doesn't necessarily have to involve monotonous tasks, nor do crowd workers have to be paid monetarily. This is a creative approach where the incentive is the teaching and the photo editing skills developed, along with achievements and badges for completing specific tasks. These may not be as valuable as money, but they are enough incentive to garner interest and leverage the newly learned skills to accomplish an existing task.

The authors conducted extremely detailed surveys and collected feedback from a pool of approximately ten thousand workers over a period of 3 years. This dedication to evaluation, and the results of the study, demonstrate the usefulness of this type of crowdsourcing platform. It shows that not all crowd work has to be menial and that users can actually learn a new skill and apply it outside of crowd platforms. However, I do admit that the results were not presented in an easy-to-follow way. Graphs, charts, or tables would have made them easier to digest than interpreting the numerous percentages buried in the paragraphs.

By having both MTurk workers and experts judge the photo edits, the authors capture an average user's perception of quality and usefulness as well as how a professional judges those same qualities. That, in my opinion, is a pretty strong group of evaluators for such a project, especially considering the massive scale at which people volunteered and completed these evaluation tasks on MTurk.

Lastly, I was really impressed by the Photoshop extension the authors developed. It looked clean, sleek, and easy to learn from, since it doesn't intimidate users the way Photoshop's full palette of tools can. That approachability can help workers retain the skills they learn and apply them to future projects. I think photo editing is a fabulous skill for anyone to have: you can edit photos to focus on or highlight different areas of a picture, or to remove unwanted noise and extraneous subjects from an image. With a straightforward, step-by-step, interactive tool such as LevelUp, one can really grow their Photoshop skillset by a huge margin.

Questions

  • How many of you have edited pictures on Photoshop and are “decent” at it? How many would like to increase your skills and try out a learning tool like this?
  • A great tutorial is a necessity for such a concept to work, so that people can both learn skills and apply them elsewhere without their hands being held. What features do you think such tutorials should have to make them successful?


Understanding the Role of Community in Crowdfunding Work

Hui, Julie S., et al. “Understanding the Role of Community in Crowdfunding Work.” Proceedings of the 17th ACM Conference on Computer supported cooperative work & social computing. ACM, 2014.

Discussion Leader: Sanchit

Crowdsourcing example: Ushahidi – Website

Summary:

This paper discusses several popular crowdfunding platforms and the common interactions that project creators have with the crowd in order to get properly funded and supported. The authors describe crowdfunding as a practice designed to solicit financial support from a distributed network of several hundred to several thousand supporters on the internet. The practice is a type of entrepreneurial work in that both require “discovery, evaluation, and exploitation of opportunities to introduce novel products, services, and organizations”. With crowdfunding, a niche has to be discovered and evaluated so the target crowd can be convinced to provide financial support in return for a novel product or service that benefits both the supporters and the project initiator.

Crowdfunding in recent times is almost entirely dependent on online communities like Facebook, Twitter, and Reddit. The authors stress the importance of a large online presence because word of mouth travels much faster over the internet than through any other medium. By personally reaching out to people on social media, project creators let a trustworthy relationship develop between themselves and the crowd, which can lead to more people funding the project.

The authors conducted a survey of 47 crowdfunding project creators spanning a variety of project ideas and backgrounds. Some creators ended up with successful crowdfunding projects and made a good enough margin to continue developing and distributing their proposed product. Others weren't as lucky, often because they lacked a strong online presence, which turns out to be one of the most important factors in a successful crowdfunding project.

According to the authors, a crowdfunding project involves five tasks over its lifespan: (1) preparing the initial campaign design and ideas, (2) testing the campaign material, (3) publicizing the project to the public through social media, (4) following through with project promises and goals, and (5) giving back to the crowdfunding community. It turns out that coming up with a novel idea or product is a very small portion of the entire crowdfunding story. The process of designing an appealing campaign was daunting for several creators because they had never worked with video editing or design software before; ideas for design and promotion mostly came from inspiration blogs and even paid mentors. Campaign ideas were tested through an internal network of supporters, and some creators skipped this step entirely, instead gathering feedback once they eventually had supporters. Publicity depended largely on whether or not the product got picked up by a popular news source or social media account. If creators got lucky, they would have enough funding to support the project and deliver the product to their supporters. Even this task was difficult for the majority of creators, who were working alone and didn't have the resources to bring on additional people for assistance. Lastly, almost all creators wished to give back to the crowdfunding community, whether by funding projects their supporters create in the future or by advising future crowdfunding creators.


Reflection:

Overall, I thought the paper was a fairly straightforward summary and overview of what happens behind the scenes in a crowdfunding project. I have personally seen several Kickstarter campaigns for cool and nifty gadgets, primarily through Reddit or Facebook. This suggests that unless someone actively looks for crowdfunding projects, the majority are stumbled upon through social media. Popularity plays a huge part in the success of a crowdfunding project, and it makes perfect sense that it does: a product that is popular with a majority of people gets funded quicker, so crafting a convincing campaign around the product is just as important as the product itself. These social engineering tasks aren't everyone's cup of tea, though. I can relate to the authors' observation that artistic people tend to have a stronger fundraising background than scientific researchers, which lets them create a much more convincing campaign and take a more forward approach to recruiting support on social media. Researchers aren't really trained to convince peers that their research is important; their work is supposed to speak for itself.

While reading through the paper I also noticed how much additional baggage one has to take on to get a project funded. Creating videos, posters, graphics, t-shirts, and gifts, and eventually (hopefully) delivering the final product to customers, is a very demanding process. It's no wonder some of these people spend part-time-job hours just maintaining their online presence. I personally don't see this being used as a primary source of income because there is far too much overhead and risk involved to expect any reasonable payback, especially when most of the funded money goes toward creating and delivering the product and then giving back to other community projects. With crowdsourcing platforms such as Amazon MTurk, there is at least a guarantee that some amount of money will be made, no matter how small; if you play the game smart, it's at the very least easy beer money. With crowdfunding, a project gaining enormous traction, let alone reaching its goal, is a big gamble that depends on many variables beyond pure, objective work-skill.

The tools and websites designed to aid crowdfunding campaigns are definitely helpful and, honestly, expected to exist at this point. Whenever a crowd-based technology appears, Reddit seems to immediately form a subreddit dedicated to it, with constant chatter, suggestions, and ideas for success. Similarly, people who want to help themselves and others build tools that make project development easier and stress-free. These tools and forums are great places for general advice, but I agree with the authors that such advice is not personal. The idea of an MTurk-based feedback system for crowdfunding campaigns is a brilliant and easy-to-implement one: just link the project page and ask for feedback at a higher-than-average pay rate, and you will get plenty of detailed suggestions for convincing future supporters to fund the project.

Overall, the idea of crowdfunding is great, but I wish the paper touched on the fees that Kickstarter and some other crowdfunding platforms take to provide this service to people. It is a cost that people should consider when and if deciding to start a crowdfunding project no matter how big or small.

Discussion:

  1. Have you guys contributed to crowdfunding projects? Or ever created a project? Any interesting project ideas that you found?
  2. Do you agree with the occupational gap the authors hinted at, i.e., that artistic project creators have an easier time crowdfunding than scientific project creators?
  3. Thoughts on offering incentives for donating or funding a larger amount than other people? Is it a good idea, or will people be skeptical of the project's success regardless and still donate the minimum amount?
  4. Would you use Kickstarter to donate to poverty- or disaster-stricken areas rather than donate to a reputable charity? There are several donation-based projects, and I wonder why people would trust those more than charities.


VizWiz: Nearly Real-time Answers to Visual Questions

Bigham, Jeffrey P., et al. “VizWiz: Nearly Real-time Answers to Visual Questions.” Proceedings of the 23rd annual ACM symposium on User interface software and technology. ACM, 2010.

Discussion Leader: Sanchit

Crowdsourcing Example: Couch Surfing

YouTube video for a quick overview: VizWiz Tutorial

Summary

VizWiz is a mobile application designed to answer visual questions for blind people in real time by taking advantage of existing crowdsourcing technologies such as Amazon's Mechanical Turk. Existing software and hardware that help blind people solve visual problems are either too costly or too cumbersome to use: OCR is not advanced or reliable enough to completely solve vision-based problems, and existing text-to-speech software only solves the single issue of reading text back to the blind user. The application interface takes advantage of Apple's accessibility service VoiceOver, which lets the operating system speak to the user and describe the currently selected option or view on the screen. Touch-based gestures are used to navigate the application so that users can easily take a picture, ask a question, and receive answers from remote workers in real time.

The authors also present an abstraction layer on top of the Mechanical Turk API called quikTurkit. It allows requesters to create their own website on which Mechanical Turk workers are recruited to answer questions posed by users of the VizWiz application. A constant stream of HITs is posted on Mechanical Turk so that a pool of workers is available to work as soon as a new question arrives. While the user is taking a picture and recording their question, VizWiz sends a notification to quikTurkit, which starts recruiting workers early and thereby reduces the overall latency in waiting for an answer.
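The paper doesn't reproduce quikTurkit's code, but the recruit-ahead loop is simple to sketch. Here is a minimal Python sketch, where `post_hit`, `active_workers`, and `question_pending` are hypothetical callbacks standing in for the MTurk API and the answer website, and the pool size and polling interval are illustrative numbers:

```python
import time

TARGET_POOL = 5     # workers we want on standby (illustrative)
POLL_SECONDS = 10   # how often to top up the pool (illustrative)

def maintain_pool(active_workers, post_hit, question_pending):
    """Keep enough open HITs that workers are already waiting on the
    answer site when a VizWiz question arrives, cutting answer latency."""
    while True:
        deficit = TARGET_POOL - active_workers()
        # Recruit harder the moment the phone signals that a question
        # is being recorded, mirroring quikTurkit's early notification.
        if question_pending():
            deficit += TARGET_POOL
        for _ in range(max(deficit, 0)):
            post_hit()  # hypothetical wrapper that posts one MTurk HIT
        time.sleep(POLL_SECONDS)
```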

VizWiz also featured a second version that detected blurry or dark images and asked users to retake them in order to get more accurate results. The authors also developed a use case, VizWiz:LocateIt, which helps blind users locate an object in 3D space. The user takes a picture of the area where the desired object sits and asks for the location of the specific object. Crowd workers highlight the object, and the application combines the camera properties, the user's location, and the highlighted region to determine how much the user should turn and how far they should walk to reach the object's general vicinity. The post-study surveys generated many favorable responses, which shows that this technology is in real demand among blind users and may set up future research to automate the answering process without human interaction.
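The paper's description suggests the turn direction can be recovered from the photo itself. Under a simple pinhole-camera assumption, the geometry looks roughly like this; the field-of-view value and the function name are mine, not from the paper:

```python
import math

def turn_angle_deg(highlight_x, image_width, horizontal_fov_deg=60.0):
    """Estimate how far the user should turn (degrees, positive = right)
    from the horizontal pixel position of the crowd-highlighted object,
    assuming an ideal pinhole camera with the given (assumed) FOV."""
    # Focal length in pixels for a pinhole camera of this field of view.
    f = (image_width / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
    offset = highlight_x - image_width / 2
    return math.degrees(math.atan2(offset, f))

# Object highlighted 300 px right of center in a 1280 px-wide photo.
print(f"turn about {turn_angle_deg(640 + 300, 1280):.1f} degrees right")
# -> turn about 15.1 degrees right
```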

Reflections

I thought the concept in itself was brilliant. It is a problem not many people think about in their daily lives, but when you sit down and really ponder how trivial tasks, such as finding the right object in a space, can be nearly impossible for blind people, you realize the potential of such an application. The application design was very solid. Apple designed the VoiceOver API for vision-impaired people in the first place, so using it in such an application was the best choice. Employing large gestures for UI navigation is also smart because it can be very difficult, or impossible, for vision-impaired people to tap a specific button or option on a touch-based screen or device.

QuikTurkit was, in my opinion, a good foundation for the backend of this application. It could definitely be improved by not placing so much focus on speech recognition and by not bombarding Mechanical Turk with too many HITs. Finding the right balance between the number of active workers in the pool and the number of HITs posted would benefit both the system load and the cost the user incurs in the long run.
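To make that balance concrete, here is a back-of-the-envelope sketch; every number in it is an assumption for illustration, not a figure from the paper:

```python
def hourly_pool_cost(pool_size, reward_per_hit, hits_per_worker_hour):
    """Rough hourly cost of keeping `pool_size` workers on standby when
    each one churns through `hits_per_worker_hour` paid HITs (all assumed)."""
    return pool_size * reward_per_hit * hits_per_worker_hour

# e.g. 5 standby workers at $0.05 per HIT, 30 HITs per worker-hour:
print(f"${hourly_pool_cost(5, 0.05, 30):.2f} per hour")  # -> $7.50 per hour
```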

A minor detail I noticed: the study initially reported 11 blind users, 5 of them female, but later in the paper there were 3 females. Probably a typo, but thoughts? Speaking of the study, I think the heuristics made a lot of sense, and the survey results were generally favorable for the application. A latency of 2–3 minutes on average is not too bad considering the situation a vision-impaired person is in; any additional information or answered question can only help. I honestly didn't see the point of making speech recognition a focus of the product. If workers can simply listen to the question, that should be sufficient to answer it; there is no need to introduce errors from failed speech recognition attempts.

In my opinion, VizWiz:LocateIt was too complicated a system, with too many external variables to worry about, for a visually impaired user to reliably find an object. The location detection and mapping are based only on the picture taken by the user, which more often than not is imperfect. Although the authors have several algorithms and techniques to combat ineffective pictures, I still think there are potential hazards and accidents waiting to happen based on the directional cues the application provides. I'm not entirely convinced by this use case.

Overall it was a solid concept and execution in terms of the mobile application. It looks like the software is public and is being used by over 5000 blind people right now, so that is pretty impressive.

Questions:

  1. One aspect of quikTurkit that confused me was who actually deployed the server or made the website for Mechanical Turk workers to use this service. Was it the VizWiz team who created the server, or can requesters build their own websites with this service as well? And who would the requesters be? Blind people?
  2. Besides human compassion and empathy, what is stopping workers from giving wrong answers? Also, who determines whether an answer was correct or not?
  3. If a handheld barcode scanner works fairly well for locating a specific product in an area, why couldn't the authors just use a barcode-scanning API on the iPhone along with the existing VoiceOver technology to help locate a specific product? Do you foresee any problems with this approach?

