Show Me the Money! An Analysis of Project Updates during Crowdfunding Campaigns

Xu, A., Yang, X., Rao, H., Huang, S.W., Fu, W.-T., Bailey, B.P.: Show Me the Money! An Analysis of Project Updates during Crowdfunding Campaigns. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2014), Toronto, Canada (2014)

Discussion Leader: Mauricio

Summary

This paper presents an analysis of project updates for crowdfunding campaigns and the role they play in a campaign's outcome. Project updates were originally intended as a form of communication from project creators to keep funders aware of a campaign's progress. The authors analyzed the content and usage patterns of updates on Kickstarter campaigns and developed a taxonomy of seven types of updates. Furthermore, they found that specific uses of updates had stronger associations with campaign success than the project's actual description. They conclude the paper by discussing design implications for designers of crowdfunding systems so that these systems can better support the use of updates.

The authors sampled 8,529 campaigns and found that the chance of success of a project without updates was 32.6%, versus 58.7% when the project had updates. By analyzing how creators use updates, they identified the following themes: Progress Report, Social Promotion, New Content, New Reward, Answer Questions, Reminders, and Appreciation. In their study, they collected 21,234 publicly available updates and assigned themes to each of them.

They also divided campaign duration into three phases: initial, middle, and final; each update was assigned to one of them. Taking into account the theme of each update and when it was posted, they arrived at several notable findings. Reminder updates had the strongest association with campaign success, and Answer Questions updates had the weakest. New Reward updates were more likely to increase the chance of success than New Content updates. Both kinds of updates indicate that the creators have revised the project in some way, which suggests that offering new rewards is more effective than changing the project itself. Looking into the representation of the project, they found that the representation of updates is more predictive of success than the representation of the project page. In terms of timing, they found that a high number of Social Promotion updates in the initial phase, of Progress Report updates in the middle phase, and of New Reward updates in the final phase are all positively correlated with success.
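To make the kind of association analysis described above concrete, here is a minimal sketch of how one could relate per-phase update counts to campaign success. It assumes a hypothetical campaigns.csv with one row per campaign, a binary succeeded column, and per-phase counts of each update theme; the file and column names are mine, not the authors', and the authors' actual statistical models may differ.

    # Minimal sketch of a phase/theme association analysis (illustrative only).
    # Assumes a hypothetical campaigns.csv with one row per campaign, a binary
    # `succeeded` column, and per-phase counts of each update theme, e.g.
    # `initial_social_promotion`, `middle_progress_report`, `final_new_reward`.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("campaigns.csv")

    predictors = [
        "initial_social_promotion",
        "middle_progress_report",
        "final_new_reward",
    ]

    X = sm.add_constant(df[predictors])   # intercept + per-phase update counts
    y = df["succeeded"]                   # 1 = funded, 0 = not funded

    # Logistic regression: positive coefficients indicate update themes/phases
    # associated with a higher chance of campaign success.
    model = sm.Logit(y, X).fit()
    print(model.summary())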

Finally, the authors discuss design implications for crowdfunding systems to better support campaigns. They suggest that these systems should provide templates for each of the available types of updates. They also mention that these platforms should offer guidance to project creators so that they can craft better updates, e.g., provide update guidelines, allow creators to learn from prior successful examples, help creators develop strategies for advertising their campaigns, and guide creators as to when to post which type of update.

Reflections

This paper offers a very interesting take on crowdfunding campaigns. Though prior work shows that project representation is important, the authors point out that more emphasis should be placed on the representation, themes, and timing of updates for campaign success. I find this very interesting because crowdfunding platforms don’t put much emphasis on updates. Kickstarter, for example, lists creating a project video on the project page as its top rule for success. Although the authors found that the number of updates was higher in successful campaigns than in unsuccessful ones, I do wonder whether there can be “too many” updates and whether this can lead to campaign failure. It would be interesting to see if a very high number of updates can become annoying to funders to the point of causing a negative correlation with campaign success, and if so, which types of updates annoy people the most. I imagine it would be very difficult to design an experiment around this, since researchers would have to complete the proposed project if they got the funding.

One of the most interesting findings the authors arrived at is the difference between posting New Reward and New Content updates. New Content updates announce changes to the project itself; this can be viewed as improving the product to attract customers. New Reward updates announce new rewards to attract funders; this can be viewed as offering discounts to attract customers. When the authors first posed the question of which one would be more effective (before presenting their results), I thought that New Content updates would be more effective, as I saw New Rewards as a form of desperation by project creators trying to reach their funding goal, which would signal that the project is not going well. But I was proven wrong, as New Reward updates were shown to be more likely to increase the chance of success. This seems to indicate that, since people have already pledged to the content of the project, they are not really interested in more new content, but in new rewards. However, according to the authors’ findings, there were more New Content than New Reward updates. Project creators, therefore, should focus more on revising reward levels to improve their chances of success.

In addition, for New Reward updates, a high number of updates in the final phase was positively correlated with campaign success. One reason could be that the initial reward offered served as a reference point, and additional rewards change funders’ perceptions and affect their pledge decisions. I think this is related to the “anchoring effect,” which refers to the human tendency to rely heavily on the first piece of information when making subsequent judgments.

I also like the design implications they propose, but I wonder whether they would become too much of a burden for crowdfunding platforms to implement. They could also become an annoyance for project creators: being prompted about when to post which kind of update, or being given guidelines about what to say on social media and when, could feel too intrusive.

Questions

  • Do you think that a crowdfunding campaign can provide “too many” updates? If so, what type of updates should creators avoid posting in high numbers and high frequency?
  • If you were to start a crowdfunding campaign to fund the project related to your research or your project for this class, what types of updates and rewards would you give your funders and potential funders?
  • From the perspective of crowdfunding platforms such as Kickstarter, do you think it is worth it to implement all the design implications mentioned in this paper?
  • If you have contributed to a crowdfunding campaign in the past, what were the reasons that you contributed? And did the updates the creators provided influence you one way or the other?

Read More

CrowdScape: interactively visualizing user behavior and output

Rzeszotarski, J., Kittur, A.: CrowdScape: Interactively Visualizing User Behavior and Output. In: Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST), ACM (2012)

Discussion Leader: Mauricio

Summary

This paper presents CrowdScape, a system that supports the evaluation of complex crowd work through mixed-initiative machine learning and interactive visualization. The system aims to address the quality-control challenges that arise on crowdsourcing platforms. Researchers have previously developed quality-control approaches based on worker outputs or on worker behavior, but each of these on its own has limitations for evaluating complex work. Subjective tasks such as writing or drawing may have no single “right” answer, and no two answers may be identical. Regarding behavior, two workers might complete a task in different ways yet both provide valid output. CrowdScape combines worker behavior with worker output information in its visualizations to address these limitations in the evaluation of complex crowd work.

CrowdScape’s features allow users to form hypotheses about their crowd, test them, and refine their selections based on machine learning and visual feedback. Its interface allows interactive exploration of worker results and supports the development of insights about worker performance. CrowdScape is built on top of Amazon Mechanical Turk and captures data both from the Mechanical Turk API, to obtain the products of work, and from Rzeszotarski and Kittur’s Task Fingerprinting system, to capture worker behavioral traces (such as time spent on tasks, key presses, clicks, browser focus shifts, and scrolling). It uses these two information sources to create an interactive visualization of workers.

To illustrate the different use cases of the system, the authors posted four varieties of tasks on Mechanical Turk and solicited submissions: translating text from Japanese to English, picking a color from an HSV color picker and writing its name, describing a favorite place, and tagging science tutorial videos. At the end of the paper, they conclude that linking behavioral information about workers with data about their output is beneficial for reinforcing or contradicting our initial conception of the cognitive process workers use when completing tasks, and for developing and testing our own mental model of the behavior of workers who produce good (or bad) outputs.
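As a rough illustration of the core idea of linking behavioral traces with output quality (this is not CrowdScape’s actual implementation), one could plot simple behavioral features against a quality rating for each submission. The sketch below assumes a hypothetical workers.csv with per-submission columns for time on task, key presses, and a requester-assigned output rating; all file and column names are illustrative.

    # Rough sketch of linking behavioral traces with output quality
    # (illustrative only, not CrowdScape's implementation). Assumes a
    # hypothetical workers.csv with one row per worker submission.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("workers.csv")

    fig, ax = plt.subplots()
    scatter = ax.scatter(
        df["time_on_task_sec"],   # behavioral trace: time spent on the task
        df["key_presses"],        # behavioral trace: typing activity
        c=df["output_rating"],    # output quality: requester's rating
        cmap="viridis",
    )
    ax.set_xlabel("Time on task (s)")
    ax.set_ylabel("Key presses")
    fig.colorbar(scatter, label="Output rating")

    # Submissions with little time spent, little typing, and low ratings tend
    # to cluster together, supporting hypotheses about which behaviors go with
    # good or bad output.
    plt.show()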

Reflections

I think CrowdScape presents a very interesting hybrid approach to addressing low-quality crowdsourcing work, which according to the authors comprises about one third of all submissions. When I started reading the paper, I got the impression that logging crowd workers’ behavioral traces while they complete tasks would be a somewhat intrusive way to address this issue. But the explanation they give as to why this approach is more appropriate for assessing the quality of creative tasks (such as writing) than post-hoc output evaluations (such as gold standard questions) was really convincing.

I liked how self-critical they were about CrowdScape’s limitations, such as its requirement that workers have JavaScript enabled, or the cases in which behavioral traces aren’t indicative of the work done, such as when users complete a task in a text editor and then paste the result into Mechanical Turk. I would like to see how further research addresses these issues.

I found it curious that in the first task (translation), even though the workers were told that their behavior would be captured, they still went ahead and used machine translators. I would have liked to see what wording the authors used in their tasks when giving this warning, and also when describing compensation. For instance, if the authors told workers that their actions would be logged but that they would be paid regardless, then the workers had no incentive to do the translation correctly, which might be why the majority (all but one) ended up using Google Translate or another translator for the task. On the other hand, if the authors simply told workers that their actions were going to be recorded, I would imagine that would lead workers to think that not only their output but also their behavior would be evaluated, which could push them to do a better job. The wording used when telling workers that their behavioral traces are being logged is, I think, important, because it might skew the results one way or the other.

Questions

  • What wording would you use to tell the workers that their behavioral traces would be captured when completing a task?
  • What do you think about looking at a worker’s behavior to determine the quality of their work? Do you think it might be ineffective or intrusive in some cases?
  • The authors combine worker behavior and worker output to control quality. What other measure(s) could they have integrated in CrowdScape?
  • How can CrowdScape address the issue of cases in which behavioral traces aren’t indicative of the work done (e.g. writing the task’s text in another text editor)?
