Feedback and Timing in a Crowdsourcing Game

Reading time: 5 minutes

📝 Original Info

  • Title: Feedback and Timing in a Crowdsourcing Game
  • ArXiv ID: 1609.02182
  • Date: 2016-09-09
  • Authors: Gili Freedman, Sukdith Punjasthitkul, Max Seidman, and Mary Flanagan

📝 Abstract

The present research examines two problems inherent to the creation of crowdsourcing games: how to give feedback when the right answer is not always known by the game and how much time to give players without sacrificing data quality. Taken together, the present research provides an important first step in considering how to create fun, challenging crowdsourcing games that generate quality data.

💡 Deep Analysis

Figure 1 (image not included)

📄 Full Content

Feedback and Timing in a Crowdsourcing Game

Gili Freedman, Sukdith Punjasthitkul, Max Seidman, and Mary Flanagan
Tiltfactor Lab, 245 BFVAC, HB 6194, Dartmouth College, Hanover, NH 03755
contact@tiltfactor.org

Abstract

The present research examines two problems inherent to the creation of crowdsourcing games: how to give feedback when the right answer is not always known by the game and how much time to give players without sacrificing data quality. Taken together, the present research provides an important first step in considering how to create fun, challenging crowdsourcing games that generate quality data.

Introduction
Crowdsourcing games have the potential to appeal to wide audiences and generate data in a fun way (Law & von Ahn 2009; von Ahn & Dabbish 2008), but there are important challenges inherent to creating games that rely on human computation. One challenge is that of feedback. Games generally involve feedback: players know when they have performed correctly or incorrectly based on how they are rewarded (Hunicke, LeBlanc, & Zubek 2004). However, in a crowdsourcing game, the game creators typically do not know the correct answer ahead of time, and rewards are thus not always perfectly matched to player accuracy. Therefore, the present study examined how varying levels of feedback influenced players' perceptions of the game. A second challenge is that of timing. Games often use time limits to increase the challenge (Hunicke et al. 2004), but crowdsourcing games require the best-quality answers, which may require providing more time. Therefore, we examined how varying time limits influenced both player perceptions of the game and the quality of the data.
Methods

Feedback Study

One hundred fifty Amazon Mechanical Turk (MTurk) workers (64 female; M_age = 34.88, SD_age = 10.98) completed this study and were compensated $1 USD.

In both the Feedback Study and the Timing Study (see below), participants played a sorting game in which they viewed 20 images and had to indicate whether each image contained an element described by a text keyword. After indicating their choice, they received audio feedback (a "bing" for correct vs. a "buzz" for incorrect) and visual feedback (a green vs. red button outline; the score also increased for correct answers) as to whether they were correct. In the Feedback Study, participants were randomly assigned to one of three conditions: 50% correct feedback (i.e., half of the time, if they gave the correct answer they were told it was incorrect, and vice versa), 90% correct feedback, or 100% correct feedback. Participants then completed a questionnaire.

Timing Study

Two hundred fourteen Amazon MTurk workers (101 female, 1 did not report gender; M_age = 34.84, SD_age = 11.19) completed this study and were compensated $1 USD. Players in the Timing Study saw the same set of images in the same sequence as in the Feedback Study and were given only correct feedback. However, the amount of time they had to answer each question varied: participants were randomly assigned to 2, 4, or 10 seconds, or unlimited time. After sorting the images, participants completed the post-game questionnaire.

Results

Feedback Study

The level of correct feedback did not significantly influence participants' performance quality, F(2, 147) = 1.36, p = .26. However, it did influence perceptions of how complicated (F(2, 147) = 15.48, p < .001) and challenging (F(2, 147) = 14.95, p < .001) the game was, as well as whether participants felt they could master the game (F(2, 147) = 5.78, p = .004). Specifically, participants in the 50% correct feedback condition found the game more complicated (M = 4.50, SD = 2.30) than participants in the 90% (M = 2.98, SD = 1.70; t(94) = 3.71, p < .001) or 100% (M = 2.48, SD = 1.58; t(98) = 5.19, p < .001) conditions, with no difference between the 90% and 100% conditions (t(102) = 1.55, p = .12). A similar pattern emerged for how challenging the game was perceived to be: participants in the 50% condition found the game more challenging (M = 4.33, SD = 2.29) than participants in the 90% (M = 2.90, SD = 1.46; t(94) = 3.67, p < .001) or 100% (M = 2.44, SD = 1.50; t(98) = 4.92, p < .001) conditions, with no difference between the 90% and 100% conditions (t(102) = 1.57, p = .12). Finally, participants in the 50% condition felt they were less likely to master the game (M = 5.93, SD = 3.01) than participants in the 90% (M = 7.14, SD = 2.40; t(94) = 2.18, p = .032) or 100% (M = 7.65, SD = 2.26; t(98) = 3.24, p = .002) conditions, with no difference between the 90% and 100% conditions.
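To make the feedback manipulation from the Methods above concrete, here is a minimal Python sketch. It assumes each trial's feedback is flipped independently with probability 1 − accuracy; the paper does not state whether the 50% condition inverted exactly half the trials or each trial at random, so the per-trial flip is an assumption, and the function name is illustrative.

```python
import random

def feedback_signal(answer_correct: bool, accuracy: float) -> bool:
    """Return the feedback shown to the player (True = 'correct' bing,
    False = 'incorrect' buzz), inverted with probability 1 - accuracy."""
    if random.random() < accuracy:
        return answer_correct       # truthful feedback
    return not answer_correct       # inverted feedback

# Random assignment to one of the three feedback conditions:
condition = random.choice([0.5, 0.9, 1.0])
shown = feedback_signal(answer_correct=True, accuracy=condition)
```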
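The timing manipulation, in turn, amounts to assigning each participant a single response window for all 20 images. A minimal sketch, assuming per-participant assignment and using None for the unlimited-time condition (trial_times_out is a hypothetical helper, not from the paper):

```python
import random
from typing import Optional

# Response windows from the Timing Study, in seconds (None = unlimited).
TIME_LIMITS = [2.0, 4.0, 10.0, None]

def trial_times_out(response_time: float, limit: Optional[float]) -> bool:
    """A trial counts as unanswered if the player exceeds the limit."""
    return limit is not None and response_time > limit

# Each participant keeps one randomly assigned limit for all 20 images.
limit = random.choice(TIME_LIMITS)
print(trial_times_out(response_time=3.2, limit=limit))
```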
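The reported analyses are one-way ANOVAs across the three feedback conditions followed by pairwise t-tests. The sketch below, assuming scipy is available, reproduces the shape of that analysis on simulated ratings: the group sizes (46, 50, 54) are inferred from the reported degrees of freedom, and the data are drawn from normal distributions with the paper's means and SDs, not the actual data.

```python
import numpy as np
from scipy import stats

# Simulated "how complicated was the game?" ratings; means and SDs are
# the ones reported above, group sizes follow from the reported dfs.
rng = np.random.default_rng(0)
complicated_50 = rng.normal(4.50, 2.30, 46)    # 50% correct feedback
complicated_90 = rng.normal(2.98, 1.70, 50)    # 90% correct feedback
complicated_100 = rng.normal(2.48, 1.58, 54)   # 100% correct feedback

# Omnibus one-way ANOVA across the three conditions: F(2, 147).
f_stat, p_val = stats.f_oneway(complicated_50, complicated_90, complicated_100)

# Pairwise follow-up, e.g. 50% vs. 90% correct feedback: t(94).
t_stat, p_pair = stats.ttest_ind(complicated_50, complicated_90)

print(f"F(2, 147) = {f_stat:.2f}, p = {p_val:.3f}")
print(f"t(94) = {t_stat:.2f}, p = {p_pair:.3f}")
```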

Reference

This content is AI-processed based on open access ArXiv data.
