Abstract
As the use of crowdsourcing spreads, the need to ensure the quality of crowdsourced work is magnified. While quality control in crowdsourcing has been widely studied, established mechanisms may still be improved to take into account other factors that affect quality. However, since crowdsourcing relies on humans, it is difficult to identify and consider all factors affecting quality. In this study, we conduct an initial investigation into the effect of crowd type and task complexity on work quality by crowdsourcing a simple and a more complex version of a data extraction task to paid and unpaid crowds. We then measure the quality of the results in terms of their similarity to a gold standard data set. Our experiments show that the unpaid crowd produces results of high quality regardless of the type of task, while the paid crowd yields better results on simple tasks. We intend to extend our work to integrate existing quality control mechanisms and to perform more experiments with more varied crowd members.
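The abstract does not specify how similarity to the gold standard is computed; the sketch below illustrates one plausible approach for a data extraction task, assuming a field-level exact-match score after simple normalization. The function names, field names, and metric choice are hypothetical and are not taken from the paper.

```python
# Minimal sketch: scoring crowdsourced data extraction against a gold standard.
# Assumes each record is a dict of extracted fields; the metric (field-level
# exact match after normalization) is an illustrative assumption, not
# necessarily the measure used in the paper.

def normalize(value: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences are ignored."""
    return " ".join(value.lower().split())

def record_similarity(submitted: dict, gold: dict) -> float:
    """Fraction of gold-standard fields that the worker extracted correctly."""
    if not gold:
        return 0.0
    matches = sum(
        1 for field, gold_value in gold.items()
        if normalize(submitted.get(field, "")) == normalize(gold_value)
    )
    return matches / len(gold)

def crowd_quality(submissions: list[dict], gold_records: list[dict]) -> float:
    """Average per-record similarity, used as the crowd's overall quality score."""
    scores = [record_similarity(s, g) for s, g in zip(submissions, gold_records)]
    return sum(scores) / len(scores) if scores else 0.0

# Example: one worker's extractions for two records vs. the gold standard.
gold = [{"title": "IDEAS 2016", "city": "Montreal"}, {"title": "VLDB", "city": "New Delhi"}]
submitted = [{"title": "ideas 2016", "city": "Montreal"}, {"title": "VLDB", "city": "Delhi"}]
print(crowd_quality(submitted, gold))  # 0.75
```

Comparing such scores across the paid and unpaid crowds, and across the simple and complex task versions, is the kind of analysis the abstract describes.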
Original language | English |
---|---|
Title of host publication | Proceedings of the 20th International Database Engineering and Applications Symposium, IDEAS 2016 |
Publisher | Association for Computing Machinery |
Pages | 70-76 |
Number of pages | 7 |
Volume | 11-13-July-2016 |
ISBN (Electronic) | 9781450341189 |
DOIs | |
Publication status | Published - 2016 Jul 11 |
Event | 20th International Database Engineering and Applications Symposium, IDEAS 2016 - Montreal, Canada. Duration: 2016 Jul 11 → 2016 Jul 13 |
Other
Other | 20th International Database Engineering and Applications Symposium, IDEAS 2016 |
---|---|
Country/Territory | Canada |
City | Montreal |
Period | 16/7/11 → 16/7/13 |
Keywords
- Crowdsourcing
- Task complexity
- Text extraction
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Networks and Communications
- Computer Vision and Pattern Recognition
- Software