Automatic vs. crowdsourced sentiment analysis

Ria Mae Borromeo, Motomichi Toyama

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Citations (Scopus)

Abstract

Due to the amount of manual work required for sentiment analysis of written texts, techniques for automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem through crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still falls short of the accuracy of the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in future work.
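The abstract notes that crowd size affects accuracy. A common baseline for combining multiple workers' labels into one classification (not necessarily the exact aggregation method used by the authors, whose paper does not specify it here) is simple majority voting, sketched below:

```python
from collections import Counter

def majority_label(votes):
    """Return the most frequent sentiment label among crowd workers' votes.

    `votes` is a list of label strings (e.g. "positive", "negative",
    "neutral") collected from several workers for one comment.
    """
    counts = Counter(votes)
    return counts.most_common(1)[0][0]

# Hypothetical example: three workers label one teaching-evaluation comment.
votes = ["positive", "positive", "negative"]
print(majority_label(votes))  # positive
```

Increasing the crowd size adds more votes per comment, which tends to make the majority label more reliable, consistent with the future work the abstract proposes.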

Original language: English
Title of host publication: ACM International Conference Proceeding Series
Publisher: Association for Computing Machinery
Pages: 90-95
Number of pages: 6
Edition: CONFCODENUMBER
DOIs: https://doi.org/10.1145/2790755.2790761
Publication status: Published - 2015 Jul 13
Event: 19th International Database Engineering and Applications Symposium, IDEAS 2015 - Yokohama, Japan
Duration: 2015 Jul 13 - 2015 Jul 15



Keywords

  • Crowdsourcing
  • Sentiment analysis
  • Text tagging

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Borromeo, R. M., & Toyama, M. (2015). Automatic vs. crowdsourced sentiment analysis. In ACM International Conference Proceeding Series (CONFCODENUMBER ed., pp. 90-95). [2790761] Association for Computing Machinery. https://doi.org/10.1145/2790755.2790761

