Automatic vs. crowdsourced sentiment analysis

Ria Mae Borromeo, Motomichi Toyama

Research output: Conference contribution

8 citations (Scopus)

Abstract

Due to the amount of work needed in manual sentiment analysis of written texts, techniques in automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still fails to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.
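The comparison described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual pipeline: the label set, the majority-vote aggregation rule, and the simple accuracy metric are all assumptions, chosen only to show how redundant crowd labels might be combined and scored against a manual gold standard.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one comment's crowd labels into a single sentiment."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted, gold):
    """Fraction of comments whose predicted label matches the gold label."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Hypothetical data: three worker labels per teaching-evaluation comment.
crowd_labels = [
    ["positive", "positive", "negative"],
    ["negative", "negative", "negative"],
    ["neutral", "positive", "positive"],
]
gold = ["positive", "negative", "positive"]       # manual sentiment analysis
automatic = ["positive", "negative", "negative"]  # output of an automatic system

crowd = [majority_vote(ls) for ls in crowd_labels]
print("crowd accuracy:", accuracy(crowd, gold))
print("automatic accuracy:", accuracy(automatic, gold))
```

Under this sketch, the crowd's aggregated labels are scored the same way as the automatic system's output, so the two accuracies are directly comparable.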

Original language: English
Host publication title: ACM International Conference Proceeding Series
Publisher: Association for Computing Machinery
Pages: 90-95
Number of pages: 6
Edition: CONFCODENUMBER
DOI: 10.1145/2790755.2790761
Publication status: Published - 13 Jul 2015
Event: 19th International Database Engineering and Applications Symposium, IDEAS 2015 - Yokohama, Japan
Duration: 13 Jul 2015 → 15 Jul 2015

Other

Other: 19th International Database Engineering and Applications Symposium, IDEAS 2015
Country: Japan
City: Yokohama
Period: 15/7/13 → 15/7/15

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Borromeo, R. M., & Toyama, M. (2015). Automatic vs. crowdsourced sentiment analysis. In ACM International Conference Proceeding Series (CONFCODENUMBER ed., pp. 90-95). [2790761] Association for Computing Machinery. https://doi.org/10.1145/2790755.2790761

Automatic vs. crowdsourced sentiment analysis. / Borromeo, Ria Mae; Toyama, Motomichi.

ACM International Conference Proceeding Series. CONFCODENUMBER ed. Association for Computing Machinery, 2015. p. 90-95. 2790761.

Research output: Conference contribution

Borromeo, RM & Toyama, M 2015, Automatic vs. crowdsourced sentiment analysis. in ACM International Conference Proceeding Series. CONFCODENUMBER edn, 2790761, Association for Computing Machinery, pp. 90-95, 19th International Database Engineering and Applications Symposium, IDEAS 2015, Yokohama, Japan, 15/7/13. https://doi.org/10.1145/2790755.2790761
Borromeo RM, Toyama M. Automatic vs. crowdsourced sentiment analysis. In: ACM International Conference Proceeding Series. CONFCODENUMBER ed. Association for Computing Machinery. 2015. p. 90-95. 2790761. https://doi.org/10.1145/2790755.2790761
Borromeo, Ria Mae ; Toyama, Motomichi. / Automatic vs. crowdsourced sentiment analysis. ACM International Conference Proceeding Series. CONFCODENUMBER ed. Association for Computing Machinery, 2015. pp. 90-95
@inproceedings{e9d21c44f31749f7ac8ac558b9ff0983,
title = "Automatic vs. crowdsourced sentiment analysis",
abstract = "Due to the amount of work needed in manual sentiment analysis of written texts, techniques in automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still fails to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.",
keywords = "Crowdsourcing, Sentiment analysis, Text tagging",
author = "Borromeo, {Ria Mae} and Motomichi Toyama",
year = "2015",
month = "7",
day = "13",
doi = "10.1145/2790755.2790761",
language = "English",
pages = "90--95",
booktitle = "ACM International Conference Proceeding Series",
publisher = "Association for Computing Machinery",
edition = "CONFCODENUMBER",

}

TY - GEN

T1 - Automatic vs. crowdsourced sentiment analysis

AU - Borromeo, Ria Mae

AU - Toyama, Motomichi

PY - 2015/7/13

Y1 - 2015/7/13

N2 - Due to the amount of work needed in manual sentiment analysis of written texts, techniques in automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still fails to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.

AB - Due to the amount of work needed in manual sentiment analysis of written texts, techniques in automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still fails to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.

KW - Crowdsourcing

KW - Sentiment analysis

KW - Text tagging

UR - http://www.scopus.com/inward/record.url?scp=85007424310&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85007424310&partnerID=8YFLogxK

U2 - 10.1145/2790755.2790761

DO - 10.1145/2790755.2790761

M3 - Conference contribution

AN - SCOPUS:85007424310

SP - 90

EP - 95

BT - ACM International Conference Proceeding Series

PB - Association for Computing Machinery

ER -