Evaluating Commonsense Knowledge with a Computer Game - Human-Computer Interaction – INTERACT 2011
Conference Paper, 2011

Evaluating Commonsense Knowledge with a Computer Game

Juan F. Mancilla-Caceres (Author)
Eyal Amir (Author)

Abstract

Collecting commonsense knowledge from freely available text can reduce the cost and effort of creating large knowledge bases. For the acquired knowledge to be useful, we must ensure that it is correct, and that it carries information about its relevance and about the context in which it can be considered commonsense. In this paper, we design and evaluate an online game that uses input from players to classify text extracted from the web as commonsense knowledge, domain-specific knowledge, or nonsense. A continuous scale from nonsense to commonsense is defined and later used during evaluation of the data to identify which knowledge is reliable and which needs further qualification. Compared to similar knowledge acquisition systems, our game performs better with respect to coverage, redundancy, and reliability of the commonsense knowledge acquired.
Main file: 978-3-642-23774-4_28_Chapter.pdf (283.28 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01590550, version 1 (19-09-2017)

Cite

Juan F. Mancilla-Caceres, Eyal Amir. Evaluating Commonsense Knowledge with a Computer Game. 13th International Conference on Human-Computer Interaction (INTERACT), Sep 2011, Lisbon, Portugal. pp.348-355, ⟨10.1007/978-3-642-23774-4_28⟩. ⟨hal-01590550⟩