Reproducibility of Experiments in Recommender Systems Evaluation - Artificial Intelligence Applications and Innovations (AIAI 2018)
Conference Paper, Year: 2018

Reproducibility of Experiments in Recommender Systems Evaluation

Nikolaos Polatidis
Stelios Kapetanakis
Elias Pimenidis
Konstantinos Kosmidis

Abstract

Recommender systems evaluation is usually based on predictive accuracy metrics, with better scores indicating recommendations of higher quality. However, comparing results is becoming increasingly difficult, since there are different recommendation frameworks and different settings in the design and implementation of the experiments. Furthermore, there may be minor differences in algorithm implementation among the frameworks. In this paper, we compare well-known recommendation algorithms using the same dataset, metrics and overall settings; the results reveal differences across frameworks even under exactly the same settings. Hence, we propose standards that should be followed as guidelines to ensure the replication of experiments and the reproducibility of the results.
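The differences the abstract points to typically stem from experimental details left unpinned: the dataset split, the random seed, the similarity measure and the neighbourhood handling. As a minimal sketch of the kind of fully specified experiment the paper argues for, the snippet below evaluates a user-based k-NN collaborative filtering algorithm with the Surprise library on its built-in MovieLens 100K dataset; the library, the dataset, k = 40 and the Pearson similarity are illustrative assumptions, not the paper's actual setup.

    # Reproducible evaluation sketch. Assumptions (not the paper's setup):
    # Surprise library, built-in MovieLens 100K, user-based k-NN with
    # Pearson similarity, k = 40, 80/20 split with a fixed seed.
    from surprise import Dataset, KNNBasic, accuracy
    from surprise.model_selection import train_test_split

    # Pin every free parameter that can differ across frameworks:
    # dataset, split ratio, random seed, algorithm hyper-parameters.
    data = Dataset.load_builtin("ml-100k")
    trainset, testset = train_test_split(data, test_size=0.2, random_state=42)

    algo = KNNBasic(k=40, sim_options={"name": "pearson", "user_based": True})
    algo.fit(trainset)
    predictions = algo.test(testset)

    # Predictive accuracy metrics; note that for MAE/RMSE lower is better.
    accuracy.mae(predictions)
    accuracy.rmse(predictions)

Running this twice should produce identical MAE and RMSE values; running an identically configured experiment in a second framework and still observing a gap isolates the genuine implementation differences the paper reports across frameworks.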
Main file
467708_1_En_34_Chapter.pdf (291.95 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01821035, version 1 (22-06-2018)

Licence

Attribution

Identifiers

  • HAL Id: hal-01821035
  • DOI: 10.1007/978-3-319-92007-8_34

Cite

Nikolaos Polatidis, Stelios Kapetanakis, Elias Pimenidis, Konstantinos Kosmidis. Reproducibility of Experiments in Recommender Systems Evaluation. 14th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2018, Rhodes, Greece. pp.401-409, ⟨10.1007/978-3-319-92007-8_34⟩. ⟨hal-01821035⟩