About this Research Topic
Evaluation is essential to rigorous research in recommender systems (RS). It may range from assessing early ideas and approaches to studying elaborate systems in operation, and it may target a wide spectrum of aspects. Naturally, we do (and must) take various perspectives on the evaluation of RS. The term “perspective” may, for instance, refer to the different purposes of an RS, the various stakeholders it affects, or the potential risks that ought to be minimized. Further, different methodological approaches and experimental designs themselves represent distinct perspectives on evaluation. A perspective on RS evaluation may also be substantially shaped by the resources available.
The goal of this Research Topic is to capture the current state of RS evaluation and to gauge whether there is, or should be, a different target that RS evaluation ought to strive for.
Topics of interest include, but are not limited to, the following:
- Case studies of difficult, hard-to-evaluate scenarios
- Evaluations with contradictory results
- Showcasing (structural) problems in RS evaluation
- Integration of offline and online experiments
- Multi-stakeholder evaluation
- Divergence between evaluation goals and what an evaluation actually captures
- Nontrivial and unexpected experiences from practitioners
Keywords: recommender systems, evaluation, metrics, measures
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.