Performance prediction and evaluation in Recommender Systems: an Information Retrieval perspective
Supervised by Prof. Pablo Castells and Dr. Iván Cantador.
Submitted in October 2012. Public defense on November 30, 2012.
Personalised recommender systems aim to help users access and retrieve relevant information or items from large collections, by automatically finding and suggesting products or services of likely interest based on observed evidence of the users’ preferences. User preferences are difficult to guess for many reasons, and recommender systems therefore vary considerably in how successfully they estimate a user’s tastes and interests. In such a scenario, self-predicting the chances that a recommendation is accurate before actually submitting it to a user becomes an interesting capability from many perspectives. Performance prediction has been studied in the context of search engines in the Information Retrieval field, but there is little, if any, prior research on this problem in the recommendation domain.
This thesis investigates the definition and formalisation of performance prediction methods for recommender systems. Specifically, we study adaptations of search performance predictors from the Information Retrieval field, and propose new predictors based on theories and models from Information Theory and Social Graph Theory. We show how information-theoretical performance predictors can be instantiated on both rating and access log data, and how social-based predictors apply to social network structures.
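As a concrete illustration, one well-known family of search performance predictors that can be adapted in this way follows the query-clarity idea from Information Retrieval: a user model is compared against the background model via KL divergence, so that users whose interactions deviate strongly from the global popularity distribution score as "clearer". The sketch below is a minimal, assumed formulation (the function name, Jelinek-Mercer smoothing choice, and data shapes are illustrative, not the thesis's exact definitions):

```python
import math
from collections import Counter

def user_clarity(user_items, all_interactions, lam=0.9):
    """Clarity-style predictor: KL divergence between a (smoothed)
    user item model and the background item model."""
    background = Counter(all_interactions)
    total = sum(background.values())
    p_bg = {item: count / total for item, count in background.items()}

    user_counts = Counter(user_items)
    n = sum(user_counts.values())
    score = 0.0
    for item, p_b in p_bg.items():
        # Jelinek-Mercer smoothing: mix the user's maximum-likelihood
        # estimate with the background distribution
        p_u = lam * (user_counts.get(item, 0) / n) + (1 - lam) * p_b
        score += p_u * math.log2(p_u / p_b)
    return score
```

Under this sketch, a user concentrated on a few items diverges more from the background distribution, and thus scores higher, than a user whose interactions mirror global popularity.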
Recommendation performance prediction is a relevant problem in its own right, given its many potential applications. We therefore primarily evaluate the quality of the proposed solutions in terms of the correlation between the predicted and the observed performance on test data. This assessment requires a clear recommender evaluation methodology against which the predictions can be contrasted. Given that the evaluation of recommender systems remains, to a significant extent, an open area, the thesis addresses the evaluation methodology as part of the researched problem. We analyse how variations in the evaluation procedure may alter the apparent behaviour of performance predictors, and propose approaches to avoid misleading observations.
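The correlation-based assessment mentioned above can be sketched very simply: pair each user's predictor output with that user's observed performance (e.g. nDCG) and compute a correlation coefficient. The Pearson coefficient shown here is one common choice (rank correlations are another), and the per-user values are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-user values: predictor output vs. observed nDCG.
# A correlation close to 1 indicates a useful predictor.
predicted = [0.9, 0.4, 0.7, 0.2, 0.5]
observed = [0.80, 0.35, 0.60, 0.15, 0.50]
correlation = pearson(predicted, observed)
```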
In addition to the stand-alone assessment of the proposed predictors, we research the use of the predictive capability in the context of one of its common applications, namely the dynamic adjustment of hybrid methods combining several recommenders. We investigate approaches where the combination leans towards the algorithm that is predicted to perform best in each case, aiming to enhance the performance of the resulting hybrid configuration.
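The dynamic adjustment described above can be sketched as a per-user weighted blend of two recommenders' scores, with weights proportional to each recommender's predicted performance for that user. The function and its inputs are illustrative assumptions, not the thesis's concrete hybridisation scheme:

```python
def dynamic_hybrid(scores_a, scores_b, pred_a, pred_b):
    """Blend two recommenders' item scores for one user, weighting each
    recommender by its (positive) predicted performance for that user."""
    w_a = pred_a / (pred_a + pred_b)
    w_b = 1.0 - w_a
    items = set(scores_a) | set(scores_b)
    return {item: w_a * scores_a.get(item, 0.0) + w_b * scores_b.get(item, 0.0)
            for item in items}
```

When the predictor strongly favours one recommender for a given user, the hybrid ranking approaches that recommender's own ranking; for users where both predictions are similar, the blend stays close to an even mix.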
The thesis reports positive empirical evidence confirming both significant predictive power for the proposed methods across different experiments, and consistent performance improvements in dynamic hybrid recommenders employing the proposed predictors.
You may download the whole document, or each part separately:
- Part I Introduction and Context
- Part II Evaluating performance in recommender systems
- Chapter 3 Evaluation of recommender systems
- Chapter 4 Ranking-based evaluation of recommender systems: experimental designs and biases
- Part III Predicting performance in recommender systems
- Chapter 5 Performance prediction in Information Retrieval
- Chapter 6 Performance prediction in recommender systems
- Part IV Applications
- Chapter 7 Dynamic recommender ensembles
- Chapter 8 Neighbour selection and weighting in user-based collaborative filtering
- Part V Conclusions
- Part VI Appendices
Here you can find a PDF version of the slides.