Events

12/01/2021 14:00

Sciences & Society

PhD defense by videoconference: Corentin LONJARRET

Sequential recommendation and explanations

PhD candidate: Corentin LONJARRET

INSA laboratory: LIRIS

Doctoral school: ED512: Informatique et Mathématiques de Lyon

Recommender systems have received a lot of attention over the past decades, with many proposed models taking advantage of the most advanced Deep Learning and Machine Learning techniques. With the automated collection of user actions such as purchasing items, watching movies, and clicking on hyperlinks, the data available to recommender systems are becoming more and more abundant. These data, called implicit feedback, preserve the sequential order of actions. It is in this context that sequence-aware recommender systems have emerged. Their goal is to combine user preferences (long-term user profiles) and sequential dynamics (short-term tendencies) in order to predict the next action(s) of a user.
In this thesis, we investigate sequential recommendation, which aims to predict the user's next item/action from implicit feedback. Our main contribution is REBUS, a new metric embedding model in which only items are projected, integrating and unifying user preferences and sequential dynamics. To capture sequential dynamics, REBUS uses frequent sequences to provide personalized-order Markov chains. We have carried out extensive experiments demonstrating that our method outperforms state-of-the-art models, especially on sparse datasets. Moreover, we share our experience of implementing and integrating REBUS in myCADservices, a collaborative platform of the French company Visiativ.
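The actual REBUS model is learned from data and cannot be reproduced here; the toy sketch below (all names hypothetical, simple averaging standing in for the learned combination) only illustrates the metric-embedding idea the paragraph describes: only items get embedding vectors, a query point is built from the user's history by mixing a long-term signal (the whole history) with a short-term one (the most recent items), and candidate next items are ranked by their distance to that query.

```python
import math
import random

random.seed(0)
n_items, dim = 50, 8
# Only items are embedded -- there is no separate user embedding table
item_emb = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_items)]

def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def score(history, candidate, k=2, alpha=0.5):
    """Score a candidate next item by (negative) metric distance.

    long_term : mean embedding of the full history (user preference)
    short_term: mean embedding of the last k items (a crude stand-in
                for REBUS's frequent-sequence-based Markov order)
    Higher score = closer to the query point = better candidate.
    """
    long_term = mean([item_emb[i] for i in history])
    short_term = mean([item_emb[i] for i in history[-k:]])
    query = [alpha * l + (1 - alpha) * s for l, s in zip(long_term, short_term)]
    return -math.dist(query, item_emb[candidate])

history = [3, 17, 42, 8]          # item ids the user interacted with, in order
ranking = sorted(range(n_items), key=lambda i: score(history, i), reverse=True)
```

In the real model the embeddings are trained so that observed next items land close to the query point, and the short-term part adapts its order per user via matched frequent sequences rather than a fixed window of size k.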
We also propose methods to explain the recommendations provided by recommender systems, following the line of research on explainable AI that has recently received a lot of attention. Despite the ubiquity of recommender systems, only a few researchers have attempted to explain recommendations according to user input. However, being able to explain a recommendation would help increase the confidence a user can have in a recommender system. This is why we propose a method based on subgroup discovery that provides interpretable explanations of a recommendation for models that use implicit feedback.
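The thesis's explanation method itself is not detailed in this abstract; the fragment below is only a generic illustration of subgroup discovery (toy data, hypothetical attribute names) showing the underlying mechanics: enumerate simple attribute conditions and keep the one whose subgroup best over-represents a target property (here, having consumed the recommended item), scored with weighted relative accuracy.

```python
# Toy population: users described by attributes, plus a binary target
# ("did this user consume the recommended item?") -- all names are made up.
users = [
    {"genre_fan": "scifi", "active": True,  "target": True},
    {"genre_fan": "scifi", "active": False, "target": True},
    {"genre_fan": "drama", "active": True,  "target": False},
    {"genre_fan": "drama", "active": False, "target": False},
    {"genre_fan": "scifi", "active": True,  "target": True},
]

def wracc(subgroup, population):
    """Weighted relative accuracy: coverage * (precision - base rate)."""
    if not subgroup:
        return 0.0
    p_all = sum(u["target"] for u in population) / len(population)
    p_sub = sum(u["target"] for u in subgroup) / len(subgroup)
    return len(subgroup) / len(population) * (p_sub - p_all)

# Enumerate single-attribute conditions and keep the best-scoring one;
# real subgroup discovery searches conjunctions with pruning instead.
conds = ([("genre_fan", v) for v in ("scifi", "drama")]
         + [("active", b) for b in (True, False)])
best = max(conds, key=lambda c: wracc([u for u in users if u[c[0]] == c[1]],
                                      users))
# `best` is a human-readable condition describing who the
# recommendation applies to -- the raw material of an explanation.
```

On this toy data the winning condition is `genre_fan == "scifi"`, which covers three of the five users and all of the positives, so it maximizes WRAcc.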