A visual-semantic approach for building content-based recommender systems

Mounir M. Bendouch, Flavius Frasincar, Tarmo Robal*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

13 Citations (Scopus)
51 Downloads (Pure)

Abstract

Recommending items based on multimodal content has received limited attention, yet it offers a potential solution to the data bottleneck problem. Content-based, semantics-driven recommender systems are often applied in the small-scale news recommendation domain; they build on the TF-IDF measure while also capturing domain semantics through semantic lexicons or ontologies. In this work, we explore the application of content-based, semantics-driven recommender systems to large-scale recommendation, using both textual and visual information to recommend items with multimodal content. We propose methods to extract semantic features from various item descriptions, including digital images. In particular, we use computer vision to extract visual-semantic features from images and combine them with features extracted from textual information for movie recommendation. The visual-semantic approach is scaled up by pre-computing cosine similarities and learning the model with gradient-based optimization. Results on a large-scale MovieLens dataset of user ratings demonstrate that semantics-driven recommenders can be extended to visual-semantic recommenders suitable for domains more complex than news recommendation, and that these outperform TF-IDF-based recommenders on ROC, PR, F1, and Kappa metrics.
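The TF-IDF baseline and the cosine similarity mentioned in the abstract can be illustrated with a minimal sketch. This is a generic illustration, not the paper's implementation: the toy item descriptions, function names, and the plain `tf * log(N/df)` weighting scheme are assumptions for demonstration only.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    # Term frequency per document (illustrative whitespace tokenization)
    tfs = [Counter(doc.lower().split()) for doc in docs]
    n = len(docs)
    # Document frequency: number of documents containing each term
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())
    # TF-IDF weight: tf * log(N / df)
    return [{t: tf[t] * math.log(n / df[t]) for t in tf} for tf in tfs]

def cosine(u, v):
    # Cosine similarity between two sparse vectors stored as dicts
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical item descriptions, not taken from the paper's dataset
docs = [
    "space adventure with aliens",
    "space battle between aliens",
    "romantic comedy in paris",
]
vecs = tf_idf_vectors(docs)
# The two space movies are more similar to each other than to the comedy
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

In a content-based recommender of this kind, such pairwise similarities between item profiles would typically be pre-computed, which is the kind of scaling step the abstract refers to.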

Original language: English
Article number: 102243
Journal: Information Systems
Volume: 117
DOIs
Publication status: Published - Jul 2023

Bibliographical note

Publisher Copyright:
© 2023 Elsevier Ltd

