Semantic deep learning and adaptive clustering for handling multimodal multimedia information retrieval

Saeid Sattari, Adnan Yazici

Research output: Contribution to journal › Article › peer-review

Abstract

Multimedia data encompasses multiple modalities, including audio, visual, and text, necessitating robust retrieval methods capable of harnessing these modalities to extract and retrieve semantic information from multimedia sources. This paper presents a highly scalable and versatile end-to-end framework for multimodal multimedia information retrieval. The core strength of this system lies in its capacity to learn semantic contexts within individual modalities and across different modalities, achieved through deep neural models. These models are trained using combinations of queries and relevant shots obtained from query logs. A distinguishing feature of this framework is its ability to create shot templates representing videos that have not been encountered previously. To enhance retrieval performance, the system employs clustering techniques to retrieve shots similar to these templates. To address the inherent uncertainty in multimodal concepts, an improved variant of fuzzy clustering is applied. Additionally, a fusion method incorporating an ordered weighted averaging (OWA) operator is introduced; it employs various measures to aggregate the ranked lists produced by multiple retrieval systems. The proposed approach leverages parallel processing and transfer learning to extract features from three distinct modalities, ensuring the adaptability and scalability of the framework. To assess its effectiveness and efficiency, the system is rigorously evaluated through experiments on six widely recognized multimodal datasets. Our approach outperforms previous studies in the literature on four of these datasets, achieving performance improvements ranging from 1.5% to 10.1% over the best results reported in those studies. The experimental findings, substantiated by statistical tests, establish the effectiveness of the proposed approach in multimodal multimedia information retrieval.
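To make the fusion step concrete, the following is a minimal sketch of OWA-based aggregation of ranked lists. It is an illustration of the general OWA operator only, not the paper's specific method: the item identifiers, scores, and weight vector are hypothetical, and the paper's additional measures for weight selection are not reproduced here.

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: sort the scores in descending order,
    then take the weighted sum, pairing the largest score with the
    first weight. Weights are assumed to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

def fuse_ranked_lists(system_scores, weights):
    """Fuse per-item scores from several retrieval systems into one
    ranked list by applying OWA to each item's score vector.

    system_scores: {item_id: [score from system 1, system 2, ...]}
    Returns item ids ordered from most to least relevant."""
    fused = {item: owa(scores, weights) for item, scores in system_scores.items()}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical relevance scores for three shots from three retrieval systems.
scores = {
    "shot_a": [0.9, 0.4, 0.7],
    "shot_b": [0.6, 0.8, 0.5],
    "shot_c": [0.2, 0.3, 0.9],
}
# A weight vector that emphasises the systems scoring an item highest.
ranking = fuse_ranked_lists(scores, weights=[0.5, 0.3, 0.2])
# → ["shot_a", "shot_b", "shot_c"]
```

Because OWA reorders each item's scores before weighting, the weight vector controls the fusion's attitude: weights concentrated at the front reward items that any system scores highly, while weights concentrated at the back demand agreement across all systems.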

Original language: English
Journal: Multimedia Tools and Applications
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Adaptive fuzzy clustering
  • Deep semantic learning
  • Information fusion
  • Multimodal multimedia retrieval
  • Ranked lists fusion

ASJC Scopus subject areas

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications
