Abstract
Most semi-supervised learning methods extend existing supervised or unsupervised techniques by incorporating additional information from unlabeled or labeled data. Unlabeled instances help in learning statistical models that describe the global properties of the data, whereas labeled instances make the learned knowledge more human-interpretable. In this paper we present a novel way of extending conventional non-negative matrix factorization (NMF) and probabilistic latent semantic analysis (pLSA) to semi-supervised versions by incorporating label information for learning semantics. The proposed algorithm consists of two steps: first, acquiring prior bases representing some classes from labeled data; second, utilizing them to guide the learning of final bases that are semantically interpretable.
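The abstract does not give the paper's exact update rules, but the two-step idea can be sketched with standard multiplicative-update NMF: prior bases are first learned from labeled data, then held fixed as part of the basis matrix while the remaining bases are learned from the full data. Fixing the prior columns is a simplifying assumption standing in for the paper's guidance mechanism, and all function names and data here are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, W_fixed=None, seed=0):
    """Basic multiplicative-update NMF minimizing ||V - W H||_F.

    If W_fixed is given, its columns occupy the first columns of W and
    are kept fixed as prior bases; only the remaining columns are updated.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    n_fixed = 0 if W_fixed is None else W_fixed.shape[1]
    W = rng.random((m, k)) + 1e-4
    if n_fixed:
        W[:, :n_fixed] = W_fixed
    H = rng.random((k, n)) + 1e-4
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates (small epsilon avoids division by zero)
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W_new = W * (V @ H.T) / (W @ H @ H.T + 1e-9)
        if n_fixed:
            W_new[:, :n_fixed] = W_fixed  # keep the prior bases untouched
        W = W_new
    return W, H

# Step 1: learn prior bases from (synthetic) labeled data, one basis per class.
V_labeled = np.random.default_rng(1).random((20, 30))
W_prior, _ = nmf_multiplicative(V_labeled, k=2)

# Step 2: factorize the full (mostly unlabeled) data, guided by the priors.
V_all = np.random.default_rng(2).random((20, 100))
W, H = nmf_multiplicative(V_all, k=4, W_fixed=W_prior)
```

The extra, non-fixed columns let the model capture structure present only in the unlabeled data, while the fixed columns anchor part of the factorization to the known classes.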
Original language | English
---|---
Pages (from-to) | 366-374
Number of pages | 9
Journal | Journal of Advanced Computational Intelligence and Intelligent Informatics
Volume | 18
Issue number | 3
Publication status | Published - May 2014
Keywords
- Non-negative matrix factorization
- Probabilistic latent semantic analysis
- Semi-supervised learning
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Vision and Pattern Recognition
- Artificial Intelligence