TY - GEN
T1 - Generating expressive summaries for speech and musical audio using self-similarity clues
AU - Sert, Mustafa
AU - Baykal, Buyurman
AU - Yazici, Adnan
PY - 2006
Y1 - 2006
AB - We present a novel algorithm for the structural analysis of audio to detect repetitive patterns, which are well suited to content-based audio information retrieval systems because such patterns convey valuable information about the content of the audio, such as a chorus or a concept. The Audio Spectrum Flatness (ASF) feature of the MPEG-7 standard, although less widely considered than other feature types, is utilized and evaluated as the underlying feature set. Expressive summaries are chosen as the longest patterns identified by the k-means clustering algorithm. The proposed approach is evaluated on a test bed of popular song and speech clips using the ASF feature. The well-known Mel-Frequency Cepstral Coefficients (MFCCs) are also considered in the experiments to evaluate the features. Experiments show that all repetitive patterns and their locations are obtained with accuracies of 93% and 78% for music and speech, respectively.
UR - http://www.scopus.com/inward/record.url?scp=34247632245&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34247632245&partnerID=8YFLogxK
U2 - 10.1109/ICME.2006.262675
DO - 10.1109/ICME.2006.262675
M3 - Conference contribution
AN - SCOPUS:34247632245
SN - 1424403677
SN - 9781424403677
T3 - 2006 IEEE International Conference on Multimedia and Expo, ICME 2006 - Proceedings
SP - 941
EP - 944
BT - 2006 IEEE International Conference on Multimedia and Expo, ICME 2006 - Proceedings
T2 - 2006 IEEE International Conference on Multimedia and Expo, ICME 2006
Y2 - 9 July 2006 through 12 July 2006
ER -