Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks

Murat Koyuncu, Adnan Yazici, Muhsin Civelek, Ahmet Cosar, Mustafa Sert

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing audio-visual multimedia and scalar data collected by the sensor nodes in a WMSN for the purpose of energy-efficient and accurate object detection and classification. To this end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted in the network. Performance test results of the implemented prototype system show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps to extend the network lifetime of WMSNs.
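The abstract describes a wake-on-trigger, decision-level pipeline: always-active scalar sensors wake the sleeping camera and microphone, each modality is classified on the node, the two results are fused, and only a short text message is forwarded to the sink. The Python sketch below illustrates that flow under stated assumptions; the class labels, the confidence-combination rule, and all function and parameter names are illustrative and are not taken from the paper.

from dataclasses import dataclass

@dataclass
class Classification:
    label: str          # e.g. "human", "vehicle", "animal" (assumed labels)
    confidence: float   # classifier confidence in [0, 1]

def scalar_trigger(reading: float, threshold: float = 0.5) -> bool:
    # Always-active scalar sensor (e.g. a PIR detector) decides whether
    # to wake the sleeping multimedia sensors.
    return reading > threshold

def fuse(video: Classification, audio: Classification) -> Classification:
    # Decision-level fusion (an assumed rule, not the authors' method):
    # if both modalities agree, boost the confidence; otherwise keep the
    # more confident modality's decision.
    if video.label == audio.label:
        combined = 1.0 - (1.0 - video.confidence) * (1.0 - audio.confidence)
        return Classification(video.label, combined)
    return video if video.confidence >= audio.confidence else audio

def handle_event(reading, classify_video, classify_audio, send_to_sink):
    # The multimedia sensors stay in sleep mode until the scalar trigger
    # fires; only the fused result is forwarded to the sink as text,
    # never the raw audio or video streams.
    if not scalar_trigger(reading):
        return
    result = fuse(classify_video(), classify_audio())
    send_to_sink(f"{result.label},{result.confidence:.2f}")

# Example with stubbed classifiers standing in for the real on-node code.
handle_event(
    reading=0.9,
    classify_video=lambda: Classification("human", 0.70),
    classify_audio=lambda: Classification("human", 0.60),
    send_to_sink=print,   # prints "human,0.88"
)

Transmitting only the fused label and confidence, rather than the captured media, is what keeps the per-event network payload small in the scheme the abstract describes.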

Original language: English
Article number: 8565958
Pages (from-to): 1839-1849
Number of pages: 11
Journal: IEEE Sensors Journal
Volume: 19
Issue number: 5
DOIs: 10.1109/JSEN.2018.2885281
Publication status: Published - Mar 1, 2019

Fingerprint

  • Multisensor fusion
  • Object recognition
  • Data fusion
  • Multimedia
  • Sensor nodes
  • Sensor networks
  • Sensors
  • Processing
  • Energy
  • Scalars
  • Audio data
  • Sleep
  • Performance tests
  • Surveillance
  • Sinks
  • Format
  • Prototypes
  • Life (durability)

Keywords

  • object detection
  • visual and auditory data fusion
  • Wireless multimedia sensor
  • WMSN

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering

Cite this

Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks. / Koyuncu, Murat; Yazici, Adnan; Civelek, Muhsin; Cosar, Ahmet; Sert, Mustafa.

In: IEEE Sensors Journal, Vol. 19, No. 5, 8565958, 01.03.2019, p. 1839-1849.

Research output: Contribution to journal › Article

Koyuncu, Murat ; Yazici, Adnan ; Civelek, Muhsin ; Cosar, Ahmet ; Sert, Mustafa. / Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks. In: IEEE Sensors Journal. 2019 ; Vol. 19, No. 5. pp. 1839-1849.
@article{d628fef478fe42f2bed80025608f5335,
title = "Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks",
abstract = "Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing audio-visual multimedia and scalar data collected by the sensor nodes in a WMSN for the purpose of energy-efficient and accurate object detection and classification. To this end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted in the network. Performance test results of the implemented prototype system show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps to extend the network lifetime of WMSNs.",
keywords = "object detection, visual and auditory data fusion, Wireless multimedia sensor, WMSN",
author = "Murat Koyuncu and Adnan Yazici and Muhsin Civelek and Ahmet Cosar and Mustafa Sert",
year = "2019",
month = "3",
day = "1",
doi = "10.1109/JSEN.2018.2885281",
language = "English",
volume = "19",
pages = "1839--1849",
journal = "IEEE Sensors Journal",
issn = "1530-437X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "5",

}

TY - JOUR

T1 - Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks

AU - Koyuncu, Murat

AU - Yazici, Adnan

AU - Civelek, Muhsin

AU - Cosar, Ahmet

AU - Sert, Mustafa

PY - 2019/3/1

Y1 - 2019/3/1

N2 - Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing audio-visual multimedia and scalar data collected by the sensor nodes in a WMSN for the purpose of energy-efficient and accurate object detection and classification. To this end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted in the network. Performance test results of the implemented prototype system show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps to extend the network lifetime of WMSNs.

AB - Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing audio-visual multimedia and scalar data collected by the sensor nodes in a WMSN for the purpose of energy-efficient and accurate object detection and classification. To this end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted in the network. Performance test results of the implemented prototype system show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps to extend the network lifetime of WMSNs.

KW - object detection

KW - visual and auditory data fusion

KW - Wireless multimedia sensor

KW - WMSN

UR - http://www.scopus.com/inward/record.url?scp=85058082195&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85058082195&partnerID=8YFLogxK

U2 - 10.1109/JSEN.2018.2885281

DO - 10.1109/JSEN.2018.2885281

M3 - Article

VL - 19

SP - 1839

EP - 1849

JO - IEEE Sensors Journal

JF - IEEE Sensors Journal

SN - 1530-437X

IS - 5

M1 - 8565958

ER -