Multivariate Machine Learning Methods for Fusing Multimodal Functional Neuroimaging Data

Sven Dähne, Felix Bießmann, Wojciech Samek, Stefan Haufe, Dominique Goltz, Christopher Gundlach, Arno Villringer, Siamac Fazli, Klaus-Robert Müller

Research output: Contribution to journal › Article

40 Citations (Scopus)

Abstract

Multimodal data are ubiquitous in engineering, communications, robotics, computer vision, and, more generally, in industry and the sciences. All of these disciplines have developed their respective sets of analytic tools to fuse the information that is available across the measured modalities. In this paper, we provide a review of classical as well as recent machine learning methods (specifically factor models) for fusing information from functional neuroimaging techniques such as LFP, EEG, MEG, fNIRS, and fMRI. Early and late fusion scenarios are distinguished, and appropriate factor models for the respective scenarios are presented along with example applications from selected multimodal neuroimaging studies. Further emphasis is given to the interpretability of the resulting model parameters, in particular by highlighting how factor models relate to the physical models needed for source localization. The methods we discuss allow for the extraction of information from neural data, which ultimately contributes to 1) a better neuroscientific understanding; 2) enhanced diagnostic performance; and 3) the discovery of neural signals of interest that correlate maximally with a given cognitive paradigm. While our focus is clearly on the multimodal functional neuroimaging challenge, the machine learning techniques discussed are applicable to data fusion in general and may thus be informative to the generally interested reader.
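As a concrete illustration of the factor-model view of multimodal fusion, the minimal sketch below uses canonical correlation analysis (CCA), a classical multivariate technique in the family of factor models discussed, to extract a pair of maximally correlated components from two simulated modalities. The simulated data, the feature dimensions, and the use of scikit-learn's CCA are illustrative assumptions, not the paper's implementation.

# Minimal, illustrative sketch (assumptions: simulated data, scikit-learn CCA);
# CCA as an example of a factor model that fuses two modalities by finding
# maximally correlated component pairs.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Simulate 200 samples in which a shared latent source drives both "modalities".
n_samples = 200
latent = rng.standard_normal(n_samples)
X = np.outer(latent, rng.standard_normal(32)) + rng.standard_normal((n_samples, 32))  # e.g., EEG-like features
Y = np.outer(latent, rng.standard_normal(16)) + rng.standard_normal((n_samples, 16))  # e.g., fNIRS-like features

# CCA learns one linear projection per modality such that the projected
# components are maximally correlated across modalities.
cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)

# Correlation of the first canonical component pair.
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"canonical correlation of first component pair: {r:.2f}")

The design choice here, learning one linear projection per modality and coupling the projections through a correlation objective, is the common building block of the multivariate factor-model family that the paper reviews.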

Original language: English
Article number: 7182735
Pages (from-to): 1507-1530
Number of pages: 24
Journal: Proceedings of the IEEE
Volume: 103
Issue number: 9
ISSN: 0018-9219
DOI: 10.1109/JPROC.2015.2425807
Publication status: Published - Sep 1, 2015
Externally published: Yes


Keywords

  • data fusion
  • EEG
  • fMRI
  • fNIRS
  • Machine learning
  • MEG
  • multimodal neuroimaging
  • review

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Dähne, S., Bießmann, F., Samek, W., Haufe, S., Goltz, D., Gundlach, C., ... Müller, K.-R. (2015). Multivariate Machine Learning Methods for Fusing Multimodal Functional Neuroimaging Data. Proceedings of the IEEE, 103(9), 1507-1530. [7182735]. https://doi.org/10.1109/JPROC.2015.2425807
