Mapping individual differences in cortical architecture using multi-view representation learning

============

This is an implementation of the multi-view representation learning and trace regression models for mapping individual differences using multimodal fMRI data, as described in our paper:

A. Sellami et al., Mapping individual differences in cortical architecture using multi-view representation learning, IJCNN (2020).

We propose a multimodal deep autoencoder (MDAE) framework that combines activation- and connectivity-based fMRI protocols to identify markers of individual differences. MDAEs are trainable neural network models for unsupervised learning and dimensionality reduction.
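For orientation only, the sketch below illustrates the general idea of a two-view autoencoder with a shared latent code. It is a minimal, hypothetical example (PyTorch, arbitrary layer sizes, views fused by averaging their encodings), not the exact architecture, loss, or training procedure used in the paper or in this repository.

```python
# Minimal two-view (multimodal) autoencoder sketch -- illustrative only.
import torch
import torch.nn as nn

class MultiViewAutoencoder(nn.Module):
    """Encodes two fMRI-derived views (e.g. activation and connectivity
    features) into a shared low-dimensional code and reconstructs both
    views from that code."""

    def __init__(self, dim_activation, dim_connectivity, dim_latent=20):
        super().__init__()
        self.enc_act = nn.Sequential(nn.Linear(dim_activation, dim_latent), nn.ReLU())
        self.enc_con = nn.Sequential(nn.Linear(dim_connectivity, dim_latent), nn.ReLU())
        self.dec_act = nn.Linear(dim_latent, dim_activation)
        self.dec_con = nn.Linear(dim_latent, dim_connectivity)

    def forward(self, x_act, x_con):
        # Shared code: simple average of the two view-specific encodings.
        z = 0.5 * (self.enc_act(x_act) + self.enc_con(x_con))
        return self.dec_act(z), self.dec_con(z), z


# Toy training loop on random data (dimensions are placeholders).
x_act = torch.randn(64, 100)   # activation features per subject/region
x_con = torch.randn(64, 250)   # connectivity features per subject/region
model = MultiViewAutoencoder(100, 250)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    rec_act, rec_con, _ = model(x_act, x_con)
    loss = nn.functional.mse_loss(rec_act, x_act) + nn.functional.mse_loss(rec_con, x_con)
    optim.zero_grad()
    loss.backward()
    optim.step()
```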

Collaborations

This research is carried out jointly with Qarma (Machine Learning team, LIS laboratory) and the BANCO (Neural Bases of Communication) team at the Institut de Neurosciences de la Timone (INT). The project is funded by the Institute of Language, Communication and the Brain (ILCB).
