
Code for processing and managing data for EEG-based emotion recognition in individuals with and without Autism. EEG and other clinical data were collected at the Stony Brook Social Competence and Treatment Lab; for data request evaluation, please contact Professor Matthew D. Lerner at [email protected]


Deep Learning Emotion decoding using EEG data from Autism individuals

This repository includes the Python and MATLAB code used for processing EEG 2D images with a customized Convolutional Neural Network (CNN) to decode emotional visual stimuli in individuals with and without Autism Spectrum Disorder (ASD).

If you would like to use this repository to replicate our experiments with this data, or to use your own data, please cite the following paper. More details about this code and implementation are described there as well:

Mayor Torres, J.M.†, Clarkson, T.†, Hauschild, K.M., Luhmann, C.C., Lerner, M.D., & Riccardi, G. (2021). Facial emotions are accurately encoded in the brains of those with autism: A deep learning approach. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.

Requirements:

  • TensorFlow >= 1.20
  • Keras >= 2.2
  • innvestigate >= 1.0.9
  • scikit-learn (sklearn)
  • subprocess
  • numpy
  • PRTools http://prtools.tudelft.nl/
  • csv
  • MATLAB > R2018b

For the Python code we provide:

1. A baseline code to evaluate Leave-One-Trial-Out (LOTO) cross-validation from two .csv files: one containing all the training trials with their corresponding labels, and the other containing the test features of the single trial you want to evaluate. The train and test data files should share an identifier so the for loop used for the cross-validation can pair them (a minimal sketch of such a loop is shown below). The code to run the baseline classifier is located in the folder classifier_EEG_call.
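For illustration, here is a minimal sketch of such a LOTO loop. The file naming (train_trial<id>.csv / test_trial<id>.csv) and the logistic-regression stand-in for the CNN are assumptions made for the sketch, not the repository's actual conventions:

   # Minimal LOTO cross-validation sketch. The train_trial<id>.csv /
   # test_trial<id>.csv naming is hypothetical, and a logistic regression
   # stands in for the paper's CNN.
   import glob
   import os
   import numpy as np
   from sklearn.linear_model import LogisticRegression

   def loto_accuracy(data_path):
       correct = []
       for test_file in sorted(glob.glob(os.path.join(data_path, "test_trial*.csv"))):
           trial_id = os.path.basename(test_file)[len("test_trial"):-len(".csv")]
           train = np.loadtxt(os.path.join(data_path, "train_trial%s.csv" % trial_id),
                              delimiter=",")
           test = np.loadtxt(test_file, delimiter=",", ndmin=2)
           X_train, y_train = train[:, :-1], train[:, -1]  # last column = label
           X_test, y_test = test[:, :-1], test[:, -1]
           clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
           correct.append(clf.predict(X_test)[0] == y_test[0])
       return float(np.mean(correct))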

[Figure: Pipeline for EEG Emotion Decoding]

To run the classifier pipeline, simply download the .py files in the folder classifier_EEG_call and execute the following command from your bash prompt:

   python ./classifier_EEG_call/LOTO_lauch_emotions_test.py "data_path_file_including_train_test_files"

Please make sure your .csv files contain a flattened time-points x channels EEG image after you remove artifacts and noise from the signal, preferably using the ADJUST EEGLAB pipeline (https://sites.google.com/a/unitn.it/marcobuiatti/home/software/adjust). A sketch of this flattening step follows.
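As a sketch of that flattening step, assuming each cleaned epoch is a NumPy array of shape (time_points, channels) and the class label is appended as the last column (an illustration, not the repository's preprocessing code):

   # Sketch: flatten one artifact-free EEG epoch (time_points x channels) into
   # a single CSV row with the class label as the last column. Shapes and label
   # handling are assumptions for illustration.
   import numpy as np

   def epoch_to_csv_row(epoch, label, out_file):
       # epoch: ndarray of shape (time_points, channels), already cleaned
       row = np.concatenate([epoch.ravel(), [label]])  # row-major flattening
       with open(out_file, "a") as f:
           np.savetxt(f, row[np.newaxis, :], delimiter=",", fmt="%.6f")

   # Example: a 500-sample, 32-channel epoch labeled as emotion class 2
   epoch_to_csv_row(np.random.randn(500, 32), 2, "train_trials.csv")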

The final results are produced as a .txt file in the output folder of your choice. Some metrics obtained from a sample of 88 ADOS-2-diagnosed participants (48 controls and 40 ASD) are the following:

Metrics/Groups          FER                             CNN
                Acc    Pre    Re     F1         Acc    Pre    Re     F1
TD              0.813  0.808  0.802  0.807      0.860  0.864  0.860  0.862
ASD*            0.776  0.774  0.768  0.771      0.934  0.935  0.933  0.934

Face Emotion Recognition (FER) task performance denotes the human performance obtained when labeling the same stimuli that were presented to record the EEG activity.

2. A code for using the iNNvestigate package (https://github.com/albermax/innvestigate) to compute saliency maps and unify them across the LOTO cross-validation mentioned in the first item. The code is located in the folder iNNvestigate_evaluation.

Before averaging across all the methods and their resulting relevance maps, you must first run the equivalent Keras-backend code to calculate the relevance maps themselves. To calculate the XAI relevance maps, please check the current version of iNNvestigate and the updated paths of its utils directory. Once everything is in order, run the following command in your terminal (a minimal stand-alone sketch of the underlying analyzer call follows the command):

   python ./iNNvestigate_evaluation/CNN_innvestigate_calc_feature_rel_maps.py "data_path_file_including_train_test_files"
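For reference, the analyzer call this step relies on looks roughly as follows in iNNvestigate 1.x; the toy two-layer model here is only a stand-in for the repository's CNN:

   # Stand-alone sketch of one relevance-map computation with iNNvestigate 1.x.
   # The toy dense model is a placeholder for the actual CNN.
   import numpy as np
   import innvestigate
   import innvestigate.utils as iutils
   from keras.models import Sequential
   from keras.layers import Dense

   model = Sequential([Dense(16, activation="relu", input_shape=(64,)),
                       Dense(2, activation="softmax")])
   model_wo_sm = iutils.model_wo_softmax(model)           # analyzers expect no softmax
   analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)
   relevance = analyzer.analyze(np.random.randn(1, 64))   # one map per input row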

To subsequently run the iNNvestigate evaluation, simply download the .py files in the folder iNNvestigate_evaluation and execute the following command from your bash prompt:

   python ./iNNvestigate_evaluation/LOTO_lauch_emotions_test_innvestigate.py "data_path_file_including_train_test_files" num_method

The value num_method is defined by the order in which the iNNvestigate package processes the saliency-map methods. For our specific case the index correspondence is:

'Original Image' -> 0, 'Gradient' -> 1, 'SmoothGrad' -> 2, 'DeconvNet' -> 3, 'GuidedBackprop' -> 4, 'PatternNet' -> 5, 'PatternAttribution' -> 6, 'DeepTaylor' -> 7, 'Input * Gradient' -> 8, 'Integrated Gradients' -> 9, 'LRP-epsilon' -> 10, 'LRP-Z' -> 11, 'LRP-APresetflat' -> 12, 'LRP-BPresetflat' -> 13.
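If you need this mapping programmatically, a hypothetical lookup table using the iNNvestigate 1.x analyzer identifiers would look like this (the script's internal names may differ):

   # Hypothetical num_method -> iNNvestigate 1.x analyzer-name table mirroring
   # the ordering above; the repository script may name these differently.
   NUM_METHOD_TO_ANALYZER = {
       0: "input",                          # 'Original Image'
       1: "gradient",
       2: "smoothgrad",
       3: "deconvnet",
       4: "guided_backprop",
       5: "pattern.net",                    # PatternNet
       6: "pattern.attribution",
       7: "deep_taylor",
       8: "input_t_gradient",               # Input * Gradient
       9: "integrated_gradients",
       10: "lrp.epsilon",
       11: "lrp.z",
       12: "lrp.sequential_preset_a_flat",  # LRP-A preset (flat)
       13: "lrp.sequential_preset_b_flat",  # LRP-B preset (flat)
   }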

An example of the saliency maps obtained with the LRP-B preset is shown below.

Significant differences between the relevance maps of the Controls and ASD groups are observed at 750-1250 ms relative to stimulus onset.

[Figures: example LRP-B preset saliency maps for the Controls and ASD groups]

For the MATLAB code, we provide the folder Reading_CNN_performances for reading the resulting output performance files of the CNN baseline classifier; the same command call works for the iNNvestigate methods, since the output files follow the same syntax.

To run a performance check, first download the files in the Reading_CNN_performances folder, place the resulting .csv files in a folder of your choice, and run the following command at your MATLAB prompt:

   read_perf_convnets_subjects('suffix_file','performance_data_path')
