EEG-Based Emotion Recognition Beyond Standardized Labels

An investigation into the neural correlates of visual emotional processing and the critical role of personalized labels in Affective Computing.

By Daniele Lozzi, Enrico Mattei, Giuseppe Placidi, and Selina C. Wriessnegger

Abstract

This study investigates the electroencephalogram (EEG) correlates of emotional responses to visual stimuli from the NAPS-BE and SFIP datasets. We found that Deep Learning (DL) models trained with the original dataset labels performed poorly. However, when the same models were trained with personalized labels derived from each participant's own ratings, their accuracy in classifying emotional states improved dramatically. These findings underscore the highly subjective nature of emotional reactions and highlight the critical need for personalized labeling to advance EEG-based emotion recognition systems.

EEG Data Preprocessing Pipeline

  1. Signal acquisition
  2. Band-pass filtering (1-100 Hz)
  3. PREP pipeline
  4. ICA (extended Infomax)
  5. Artifact removal (ICLabel)
  6. Band-pass filtering (4-64 Hz)
  7. Epoching (-1 s to +3 s)
  8. Scaling
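
The same chain can be expressed in code. The following is a minimal, illustrative sketch using MNE-Python, pyprep, and mne-icalabel; the recording file, montage, 50 Hz mains frequency, event extraction, and the per-channel z-scoring reading of the "Scaling" step are assumptions for illustration, not details reported by the study.

```python
# Minimal sketch of the preprocessing chain (MNE-Python + pyprep + mne-icalabel).
# File name, montage, mains frequency, and scaling choice are illustrative assumptions.
import mne
import numpy as np
from pyprep.prep_pipeline import PrepPipeline
from mne.preprocessing import ICA
from mne_icalabel import label_components

raw = mne.io.read_raw_brainvision("sub-01.vhdr", preload=True)  # hypothetical recording
raw.filter(l_freq=1.0, h_freq=100.0)  # 1-100 Hz band-pass

# PREP pipeline: line-noise removal, robust referencing, bad-channel interpolation
montage = mne.channels.make_standard_montage("standard_1020")
prep_params = {
    "ref_chs": "eeg",
    "reref_chs": "eeg",
    "line_freqs": np.arange(50, raw.info["sfreq"] / 2, 50),  # assuming 50 Hz mains
}
prep = PrepPipeline(raw, prep_params, montage)
prep.fit()
raw = prep.raw

# ICA with extended Infomax; flag non-brain components with ICLabel and remove them
ica = ICA(method="infomax", fit_params=dict(extended=True), random_state=0)
ica.fit(raw)
component_labels = label_components(raw, ica, method="iclabel")["labels"]
ica.exclude = [idx for idx, lab in enumerate(component_labels)
               if lab not in ("brain", "other")]
ica.apply(raw)

# Restrict to the 4-64 Hz band of interest, then cut epochs from -1 s to +3 s
raw.filter(l_freq=4.0, h_freq=64.0)
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-1.0, tmax=3.0,
                    baseline=None, preload=True)

# Scaling: per-channel z-scoring of each epoch (one plausible reading of the step)
X = epochs.get_data()  # shape (n_epochs, n_channels, n_times)
X = (X - X.mean(axis=-1, keepdims=True)) / X.std(axis=-1, keepdims=True)
```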

Deep Learning Architectures

We employed a suite of state-of-the-art DL models designed for EEG signal processing to ensure a robust analysis; a minimal instantiation sketch follows the list.

  • EEGNetv4: A compact CNN for EEG-based BCIs, effective at capturing temporal and spatial features.
  • ContraNet: A hybrid network designed for classifying EEG and EMG signals, especially with limited data.
  • Conformer: A model combining convolution and transformers, tailored for motor imagery and emotion paradigms.
  • EEGViT: A Vision Transformer (ViT)-based architecture adapted to EEG data recorded during visual tasks.
  • EEG-Deformer: A dense convolutional transformer for detecting cognitive attention and mental workload.
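
As referenced above, the sketch below instantiates the first of these models, EEGNetv4, via its braindecode implementation and runs a forward pass on dummy data. The keyword names (n_chans, n_outputs, n_times, used in recent braindecode releases), the 32-channel / 4 s window shape, and the binary label space are assumptions for illustration.

```python
# Minimal sketch: forward pass through braindecode's EEGNetv4 on dummy epochs.
# Channel count, window length, and the binary label space are assumed for illustration.
import torch
from braindecode.models import EEGNetv4

n_chans, n_times = 32, 1024           # e.g. 32 channels, 4 s at 256 Hz (assumed)
model = EEGNetv4(
    n_chans=n_chans,                  # number of EEG channels
    n_outputs=2,                      # e.g. low vs. high valence (assumed labels)
    n_times=n_times,                  # samples per epoch
)

x = torch.randn(8, n_chans, n_times)  # batch of 8 epochs
logits = model(x)                     # -> tensor of shape (8, 2)
```

Swapping in another of the listed architectures amounts to replacing the constructor, provided the implementation accepts the same (batch, channels, time) input layout.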