EEG-based Graph-guided Domain Adaptation for Robust Cross-Session Emotion Recognition


Accurate recognition of human emotional states is critical for effective human-machine interaction. Electroencephalography (EEG) offers a reliable source for emotion recognition due to its high temporal resolution and its direct reflection of neural activity. Nevertheless, variations across recording sessions present a major challenge for model generalization. To address this issue, we propose EGDA, a framework that reduces cross-session discrepancies by jointly aligning the global (marginal) and class-specific (conditional) distributions, while preserving the intrinsic structure of EEG data through graph regularization. Experimental results on the SEED-IV dataset demonstrate that EGDA achieves robust cross-session performance, obtaining accuracies of 81.22%, 80.15%, and 83.27% across three transfer tasks, and surpassing several baseline methods. Furthermore, the analysis highlights the Gamma frequency band as the most discriminative and identifies the central-parietal and prefrontal brain regions as critical for reliable emotion recognition.


💡 Research Summary

The paper addresses a critical bottleneck in EEG‑based emotion recognition: the severe distribution shift that occurs across recording sessions, which hampers the generalization of models trained on a single session. To mitigate this, the authors propose EGDA (EEG‑based Graph‑guided Domain Adaptation), a unified framework that simultaneously aligns global (marginal) and class‑specific (conditional) distributions while preserving the intrinsic geometric structure of the EEG data through graph regularization.

The method starts by learning a linear projection matrix A that maps both source (labeled) and target (unlabeled) EEG samples into a shared low‑dimensional subspace. Marginal alignment is enforced by minimizing the Maximum Mean Discrepancy (MMD) between the projected source and target means (Eq. 1‑3). To avoid the pitfall of aligning only the marginal distributions, EGDA also aligns conditional distributions for each emotion class. This is achieved by constructing class‑wise MMD matrices M_c based on pseudo‑labels for the target data and minimizing the class‑specific distance (Eq. 4‑6) in an EM‑like iterative scheme that repeatedly updates the pseudo‑labels.
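Both the marginal and the class-wise MMD terms can be written as coefficient matrices over the stacked source and target samples, so each distance reduces to a trace of the form tr(AᵀX M XᵀA). The sketch below follows the standard JDA-style construction of these matrices; the function names and the exact indexing convention are illustrative, not taken from the paper.

```python
import numpy as np

def marginal_mmd_matrix(n_s, n_t):
    # M_0: coefficient matrix of the marginal MMD over the stacked
    # [source; target] samples, so MMD^2 = tr(A^T X M_0 X^T A)
    e = np.vstack([np.full((n_s, 1), 1.0 / n_s),
                   np.full((n_t, 1), -1.0 / n_t)])
    return e @ e.T  # shape (n_s + n_t, n_s + n_t)

def conditional_mmd_matrices(y_s, y_t_pseudo, n_classes):
    # One M_c per emotion class, built from source labels and the
    # current target pseudo-labels (refined in the EM-like loop)
    n_s, n_t = len(y_s), len(y_t_pseudo)
    n = n_s + n_t
    Ms = []
    for c in range(n_classes):
        e = np.zeros((n, 1))
        src = np.where(y_s == c)[0]
        tgt = n_s + np.where(y_t_pseudo == c)[0]
        if len(src):
            e[src] = 1.0 / len(src)
        if len(tgt):
            e[tgt] = -1.0 / len(tgt)
        Ms.append(e @ e.T)
    return Ms
```

Each row/column of these matrices corresponds to one sample in the stacked data matrix X, so summing M_0 with the M_c matrices gives a single joint-alignment term for the optimization.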

Two regularization terms are added to stabilize the adaptation. First, a within‑class scatter matrix S_w computed from the source domain is minimized (Eq. 7‑9) to enforce compact clusters for each emotion class in the latent space. Second, a data‑driven similarity graph S is learned by solving a constrained quadratic problem (Eq. 10‑12) that balances Euclidean distances with a regularization parameter γ. The resulting graph Laplacian regularizer preserves local neighborhood relationships, preventing the pseudo‑label propagation from drifting.
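The two regularizers can be sketched as follows. The within-class scatter is the standard LDA-style quantity computed on the labeled source domain. For the graph, the paper learns S by solving a constrained quadratic problem (Eq. 10–12); the sketch below substitutes a simpler fixed heat-kernel k-nearest-neighbor graph as a stand-in, which yields the same kind of Laplacian regularizer.

```python
import numpy as np

def within_class_scatter(X_s, y_s):
    # S_w: sum of per-class scatter around each class mean (source only)
    d = X_s.shape[1]
    S_w = np.zeros((d, d))
    for c in np.unique(y_s):
        Xc = X_s[y_s == c]
        diff = Xc - Xc.mean(axis=0)
        S_w += diff.T @ diff
    return S_w

def knn_similarity_graph(X, k=5, gamma=1.0):
    # Heat-kernel weights on k nearest neighbors: a simple stand-in for
    # the learned graph of Eq. 10-12 (the paper solves a constrained QP)
    n = X.shape[0]
    D2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]   # skip self at position 0
        S[i, nbrs] = np.exp(-D2[i, nbrs] / gamma)
    S = 0.5 * (S + S.T)                     # symmetrize
    L = np.diag(S.sum(axis=1)) - S          # unnormalized graph Laplacian
    return S, L
```

Minimizing tr(AᵀX L XᵀA) then penalizes projections that separate samples connected in the graph, which is what keeps pseudo-label propagation anchored to the local manifold.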

Optimization proceeds by alternating updates: with a fixed A, the graph S and target pseudo‑labels are refined; with S fixed, A is re‑estimated by solving a generalized eigenvalue problem that incorporates marginal, conditional, within‑class, and graph terms. This alternating scheme converges after a few iterations, yielding a subspace where source and target distributions are closely matched, class separability is high, and local manifold structure is retained.
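The A-step of this alternation can be sketched as a single generalized eigenvalue solve. Assuming the combined objective tr(Aᵀ(X M Xᵀ + μ S_w + λ X L Xᵀ)A) under a centering constraint AᵀX H XᵀA = I (the weights μ, λ and the constraint form are assumptions of this sketch, not the paper's exact formulation):

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(X, M_total, S_w, L, d_sub, mu=0.1, lam=0.1, eps=1e-6):
    # X: (d, n) stacked source+target features; M_total: summed MMD matrices.
    # Minimize tr(A^T (X M X^T + mu*S_w + lam*X L X^T) A)
    # subject to A^T X H X^T A = I, via a generalized eigenproblem.
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    left = X @ M_total @ X.T + mu * S_w + lam * X @ L @ X.T
    right = X @ H @ X.T + eps * np.eye(X.shape[0])  # eps keeps it PD
    w, V = eigh(left, right)                     # ascending eigenvalues
    return V[:, :d_sub]                          # d_sub smallest directions
```

Because each step (graph update, pseudo-label update, eigen-solve) has a closed form, the loop is cheap per iteration, which is consistent with the reported convergence in a few iterations.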

Experiments are conducted on the SEED‑IV dataset (62 channels, 4 emotion classes, 3 sessions). Features are extracted for five conventional frequency bands (Delta to Gamma). EGDA achieves cross‑session accuracies of 81.22% (session 1→2), 80.15% (session 1→3), and 83.27% (session 2→3), outperforming strong baselines such as Transfer Component Analysis (TCA), Joint Distribution Adaptation (JDA), Visual Domain Adaptation (VDA), Coupled Projection Transfer Metric Learning (CPTML), and Joint Adaptation with Graph Propagation (JAGP) by 2–5 percentage points. Frequency‑band analysis reveals that the Gamma band is the most discriminative, and graph importance scores highlight central‑parietal and prefrontal regions as key contributors to emotion discrimination.
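Per-band feature extraction of this kind is commonly done with differential entropy (DE), which for a Gaussian band-filtered signal reduces to ½ ln(2πe·σ²). The sketch below illustrates that pipeline; the band edges, filter order, and sampling rate are assumptions, and the summary does not state which feature type the paper uses.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed band edges (Hz); actual SEED-IV preprocessing may differ
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy_features(eeg, fs=200):
    # eeg: (channels, samples). DE of a Gaussian signal = 0.5*ln(2*pi*e*var),
    # computed per channel after band-pass filtering each frequency band.
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=1)))
    return np.concatenate(feats)  # 62 channels x 5 bands = 310 dims
```

With 62 channels and 5 bands this yields the ~310-dimensional feature vectors mentioned in the dimensionality analysis below.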

A sensitivity analysis shows that the graph regularization weight γ must be balanced: values that are too small produce overly sparse graphs and unstable alignment, while values that are too large dilute the local structure. Dimensionality reduction experiments indicate that projecting to 30–50 dimensions (from the original ~310) yields the best trade‑off between computational efficiency and classification performance.

In summary, EGDA provides a computationally efficient yet powerful solution for cross‑session EEG emotion recognition. By jointly aligning marginal and conditional distributions and embedding a robust graph‑based manifold regularizer, it reduces session‑induced variability while preserving discriminative class structures. The framework is readily applicable to real‑time human‑machine interaction systems and opens avenues for extending domain adaptation to cross‑subject scenarios and online adaptive settings.

