Explainable AI Using Inherently Interpretable Components for Wearable-based Health Monitoring

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

The use of wearables in medicine and wellness, enabled by AI-based models, offers tremendous potential for real-time monitoring and interpretable event detection. Explainable AI (XAI) is required to assess what models have learned and build trust in model outputs, for patients, healthcare professionals, model developers, and domain experts alike. Explaining AI decisions made on time-series data recorded by wearables is especially challenging due to the data’s complex nature and temporal dependencies. Too often, explainability using interpretable features leads to performance loss. We propose a novel XAI method that combines explanation spaces and concept-based explanations to explain AI predictions on time-series data. By using Inherently Interpretable Components (IICs), which encapsulate domain-specific, interpretable concepts within a custom explanation space, we preserve the performance of models trained on time series while achieving the interpretability of concept-based explanations based on extracted features. Furthermore, we define a domain-specific set of IICs for wearable-based health monitoring and demonstrate their usability in real applications, including state assessment and epileptic seizure detection.


💡 Research Summary

The paper tackles the pressing need for explainable artificial intelligence (XAI) in wearable‑based health monitoring, where deep neural networks often achieve high performance on raw time‑series data but remain opaque to clinicians, patients, and developers. Existing XAI approaches fall into two categories: saliency‑based methods that assign importance to individual timestamps or frequency bands, and concept‑based methods that rely on high‑level human‑understandable concepts. Saliency methods struggle with the temporal dependencies and distributed patterns typical of physiological signals, while concept‑based methods either require additional concept labels during training (e.g., Concept Bottleneck Models) or post‑hoc analysis that can be computationally complex and may not capture non‑linear interactions, often sacrificing predictive accuracy.

To bridge this gap, the authors introduce a model‑agnostic framework that leverages Inherently Interpretable Components (IICs) within a reversible explanation space. A time series $x$ is transformed by an invertible function $F$ into a set of $d$ components $C_x = \{c_{x,1}, \dots, c_{x,d}\}$. Each component corresponds to a domain‑specific, clinically meaningful metric derived from wearable modalities (e.g., heart‑rate variability, respiratory‑cardiac coupling, electrodermal activity, temperature trends). Because $F$ is invertible, the original signal can be perfectly reconstructed via $F^{-1}$.
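To make the invertibility requirement concrete, here is a minimal sketch of such a transform. It is not the authors' actual IIC set: the `trend`/`residual` split, the window size, and the function names are illustrative assumptions. It only demonstrates the structural property the paper relies on, namely that an additive decomposition $F$ admits an exact inverse $F^{-1}$ by summation.

```python
import numpy as np

def decompose(x, window=8):
    """Toy invertible decomposition F: split a signal into a smooth
    'trend' component and a 'residual' component (hypothetical IICs,
    stand-ins for clinically meaningful metrics)."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")  # moving average
    residual = x - trend                         # what the trend misses
    return {"trend": trend, "residual": residual}

def reconstruct(components):
    """F^{-1}: the decomposition is additive, so summing the
    components recovers the original signal exactly."""
    return components["trend"] + components["residual"]

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
C_x = decompose(x)
assert np.allclose(reconstruct(C_x), x)  # perfect reconstruction
```

A real IIC set would replace the moving-average split with domain-specific extractors (e.g., cardiac vs. respiratory bands), but the invertibility contract, decompose then reconstruct without loss, is the same.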

The core of the explanation method is a weight vector $\mathbf{w} \in \mathbb{R}^d$ that assigns an importance score to each of the $d$ components, quantifying how much each interpretable concept contributes to the model's prediction.
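The source is truncated before it specifies how $\mathbf{w}$ is computed, so the following is only one plausible sketch: an ablation-style estimate that zeroes out one component at a time and measures the change in the model's output on the reconstructed signal. The function names and the toy model are assumptions, not the paper's method.

```python
import numpy as np

def component_importance(predict, components, reconstruct):
    """Hypothetical ablation-style estimate of a weight vector w:
    zero out each component in turn and record how much the model's
    prediction on the reconstructed signal changes."""
    base = predict(reconstruct(components))
    w = {}
    for name in components:
        ablated = dict(components)
        ablated[name] = np.zeros_like(components[name])
        w[name] = abs(base - predict(reconstruct(ablated)))
    return w

# Toy setup: the "model" responds only to the signal mean,
# so all importance should land on the constant trend component.
predict = lambda x: float(np.mean(x))
reconstruct = lambda c: c["trend"] + c["residual"]
comps = {"trend": np.full(16, 2.0), "residual": np.zeros(16)}
w = component_importance(predict, comps, reconstruct)
# w["trend"] dominates; w["residual"] is zero
```

Because each ablated signal is mapped back through $F^{-1}$ before prediction, the model itself never changes; only the explanation space does, which is what lets the approach stay model-agnostic.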

