Explaining Anomalies with Tensor Networks

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original ArXiv source.

Tensor networks, a class of variational quantum many-body wave functions, have attracted considerable research interest across many disciplines, including classical machine learning. Recently, Aizpurua et al. demonstrated explainable anomaly detection with matrix product states on a discrete-valued cyber-security task, using quantum-inspired methods to gain insight into the learned model and the detected anomalies. Here, we extend this framework to real-valued data domains. We furthermore introduce tree tensor networks for the task of explainable anomaly detection. We demonstrate these methods on three benchmark problems, showing adequate predictive performance compared to several baseline models and both tensor network architectures' ability to explain anomalous samples. We thereby extend the application of tensor networks to a broader class of potential problems and open a pathway for future extensions to more complex tensor network architectures.


💡 Research Summary

The paper extends the quantum-inspired tensor-network framework for unsupervised anomaly detection from discrete to real-valued data. Building on the recent work of Aizpurua et al., which used matrix product states (MPS) for a cyber-security dataset, the authors introduce two tensor-network architectures (MPS and tree tensor networks, TTN) and adapt them to handle continuous features. Real-valued inputs are first normalized to the interval
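The excerpt above breaks off before stating the normalization interval or the embedding used, so the following is only a hedged sketch of one common approach in the MPS literature: min-max scaling followed by the trigonometric local feature map $\phi(x) = [\cos(\pi x/2), \sin(\pi x/2)]$, which turns each real-valued feature into a normalized two-component vector that a tensor network can contract against. The function names and the choice of $[0, 1]$ as the target interval are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def minmax_normalize(X):
    """Scale each feature column of X into [0, 1].
    (Assumed target interval; the paper's exact choice is not given here.)"""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def feature_map(X):
    """Map each normalized scalar x to the unit 2-vector
    [cos(pi*x/2), sin(pi*x/2)] -- a common local embedding for
    quantum-inspired models on continuous data."""
    Xn = minmax_normalize(X)
    return np.stack([np.cos(np.pi * Xn / 2),
                     np.sin(np.pi * Xn / 2)], axis=-1)

# Example: 4 samples with 3 real-valued features each become a
# (4, 3, 2) array -- one 2-dimensional local vector per feature,
# ready to contract with the local tensors of an MPS or TTN.
X = np.random.default_rng(0).normal(size=(4, 3))
phi = feature_map(X)
```

Each embedded vector has unit norm by construction (cos² + sin² = 1), which keeps the model's evaluation of a sample interpretable as an (unnormalized) probability amplitude.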
