Modeling Endogenous Logic: Causal Neuro-Symbolic Reasoning Model for Explainable Multi-Behavior Recommendation

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

Existing multi-behavior recommendation methods tend to prioritize performance at the expense of explainability, while current explainable methods suffer from limited generalizability due to their reliance on external information. Neuro-Symbolic integration offers a promising avenue for explainability by combining neural networks with symbolic logic-rule reasoning. Concurrently, we posit that user behavior chains inherently embody an endogenous logic suitable for explicit reasoning. However, these observed behaviors are plagued by confounders, causing models to learn spurious correlations. By incorporating causal inference into this Neuro-Symbolic framework, we propose a novel Causal Neuro-Symbolic Reasoning model for Explainable Multi-Behavior Recommendation (CNRE). CNRE operationalizes the endogenous logic by simulating a human-like decision-making process. Specifically, CNRE first employs hierarchical preference propagation to capture heterogeneous cross-behavior dependencies. It then models the endogenous logic rule implicit in the user's behavior chain based on preference strength, and adaptively dispatches to the corresponding neural-logic reasoning path (e.g., conjunction, disjunction). This process generates an explainable causal mediator that approximates an ideal state isolated from confounding effects. Extensive experiments on three large-scale datasets demonstrate CNRE's significant superiority over state-of-the-art baselines, offering multi-level explainability spanning model design, decision process, and recommendation results.


💡 Research Summary

The paper tackles a fundamental tension in multi‑behavior recommendation (MBR): recent models have pushed predictive performance to new heights—often by leveraging contrastive learning and complex neural architectures—while largely abandoning explainability. Existing explainable recommender systems typically rely on external side information such as textual reviews, item attributes, or knowledge‑graph triples. This reliance limits generalizability, incurs high acquisition costs, and often yields explanations that merely restate the external data without revealing the model’s internal reasoning.

To overcome these drawbacks, the authors propose a novel paradigm: the user’s own behavior chain (e.g., view → cart → buy) encodes an endogenous logic that directly reflects the intensity of user preference. A complete chain signals strong intent, a partial chain signals medium intent, and a single weak behavior signals low intent. However, observational interaction data are confounded: latent factors (e.g., marketing campaigns, item popularity) simultaneously affect both the observed behaviors and the target outcome, leading to spurious correlations.

The authors embed causal inference into a Neuro‑Symbolic (NeSy) framework, yielding the Causal Neuro‑Symbolic Reasoning model for Explainable Multi‑Behavior Recommendation (CNRE). The model operationalizes the front‑door adjustment (FDA) causal graph: confounders C affect both user preference U and the final prediction Y; U generates auxiliary behaviors A (weak), which in turn influence the target behavior T (strong); the model encodes (A, T) into a causal mediator M, and finally predicts Y from M alone. By constructing M solely from the embeddings of A and T, the model cuts off the back‑door influence of C on the mediator, satisfying the first FDA condition. By learning P(Y | do(M)) directly, it satisfies the second FDA condition, thus approximating the true causal effect while remaining computationally tractable.
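The gap the mediator is meant to close can be illustrated with a toy discrete calculation: when a confounder C influences both the mediator M and the outcome Y, the interventional quantity P(Y | do(M)) that CNRE targets differs from the naive conditional P(Y | M). All variable names and probability tables below are invented for illustration; CNRE operates on learned embeddings, not explicit tables.

```python
import numpy as np

# Toy discrete illustration of confounding: C confounds both the mediator M
# and the outcome Y. Numbers are made up for the example.
p_c = np.array([0.6, 0.4])                  # P(C)
p_m_given_c = np.array([[0.8, 0.2],         # P(M | C=0)
                        [0.3, 0.7]])        # P(M | C=1)
p_y1_given_cm = np.array([[0.1, 0.5],       # P(Y=1 | C=0, M=0/1)
                          [0.6, 0.8]])      # P(Y=1 | C=1, M=0/1)

def p_y1_do_m(m):
    """P(Y=1 | do(M=m)): the intervention severs C's influence on M,
    so C keeps its prior distribution."""
    return float(np.sum(p_c * p_y1_given_cm[:, m]))

def p_y1_given_m(m):
    """P(Y=1 | M=m): conditioning reweights C toward values that make
    M=m likely, letting confounding leak into the estimate."""
    post_c = p_c * p_m_given_c[:, m]
    post_c = post_c / post_c.sum()
    return float(np.sum(post_c * p_y1_given_cm[:, m]))
```

Here `p_y1_do_m(1)` evaluates to 0.62 while `p_y1_given_m(1)` evaluates to 0.71: a model trained on observational correlations would overestimate the effect of the mediator, which is exactly the spurious-correlation problem the causal design addresses.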

Architecture Overview

  1. Hierarchical Preference Propagation (HPP) – For each behavior type b∈{view, cart, …} a separate hypergraph H⁽ᵇ⁾ is learned for users and items. A behavior‑aware parallel encoder first captures intra‑behavior heterogeneity, then a cascading structure propagates embeddings from upstream behaviors to downstream ones. An adaptive projection mechanism explicitly suppresses confounding signals that may leak from upstream to downstream embeddings, thereby producing cleaner representations for the subsequent causal reasoning stage.

  2. Causal Neuro‑Symbolic Reasoning (CNSR) – The model assesses the preference strength of a user‑item pair by examining which behaviors are present. Three reasoning paths are dynamically dispatched:

    • Direct processing for strong preferences (full chain) – the mediator M is derived directly from the downstream (target‑behavior) embedding, bypassing symbolic composition.
    • Conjunctive (∧) inference for medium preferences – embeddings of A and T are combined via a differentiable AND operator, modeling a “confirmatory” logic that both conditions must hold.
    • Disjunctive (∨) inference for weak preferences – embeddings are merged with a differentiable OR operator, capturing a “supplementary” logic where either weak signal suffices.
      This adaptive dispatch mirrors human decision making: stronger evidence leads to more decisive reasoning, while weaker evidence triggers more inclusive logic. The output of this stage is the causal mediator M, a deterministic, high‑dimensional vector that approximates the ideal do‑intervention on (A, T).
  3. Prediction Layer – A simple feed‑forward network consumes M and outputs the probability of the target behavior (e.g., purchase). Because the prediction depends only on M, the model’s design guarantees that all information from (A, T) passes through the mediator, thereby isolating the prediction from confounding paths.
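The adaptive dispatch in step 2 can be sketched with one common differentiable realization of the logic operators: the product t‑norm for ∧ and the probabilistic sum for ∨. The behavior flags, sigmoid mapping, and routing rule below are illustrative assumptions, not the paper's exact learned neural-logic modules.

```python
import numpy as np

def sigmoid(x):
    """Map real-valued embedding coordinates to [0, 1] truth activations."""
    return 1.0 / (1.0 + np.exp(-x))

def logic_and(a, b):
    # Product t-norm: a differentiable conjunction on [0, 1] activations.
    return a * b

def logic_or(a, b):
    # Probabilistic sum: the dual co-norm, a differentiable disjunction.
    return a + b - a * b

def dispatch(aux_emb, tgt_emb, has_view, has_cart, has_buy):
    """Route a user-item pair to a reasoning path by preference strength.

    Strong (full chain): use the target-behavior signal directly.
    Medium (view + cart): confirmatory AND over auxiliary and target signals.
    Weak (anything less): inclusive OR, so either weak signal contributes.
    """
    a, t = sigmoid(aux_emb), sigmoid(tgt_emb)
    if has_view and has_cart and has_buy:
        return t, "direct"
    if has_view and has_cart:
        return logic_and(a, t), "conjunction"
    return logic_or(a, t), "disjunction"

# Medium-strength chain (viewed and carted, not bought) -> conjunctive path.
m, path = dispatch(np.array([0.5, -1.0, 2.0]),
                   np.array([1.0, 0.0, -0.5]),
                   has_view=True, has_cart=True, has_buy=False)
```

Both operators are smooth in their inputs, so gradients flow through whichever path is chosen; the product t‑norm is never larger than either operand (a strict, "both must hold" reading of ∧), while the probabilistic sum is never smaller (an inclusive reading of ∨), matching the confirmatory/supplementary semantics described above.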

Explainability is provided at three levels:

  • Model‑design explainability – The FDA graph and HPP architecture are explicitly described, showing how confounders are blocked.
  • Reasoning‑process explainability – For each recommendation, the model logs which logical path (∧, ∨, direct) was chosen and the associated preference strength, offering a transparent view of the internal decision rule.
  • Result‑level explainability – The final recommendation can be expressed as a logical statement (e.g., “User viewed and added to cart, therefore purchase is predicted”), which is directly derived from the endogenous behavior chain rather than from external attributes.
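A result-level explanation of this kind can be rendered mechanically from the behavior chain and the logged reasoning path. The template wording, function name `explain`, and score threshold below are hypothetical illustrations of the idea, not CNRE's actual output format.

```python
def explain(user, item, behaviors, path, score, threshold=0.5):
    """Render a result-level explanation from the endogenous behavior chain.

    `behaviors` is the list of observed behaviors (e.g. ["viewed", "carted"]),
    `path` is the reasoning path the model dispatched to, and `score` is the
    predicted probability of the target behavior.
    """
    chain = " and ".join(behaviors)
    verdict = "recommended" if score >= threshold else "not recommended"
    connective = {"conjunction": "\u2227",   # ∧
                  "disjunction": "\u2228",   # ∨
                  "direct": "direct"}[path]
    return (f"User {user} {chain} item {item}; "
            f"reasoning path: {connective}; score {score:.2f} -> {verdict}.")

msg = explain("u42", "i7", ["viewed", "carted"], "conjunction", 0.81)
```

For the example call this yields a statement of the form "User u42 viewed and carted item i7; reasoning path: ∧; score 0.81 -> recommended", i.e., an explanation grounded entirely in the user's own behavior chain rather than in external attributes.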

Empirical Evaluation – Experiments on three large‑scale datasets (Tmall, RetailRocket, and a third e‑commerce benchmark) compare CNRE against state‑of‑the‑art baselines, including FENCR, KEMB‑Rec, and several contrastive MBR models. CNRE consistently outperforms baselines on HR@10 and NDCG@10 by 5–12 %, demonstrating that incorporating causal reasoning does not sacrifice accuracy. Explainability is assessed via human user studies and automated logical consistency metrics; participants rated CNRE’s explanations as more faithful and easier to understand than those generated by external‑information‑based methods.

Limitations and Future Work – The current formulation assumes a linear, ordered behavior chain. Real‑world platforms often exhibit richer interaction graphs (e.g., view ↔ like ↔ share) where defining a single preference escalation order is non‑trivial. Extending CNRE to handle arbitrary directed acyclic behavior graphs is an open challenge. Moreover, while the mediator M is mathematically interpretable, visualizing it for end‑users or converting it into natural language explanations (potentially via large language models) remains unexplored.

Contribution Summary

  1. Introduces a self‑contained explainability paradigm for multi‑behavior recommendation that eliminates reliance on external side information.
  2. Proposes a causal neuro‑symbolic architecture that maps preference strength to adaptive logical operations, thereby constructing a causal mediator that approximates a front‑door intervention.
  3. Demonstrates superior recommendation performance and multi‑level explainability through extensive experiments and user studies.

In sum, CNRE represents a significant step toward trustworthy recommender systems that are both highly accurate and intrinsically transparent, leveraging the endogenous logic of user behavior chains through principled causal reasoning and neuro‑symbolic computation.

