A Fast Interpretable Fuzzy Tree Learner
Fuzzy rule-based systems are widely used for interpretable decision-making because their rules are expressed in human-readable linguistic terms. However, interpretability requires both sensible linguistic partitions and small rule bases, neither of which is guaranteed by many existing fuzzy rule-mining algorithms. Evolutionary approaches can produce high-quality models but suffer from prohibitive computational costs, while neural-based methods such as ANFIS often lose their linguistic interpretation during training. In this work, we propose an adaptation of classical tree-based splitting algorithms from crisp rules to fuzzy trees, combining the computational efficiency of greedy algorithms with the interpretability advantages of fuzzy logic. This approach yields interpretable linguistic partitions and substantially shorter running times than evolutionary approaches while maintaining competitive predictive performance. Our experiments on tabular classification benchmarks show that our method achieves accuracy comparable to state-of-the-art fuzzy classifiers at significantly lower computational cost, while producing more interpretable rule bases with constrained complexity. Code is available at: https://github.com/Fuminides/fuzzy_greedy_tree_public
💡 Research Summary
This paper presents a novel algorithm for building interpretable fuzzy rule-based classifiers that addresses the critical trade-off between computational efficiency and model interpretability. The authors identify key limitations in existing approaches: evolutionary methods yield high-quality models but are computationally prohibitive, while neural-based methods like ANFIS often sacrifice linguistic interpretability during parameter tuning. Furthermore, classical crisp decision trees like CART, though efficient, produce hard splits that are less intuitive and do not support gradual, human-like reasoning.
The core contribution is the “Greedy Fuzzy Rule Tree Induction” algorithm. This method adapts the efficient, top-down splitting strategy of classical decision trees to the fuzzy domain. Instead of searching for optimal crisp split points, the algorithm operates over pre-defined linguistic partitions (e.g., Low, Medium, High for a feature). At each node, it evaluates all possible fuzzy conditions from unused features, selecting the single condition (e.g., “IF Feature_X is High”) that maximizes the reduction in a fuzzy version of the Gini impurity. Rules are built conjunctively along paths from the root to leaves, ensuring readability. The algorithm incorporates multi-way branching based on linguistic terms and avoids contradictory conditions. Its computational complexity is O(n·m·k·d), representing a significant advantage over evolutionary methods that require thousands of fitness evaluations.
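The greedy split step described above can be sketched in a few lines. The paper does not publish this exact routine, so the snippet below is a minimal illustration under simplifying assumptions: membership degrees to each linguistic term are precomputed per feature, and the names `fuzzy_gini` and `best_fuzzy_split` are hypothetical, not the authors' API. The key idea is that class proportions in the impurity are weighted by membership degrees rather than crisp counts, and features already used on the current path are skipped to avoid contradictory conditions.

```python
import numpy as np

def fuzzy_gini(memberships, labels, classes):
    """Fuzzy Gini impurity: class proportions weighted by membership degrees."""
    total = memberships.sum()
    if total == 0:
        return 0.0
    props = np.array([memberships[labels == c].sum() / total for c in classes])
    return 1.0 - np.sum(props ** 2)

def best_fuzzy_split(X_memberships, labels, used_features):
    """Greedy choice: among unused features, pick the one whose linguistic
    terms give the lowest membership-weighted average fuzzy Gini.

    X_memberships: dict mapping feature index -> array of shape
                   (n_samples, n_terms) of membership degrees.
    """
    classes = np.unique(labels)
    best_feat, best_score = None, np.inf
    for feat, mu in X_memberships.items():
        if feat in used_features:
            continue  # avoid repeated/contradictory conditions along a path
        # membership-weighted average impurity over the feature's terms
        score = sum(
            mu[:, t].sum() * fuzzy_gini(mu[:, t], labels, classes)
            for t in range(mu.shape[1])
        ) / max(mu.sum(), 1e-12)
        if score < best_score:
            best_feat, best_score = feat, score
    return best_feat, best_score
```

In this sketch, a feature whose terms cleanly separate the classes drives the weighted impurity toward zero and is selected; multi-way branching then creates one child per linguistic term.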
To enhance performance without compromising interpretability, the authors also introduce a “Fuzzy Partition Optimization” framework. This lightweight optimization tunes the parameters of trapezoidal membership functions defining the linguistic terms. The goal is to maximize a “separability index,” which measures how well the fuzzy partitions concentrate samples of the same class. A key innovation is an “interleaved encoding” scheme that represents parameters as positive differences from previous values. This ensures by construction that all validity constraints (e.g., monotonic ordering of terms, proper trapezoid shape) are automatically maintained throughout the unconstrained optimization process, preserving linguistic interpretability.
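The interleaved encoding can be illustrated with a short sketch. This is not the authors' code: the function names and the non-overlapping (a, b, c, d) layout per term are simplifying assumptions (real partitions typically share points so neighboring terms overlap). The point being demonstrated is that decoding non-negative increments via a cumulative sum makes every validity constraint (a ≤ b ≤ c ≤ d within a trapezoid, and terms ordered along the axis) hold by construction, so any unconstrained optimizer can search the difference space freely.

```python
import numpy as np

def decode_partition(deltas, x_min=0.0):
    """Decode an interleaved difference encoding into trapezoid parameters.

    `deltas` holds increments; taking absolute values keeps them
    non-negative, and the cumulative sum guarantees monotonic ordering of
    all breakpoints by construction. Here each linguistic term consumes
    four consecutive points (a, b, c, d) -- a simplified layout."""
    points = x_min + np.cumsum(np.abs(deltas))
    return points.reshape(-1, 4)

def trapezoid(x, a, b, c, d):
    """Membership degree of x in a trapezoidal fuzzy set (a, b, c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0  # plateau
    if x < b:
        return (x - a) / (b - a)  # rising edge
    return (d - x) / (d - c)      # falling edge
```

An optimizer maximizing the separability index would then operate directly on `deltas`; every candidate it proposes decodes to a valid, linguistically ordered partition.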
Experimental evaluation on 10 UCI benchmark datasets demonstrates the effectiveness of the proposed method. It achieves classification accuracy comparable to state-of-the-art fuzzy classifiers like FURIA and evolutionary fuzzy classifiers. Crucially, it does so with significantly lower computational cost (runtime). Furthermore, the resulting rule bases are more interpretable due to constrained complexity, evidenced by a smaller average rule size and total number of conditions in the model.
In summary, this work successfully bridges two paradigms: it combines the computational efficiency of greedy tree induction with the interpretability advantages of fuzzy logic. The proposed method offers a practical and efficient alternative to computationally expensive evolutionary approaches for learning interpretable fuzzy rule-based systems, making them more viable for real-world explainable AI applications where both understanding and performance are essential.