Mechanized Undecidability of Higher-order beta-Matching (Extended Version)
Higher-order beta-matching is the following decision problem: given two simply typed lambda-terms, can the first term be instantiated to be beta-equivalent to the second term? This problem was formulated by Huet in the 1970s and shown undecidable by Loader in 2003 by reduction from lambda-definability. The present work provides a novel undecidability proof for higher-order beta-matching, in an effort to verify this result by means of a proof assistant. Rather than starting from lambda-definability, the presented proof encodes a restricted form of string rewriting as higher-order beta-matching. The particular approach is similar to Urzyczyn’s undecidability result for intersection type inhabitation. The presented approach has several advantages. First, the proof is simpler to verify in full detail due to the simple form of rewriting systems, which serve as a starting point. Second, undecidability of the considered problem in string rewriting is already certified using the Coq proof assistant. As a consequence, we obtain a certified many-one reduction from the Halting Problem to higher-order beta-matching. Third, the presented approach identifies a uniform construction which shows undecidability of higher-order beta-matching, lambda-definability, and intersection type inhabitation. The presented undecidability proof is mechanized in the Coq proof assistant and contributed to the existing Coq Library of Undecidability Proofs.
💡 Research Summary
The paper revisits the decision problem of higher‑order β‑matching in the simply‑typed λ‑calculus: given two closed λ‑terms F : σ → τ and N : τ, does there exist a term M : σ such that F M is β‑equivalent to N? While Huet showed undecidability of higher‑order β‑unification in the 1970s, Loader later proved that the one‑sided variant (β‑matching) is also undecidable by reducing from the λ‑definability problem. Loader’s construction, however, relies on a sophisticated encoding of arbitrary finite functions and intricate constraints, making a formal verification in a proof assistant extremely cumbersome.
The present work offers a completely different reduction that is both conceptually simpler and fully mechanized in the Coq proof assistant. The authors start not from λ‑definability but from a well‑studied string‑rewriting problem on simple semi‑Thue systems. A simple semi‑Thue system consists of rewrite rules of the form ab ⇒ cd over a finite alphabet A. The specific decision problem, denoted 0⁺ ⇒* 1⁺, asks whether there exists a non‑empty word of 0’s that can be rewritten (by zero or more applications of the rules) into a non‑empty word of 1’s. This problem is known to be undecidable; a reduction from the Turing‑machine halting problem is already formalised in the Coq Library of Undecidability Proofs.
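To make the source problem concrete, the rewriting relation of a simple semi‑Thue system is easy to implement directly. The following sketch (rule set and function name are illustrative, not taken from the paper's Coq development) searches for a derivation between two fixed words. Since every rule ab ⇒ cd preserves word length, only finitely many words are reachable from a fixed starting word, so the search below terminates; the undecidable question 0⁺ ⇒* 1⁺ additionally quantifies over the length of the initial word of 0's.

```python
from collections import deque

def derivable(rules, source, target):
    """Can `source` rewrite to `target` using rules of the form ab => cd?
    Rules preserve word length, so the set of words reachable from a
    fixed `source` is finite and breadth-first search is complete."""
    seen, queue = {source}, deque([source])
    while queue:
        word = queue.popleft()
        if word == target:
            return True
        for lhs, rhs in rules:
            # Try to apply the rule at every position in the word.
            for i in range(len(word) - 1):
                if word[i:i + 2] == lhs:
                    new = word[:i] + rhs + word[i + 2:]
                    if new not in seen:
                        seen.add(new)
                        queue.append(new)
    return False

# Illustrative system: 00 => 01 and 01 => 11 rewrite 00 to 11 in two steps.
assert derivable([("00", "01"), ("01", "11")], "00", "11")
```

Deciding 0⁺ ⇒* 1⁺ would require running such a search for every length n simultaneously, which is exactly where undecidability enters.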
The core contribution is a many‑one reduction from 0⁺ ⇒* 1⁺ to higher‑order β‑matching. The reduction proceeds in several stages:
- Encoding of Words and Rewrites – The authors introduce a family of simply‑typed λ‑terms that represent words over the alphabet and the effect of a rewrite rule. Each rule ab ⇒ cd is turned into a higher‑order function G_{ab→cd} that, when applied to a term encoding a word, yields the term encoding the rewritten word. A “starter” term F₀ encodes a word of 0’s of arbitrary length, while a “target” term N encodes a word consisting solely of 1’s.
- Constraining the Shape of Solutions – To prevent spurious solutions that ignore the intended functional behaviour, the paper adopts a technique similar to Example 2.11: a free variable and a strict typing discipline are used to force any solution M to use its arguments exactly as prescribed. Concretely, any admissible M must have type (κ→κ)→κ→κ and satisfy M I u =β u for a fresh variable u, which rules out solutions that introduce extra λ‑abstractions or that return a constant term.
- Correctness of the Reduction – The authors prove two directions. If there exists a rewrite sequence 0ⁿ ⇒* 1ⁿ, then by composing the corresponding G‑functions they obtain a term M that solves the β‑matching instance F M =β N. Conversely, any solution M of the restricted shape must correspond to a sequence of rule applications, thereby yielding a rewrite from some 0ⁿ to 1ⁿ. Both directions are formalised in Coq, ensuring that every inference step (β‑reduction, typing, and the construction of the composed function) is mechanically checked.
- Integration into Coq – The reduction, together with the already‑certified undecidability of 0⁺ ⇒* 1⁺, is added as a new module to the Coq Library of Undecidability Proofs. The implementation consists of (i) a definition of simple semi‑Thue systems and their rewriting relation, (ii) an encoder that maps rules and words to λ‑terms, (iii) a type‑based filter that enforces the shape of admissible solutions, and (iv) the many‑one reduction theorem with its full proof script.
- Uniformity Across Related Problems – By analysing the structure of the encoding, the authors observe that the same construction simultaneously yields undecidability results for intersection‑type inhabitation (the original motivation of Urzyczyn) and for λ‑definability. In each case, the problem can be expressed as “does there exist a term of a given simple type that realises a prescribed finite function?” The uniform construction therefore shows that higher‑order β‑matching, intersection‑type inhabitation, and λ‑definability share a common core of undecidability.
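The shape constraint above (M of type (κ→κ)→κ→κ with M I u =β u for fresh u) can be checked mechanically once β‑normalization is available. The following is a minimal Python sketch using de Bruijn indices; the classes Var/Lam/App and the normalizer are illustrative assumptions, not the paper's Coq definitions.

```python
from dataclasses import dataclass

# Untyped λ-terms in de Bruijn notation (illustrative representation).
@dataclass(frozen=True)
class Var:
    i: int  # de Bruijn index; indices beyond the enclosing λs are free

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def shift(t, d, cutoff=0):
    """Add d to every free index (>= cutoff) in t."""
    if isinstance(t, Var):
        return Var(t.i + d) if t.i >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t, lowering the remaining free indices."""
    if isinstance(t, Var):
        if t.i == j:
            return s
        return Var(t.i - 1) if t.i > j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fun, j, s), subst(t.arg, j, s))

def normalize(t):
    """Full β-normalization; terminates on simply typed terms."""
    if isinstance(t, Lam):
        return Lam(normalize(t.body))
    if isinstance(t, App):
        f = normalize(t.fun)
        if isinstance(f, Lam):
            return normalize(subst(f.body, 0, t.arg))
        return App(f, normalize(t.arg))
    return t

I = Lam(Var(0))                    # identity, the I in "M I u =β u"
u = Var(0)                         # a fresh free variable
M = Lam(Lam(App(Var(1), Var(0))))  # λf.λx. f x : (κ→κ)→κ→κ, passes the filter
K = Lam(Lam(Var(1)))               # λx.λy. x, wrong shape, fails the filter
assert normalize(App(App(M, I), u)) == u
assert normalize(App(App(K, I), u)) != u
```

The frozen dataclasses give structural equality for free, so β‑equivalence of normal forms is plain `==`; the paper's Coq development of course proves such equations rather than testing them.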
The paper also discusses limitations. The technique relies on the absence of η‑reduction; with η‑conversion the shape‑restriction argument breaks down, as illustrated by a concrete counter‑example. Moreover, the reduction works for the simply‑typed λ‑calculus; extending it to richer type systems (e.g., System F) or to βη‑matching would require additional ideas.
In summary, the authors provide a novel, simpler, and fully mechanised proof that higher‑order β‑matching is undecidable. By reducing from a string‑rewriting problem that is already certified in Coq, they avoid the intricate function‑encoding machinery of Loader’s original proof. The work not only strengthens confidence in the undecidability result through machine‑checked verification but also highlights a deep connection between three classic undecidable problems in λ‑calculus and type theory. This contribution is valuable both for researchers interested in the foundations of higher‑order rewriting and for the community developing certified mathematics and programming language metatheory.