On Testing Constraint Programs
The success of several constraint-based modeling languages such as OPL, ZINC, or COMET calls for better software engineering practices, particularly in the testing phase. This paper introduces a testing framework enabling automated test case generation for constraint programming. We propose a general framework of constraint program development which supposes that a first declarative and simple constraint model is available from the problem specification analysis. This model is then refined using classical techniques such as constraint reformulation, surrogate and global constraint addition, or symmetry breaking, to form an improved constraint model that must be thoroughly tested before being used to address real-sized problems. We argue that most faults are introduced in this refinement step and propose a process which takes the first declarative model as an oracle for detecting non-conformities. We derive practical test purposes from this process to automatically generate test data that exhibit non-conformities. We implemented this approach in a new tool called CPTEST that was used to automatically detect non-conformities in two classical benchmark programs, namely the Golomb rulers and the car-sequencing problem.
💡 Research Summary
The paper addresses a gap in software engineering practices for constraint programming (CP) by proposing a systematic testing framework that automatically generates test cases to detect non‑conformities between an initial declarative model and a refined implementation model. The authors observe that modern CP languages such as OPL, ZINC, and COMET enable rapid development of industrial combinatorial solvers, yet the refinement phase—where developers introduce redundant constraints, global constraints, surrogate constraints, and symmetry‑breaking constraints—remains error‑prone. To mitigate this, they treat the first, simple, specification‑driven model as a “Model‑Oracle” that faithfully represents all solutions required by the problem specification.
Four conformity relations are defined to compare the Model‑Oracle (M) with the Constraint Program Under Test (CPUT, denoted P). For satisfaction problems, “conf one” requires that P’s solution set be non‑empty and a subset of M’s solutions, while “conf all” demands exact equality of the two solution sets. For optimization problems, “conf opt min” and “conf opt max” compare the optimal objective values produced by the two models, allowing a bounded interval of acceptable costs. These relations provide a hierarchy of verification depth, from minimal inclusion checks to strict equivalence.
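The hierarchy of conformity relations can be sketched in plain Python, assuming both models' solution sets can be enumerated as sets of assignments. The function names below are ours for illustration, not CPTEST's actual API, and the optimization variant shows only the minimization case:

```python
def conf_one(sols_P, sols_M):
    """'conf one': the CPUT P must have at least one solution, and every
    solution of P must also be a solution of the Model-Oracle M
    (non-empty subset check)."""
    return len(sols_P) > 0 and sols_P <= sols_M

def conf_all(sols_P, sols_M):
    """'conf all': strict equivalence of the two solution sets."""
    return sols_P == sols_M

def conf_opt_min(best_P, best_M, tolerance=0):
    """Optimization variant (minimization): P's optimal cost must fall
    within a bounded interval above M's optimal cost."""
    return best_M <= best_P <= best_M + tolerance
```

On a toy example where P keeps only one of M's two solutions, `conf_one` holds while `conf_all` fails, which is exactly the situation created by deliberate symmetry breaking.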
The testing process derives test purposes from the chosen conformity relation. By negating or reformulating constraints of the Model‑Oracle, the framework constructs a search problem whose solutions correspond to assignments that satisfy P but violate at least one Oracle constraint. This search is encoded as a SAT/SMT instance and solved automatically; any found assignment constitutes a concrete counter‑example, i.e., a non‑conformity.
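The test-purpose search can be illustrated with a minimal brute-force sketch (assumptions: constraints are modeled as Python predicates over an assignment dictionary, and domains are small enough to enumerate; a real tool would hand the disjunction of negated Oracle constraints to a solver instead of looping):

```python
from itertools import product

def find_nonconformity(variables, domain, cons_P, cons_M):
    """Search for an assignment that satisfies every constraint of the
    CPUT P but violates at least one constraint of the Model-Oracle M.
    Any such assignment is a concrete counter-example."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in cons_P) and \
           any(not c(assignment) for c in cons_M):
            return assignment  # non-conformity witness
    return None  # no non-conformity in this (finite) search space

# Toy fault: the refined model relaxes a strict inequality of the Oracle.
oracle = [lambda a: a["x"] < a["y"]]    # Model-Oracle M
refined = [lambda a: a["x"] <= a["y"]]  # faulty CPUT P
witness = find_nonconformity(["x", "y"], range(3), refined, oracle)
```

Here `witness` is an assignment with `x == y`: it is accepted by the refined model but rejected by the Oracle, so the refinement has introduced spurious solutions.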
A prototype tool, CPTEST, implements the methodology for OPL programs. CPTEST parses the OPL source, extracts both the Oracle model and the refined model, generates the appropriate constraint negations, and invokes a solver to search for violating assignments. The authors evaluate CPTEST on two classic benchmarks: Golomb rulers and the car‑sequencing problem. In the Golomb ruler case with order m = 8, CPTEST discovers a solution where distances are not all distinct (e.g., 27‑26 = 1‑0), pinpointing a faulty redundant constraint (cc5) in the refined model. After removing this constraint, CPTEST confirms that the refined model now conforms to the Oracle for that instance, achieving the global optimum in a few hours. Similar non‑conformities are detected in the car‑sequencing benchmark, demonstrating the tool’s ability to uncover subtle errors introduced during model refinement.
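The Golomb ruler non-conformity is easy to reproduce as an Oracle check: a ruler is valid only if all pairwise distances between marks are distinct. The sketch below is ours (the paper's models are in OPL), and the failing marks are an illustrative set consistent with the reported duplicate distances (27 - 26 = 1 - 0), not necessarily the exact witness CPTEST produced:

```python
def is_golomb_ruler(marks):
    """Oracle property: every pairwise distance between marks is unique."""
    dists = [b - a for i, a in enumerate(marks) for b in marks[i + 1:]]
    return len(dists) == len(set(dists))

# An assignment like the reported counter-example: the distance 1 occurs
# twice (1 - 0 and 27 - 26), so the Oracle rejects it even though the
# faulty refined model accepted it.
faulty_witness = [0, 1, 26, 27]
```

A valid ruler such as `[0, 1, 4, 9, 11]` passes the same check, which is how CPTEST distinguishes genuine solutions from artifacts of the faulty redundant constraint.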
The experimental results illustrate that even well‑intentioned refinements—global constraints, symmetry‑breaking, surrogate constraints—can unintentionally eliminate valid solutions or introduce infeasible ones. The authors argue that the strict “conf all” relation is often impractical for refined models that deliberately prune symmetric solutions, recommending “conf one” or the optimization‑specific relations for realistic testing.
In conclusion, the paper contributes a formal testing framework, a set of conformity relations tailored to CP, and an automated tool that can quickly expose logical faults in refined constraint models. This work bridges the gap between CP model development and software testing, offering a practical means to increase confidence in CP applications before they are deployed on large‑scale instances. Future work includes extending conformity notions to solver verification, handling more complex objective functions, and adapting the approach to other CP platforms and languages.