Rule-based Test Generation with Mind Maps
This paper introduces the basic concepts of rule-based test generation with mind maps, and reports lessons learned from industrial application of this technique in the domain of smart-card testing at Giesecke & Devrient GmbH over recent years. It describes the formalization of test selection criteria used by our test generator, our test generation architecture, and our test generation framework.
💡 Research Summary
The paper presents a rule‑based test generation approach that leverages mind maps to model test strategies, and it reports on several years of industrial experience applying this technique to smart‑card testing at Giesecke & Devrient. The authors begin by critiquing existing model‑based testing tools, which typically place the system‑under‑test (SUT) model at the centre of test generation and embed test‑strategy logic inside the tool itself. This makes the strategies hard to modify, limits reuse across projects, and often forces the SUT model to contain test‑specific artefacts.
To overcome these limitations, the authors propose a three‑layer modelling architecture: (1) a test‑strategy model that defines the space of possible test cases, (2) a SUT model that simulates the behaviour and data of the target system, and (3) a test‑goal model that expresses coverage criteria and selection rules. The test‑strategy model is expressed as a collection of business rules that operate on test‑case properties – key‑value pairs describing every aspect of a test case (e.g., input parameters, expected results, test name, coverage tags).
Rules are of two kinds: iteration rules and default rules. An iteration rule follows a WHEN‑IF‑THEN pattern; when the WHEN condition (which may reference any number of already‑assigned properties) becomes true, the rule iterates over a list of values for its target property. The list can be shuffled to produce randomised combinations, enabling stochastic exploration of the test‑case space. Default rules have only an optional IF condition and a THEN action that assigns a single value to a property when that value is requested but not yet defined (backward chaining).
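The two rule kinds described above can be illustrated with a minimal Python sketch. All class names, signatures, and value lists here are hypothetical illustrations of the paper's concepts, not the authors' implementation:

```python
import random

class IterationRule:
    """WHEN-IF-THEN rule: once the WHEN condition holds on the already-assigned
    properties, iterate the target property over a list of values."""
    def __init__(self, target, values, when=lambda tc: True, shuffle=False):
        self.target, self.values = target, list(values)
        self.when, self.shuffle = when, shuffle

    def iterate(self, test_case):
        values = self.values[:]
        if self.shuffle:
            random.shuffle(values)  # randomised combinations for stochastic exploration
        for v in values:
            if self.when(test_case):
                yield {**test_case, self.target: v}

class DefaultRule:
    """Assigns a single value when the target property is requested but not
    yet defined (backward chaining); the IF condition is optional."""
    def __init__(self, target, value, condition=lambda tc: True):
        self.target, self.value, self.condition = target, value, condition

    def apply(self, test_case):
        if self.target not in test_case and self.condition(test_case):
            test_case[self.target] = self.value
        return test_case
```

An iteration rule fans a partial test case out into several variants, while a default rule fills a single gap on demand and never overwrites a property that is already bound.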
Rules are organised into rule stacks: rules that share the same WHEN-part and target property are executed in reverse order of definition, allowing later rules to override earlier ones. This mechanism supports strategy variations, such as product-specific extensions of a common base strategy. The rule engine processes the rule set by starting with all iteration rules that have an empty WHEN part, then recursively applying dependent rules as properties become bound. It tracks dependencies so that a property is only advanced to its next value after all dependent properties have exhausted their own value lists. The engine terminates when all iterations are completed.
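The override behaviour of rule stacks can be sketched as follows; the dictionary-based rule representation and the key names are illustrative assumptions, not the paper's data model:

```python
def resolve_stack(rules):
    """Keep only the most recently defined rule per (WHEN-part, target property).
    Because later dict assignments win, later rule definitions override earlier
    ones, mirroring the reverse-order-of-definition execution of a rule stack."""
    stack = {}
    for rule in rules:  # rules in definition order
        stack[(rule["when_key"], rule["target"])] = rule
    return list(stack.values())

# A product-specific strategy extends a common base strategy by redefining a rule:
base    = {"when_key": "", "target": "tariff", "values": ["standard"]}
variant = {"when_key": "", "target": "tariff", "values": ["standard", "premium"]}
```

Here `resolve_stack([base, variant])` yields only `variant`, so the product-specific rule shadows the base rule without the base strategy having to be edited.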
A distinctive contribution is the use of mind maps as a visual representation of the rule set. Nodes correspond to properties, and edges encode WHEN and IF relationships. This visualisation makes strategy development, review, and modification more intuitive, and it benefits from mind‑map tooling features such as automatic layout, context‑sensitive formatting, and search/filter capabilities.
Because the raw combinatorial explosion of input parameters is often infeasible to exhaust, the authors introduce test goals to filter generated cases. Test goals are divided into finite goals (with a predefined checklist, e.g., specific input combinations, code‑path coverage, or expected output values) and infinite goals (which consist only of a goal function without a checklist). A test case is considered “important” for a finite goal when it is the first to satisfy an unchecked checklist item; for infinite goals, importance is determined solely by the goal function’s return value. This dual‑goal system enables both precise coverage measurement and dynamic discovery of new coverage opportunities.
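The finite/infinite goal distinction can be captured in a small sketch. The interfaces below are assumptions for illustration (checklist items modelled as predicates over a test case), not the paper's concrete API:

```python
class FiniteGoal:
    """Goal with a predefined checklist. A test case is 'important' when it is
    the first to satisfy a still-unchecked checklist item."""
    def __init__(self, checklist):
        self.unchecked = set(checklist)  # predicates, one per checklist item

    def is_important(self, test_case):
        hits = {item for item in self.unchecked if item(test_case)}
        self.unchecked -= hits           # tick off newly covered items
        return bool(hits)

class InfiniteGoal:
    """Goal without a checklist: importance is decided solely by the goal
    function's return value for each candidate test case."""
    def __init__(self, goal_fn):
        self.goal_fn = goal_fn

    def is_important(self, test_case):
        return bool(self.goal_fn(test_case))
```

A generator can then keep only the cases that at least one goal deems important, which is how goal-driven selection keeps the suite manageable despite combinatorial input spaces.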
The overall generation framework consists of the three models (strategy, SUT, goals), the rule engine, and two auxiliary components: solvers, which compute concrete values for abstract parameters (e.g., generating a valid phone number for a given country), and writers, which translate the fully instantiated test case into executable test scripts in the target language or test harness. The modular design allows each component to be swapped or reused across projects.
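The solver and writer roles can be illustrated with hypothetical functions; the prefix table and the generated script syntax are invented for this sketch and do not come from the paper:

```python
def phone_number_solver(test_case):
    """Solver: compute a concrete value (a dialable number) for the abstract
    'country' property. Prefix data is illustrative only."""
    prefixes = {"DE": "+49", "FR": "+33"}
    prefix = prefixes.get(test_case.get("country"), "+00")
    return {**test_case, "phoneNumber": prefix + "3012345678"}

def script_writer(test_case):
    """Writer: translate a fully instantiated test case into a line of an
    executable test script in some hypothetical target harness syntax."""
    return f"call({test_case['phoneNumber']}, duration={test_case['callDuration']})"
```

Because solvers and writers sit behind narrow interfaces like these, a project can swap in a different target language or value generator without touching the strategy or SUT models.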
The paper illustrates the approach with a concrete example: a simple SUT that calculates telephone call costs based on destination, tariff, time of day, and call duration. The authors define properties such as isCallValid, destination, country, and callDuration, then construct iteration and default rules that capture valid/invalid combinations, tariff selection, and duration rounding. They show how the same rule set can be encoded directly in code or visually as a mind map.
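The flavour of that example can be sketched as nested iteration over the paper's properties. The property names (`isCallValid`, `destination`, `callDuration`) come from the paper; the concrete value lists and the rounding formula are illustrative assumptions:

```python
def generate_cases():
    """Enumerate test cases for the call-cost example: valid calls iterate
    over real destinations, invalid calls use an unknown destination, and a
    default-rule-style computation rounds the duration up to started minutes."""
    for is_valid in [True, False]:
        destinations = ["national", "international"] if is_valid else ["unknown"]
        for destination in destinations:
            for duration in [30, 61]:  # seconds: below and just above one minute
                yield {
                    "isCallValid": is_valid,
                    "destination": destination,
                    "callDuration": duration,
                    "billedMinutes": -(-duration // 60),  # ceiling division
                }
```

Each nesting level plays the role of one iteration rule, and the inline `billedMinutes` computation stands in for a default rule that derives a property from already-bound ones.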
In the industrial case study, the authors applied the framework to a large smart‑card project that previously relied on thousands of manually written test scripts. By refactoring the test logic into rule‑based strategies and reusing a single SUT model, they achieved a dramatic reduction in maintenance effort, faster test generation, and improved test independence (each generated test can be run in isolation). Test‑goal analysis revealed gaps in coverage that were automatically filled by extending the rule set, demonstrating the approach’s ability to both generate and validate test suites.
In conclusion, the paper contributes a practical, extensible methodology for rule‑based test generation that separates strategy, system, and coverage concerns, uses mind maps for intuitive strategy authoring, and integrates a goal‑driven selection mechanism to keep test suites manageable. The reported industrial experience validates the approach’s scalability, maintainability, and effectiveness in a high‑risk domain where test quality directly impacts product safety and cost.