Demonstrators for Industrial Cyber-Physical System Research: A Requirements Hierarchy Driven by Software-Intensive Design

One of the challenges apparent in the organisation of research projects is the uncertainty around the subject of demonstrators. A precise and detailed elicitation of the coverage of project demonstrators is often an afterthought and not sufficiently detailed during proposal writing. This practice leads to continuous confusion and a mismatch between targeted and achievable demonstration of results, hindering progress. The reliance on the TRL scale as a loose descriptor does not help either. We propose a demonstrator requirements elaboration framework aiming to evaluate the feasibility of targeted demonstrations, make realistic adjustments, and assist in describing requirements. In doing so, we define five hierarchical levels of demonstration, clearly connected to expectations, e.g., work package interaction, and also connected to the project’s industrial use-cases. The application scope considered in this paper is the domain of software-intensive systems and industrial cyber-physical systems. A complete validation is not feasible, as it would require applying our framework at the start of a project and observing the results at the end, taking 4-5 years. Nonetheless, we have applied it to two research projects from our portfolio, one at the early and another at the final stages, revealing its effectiveness.


💡 Research Summary

The paper addresses a pervasive problem in industrial cyber‑physical system (CPS) research projects: the lack of a clear, shared definition of what constitutes a “demonstrator.” In many EU‑funded or public‑private partnership projects, demonstrators are listed as deliverables, yet their scope, maturity, and integration requirements are often left vague. This ambiguity leads to misaligned expectations among academic partners, industrial collaborators, and funding agencies, causing delays, rework, or even failure to deliver a usable prototype. The authors argue that the traditional Technology Readiness Level (TRL) scale, originally devised for hardware‑centric aerospace technologies, is insufficient for software‑intensive systems (SIS) because it does not capture the iterative, component‑based nature of modern CPS development, nor does it reflect the diverse stakeholder perspectives on demonstration outcomes.

To remedy these shortcomings, the authors propose a structured, hierarchical demonstrator requirements framework specifically tailored to software‑intensive industrial CPS. The framework builds on an adapted TRL model that re‑interprets each of the nine original levels with software‑centric terminology (e.g., “algorithm identified,” “software concept formulated,” “simulation‑based prototype”). On top of this adapted scale, they introduce five concrete demonstration levels (Level 1 to Level 5) that map directly to typical research project activities:

  • Level 1 – Conceptual Proof: Basic principles or algorithms are identified and described; minimal functional or non‑functional requirements.
  • Level 2 – Software Concept Definition: The intended software architecture, use‑case, and interface specifications are documented.
  • Level 3 – Simulation/Analytical Prototype: Algorithms are implemented in a simulated or analytical environment; performance is evaluated against defined metrics.
  • Level 4 – Laboratory‑Scale Integration: Individual software components are integrated and tested under controlled laboratory conditions, including data‑flow and temporal synchronization checks.
  • Level 5 – Near‑Production Integrated Demonstration: The full system is exercised in an environment that closely resembles the target industrial setting, satisfying both functional and extra‑functional requirements such as real‑time performance, security, scalability, and robustness.

Each level is defined in terms of (a) functional requirements, (b) extra‑functional (non‑functional) requirements, (c) work‑package (WP) dependencies (data and temporal), and (d) stakeholder audience (industry, academia, funding bodies). By explicitly linking demonstration levels to WP interactions, the framework enables early detection of mismatches between what a WP can deliver and what the overall demonstrator is expected to achieve.
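The four dimensions attached to each level can be sketched as a simple data structure. This is an illustrative model only, not code from the paper; all field and instance names are assumptions chosen to mirror the terminology above.

```python
from dataclasses import dataclass, field

@dataclass
class DemonstrationLevel:
    """One demonstration level, captured along the four dimensions
    the framework defines: (a) functional requirements,
    (b) extra-functional requirements, (c) WP dependencies,
    (d) stakeholder audience."""
    level: int                  # 1 (Conceptual Proof) .. 5 (Near-Production)
    name: str
    functional_reqs: list[str] = field(default_factory=list)
    extra_functional_reqs: list[str] = field(default_factory=list)
    wp_dependencies: list[str] = field(default_factory=list)  # data/temporal deps
    audience: list[str] = field(default_factory=list)

# Hypothetical Level 4 (Laboratory-Scale Integration) demonstrator
level4 = DemonstrationLevel(
    level=4,
    name="Laboratory-Scale Integration",
    functional_reqs=["integrate monitoring and diagnosis components"],
    extra_functional_reqs=["data-flow correctness", "temporal synchronization"],
    wp_dependencies=["WP2 -> WP4 (sensor data)", "WP3 -> WP4 (models)"],
    audience=["industrial partner", "funding agency"],
)
```

Making these dimensions explicit per demonstrator is what allows the gap analysis in the methodology below to be carried out mechanically rather than by intuition.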

The application methodology consists of four iterative steps:

  1. Define Target Demonstration Level: At project inception, stakeholders agree on the desired level(s) for each demonstrator, documenting the intended audience and success criteria.
  2. Map WP Outputs and Dependencies: For every WP, list deliverables, input/output artifacts, and data/temporal dependencies on other WPs.
  3. Identify Gaps and Risks: Compare the mapped capabilities against the target level; highlight missing interfaces, insufficient data fidelity, or unrealistic performance expectations.
  4. Adjust Scope or Level: Based on the gap analysis, either refine the demonstrator scope (e.g., reduce functional breadth) or raise the target level by allocating additional resources, revising schedules, or redefining integration strategies.
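Step 3, the gap analysis, amounts to a set difference between the target level's requirements and what the mapped WPs jointly cover. A minimal sketch, assuming hypothetical requirement labels and WP names (none of which come from the paper):

```python
def gap_analysis(target_reqs: set[str],
                 wp_capabilities: dict[str, set[str]]) -> set[str]:
    """Return the target-level requirements that no work package covers."""
    covered = set().union(*wp_capabilities.values()) if wp_capabilities else set()
    return target_reqs - covered

# Hypothetical inputs for a Level 3 (simulation prototype) target
target = {"algorithm implemented", "simulation environment", "defined data format"}
capabilities = {
    "WP2": {"algorithm implemented"},
    "WP3": {"simulation environment"},
}

gaps = gap_analysis(target, capabilities)
# An uncovered requirement feeds step 4: refine the scope or add a task.
```

Here `gaps` contains `"defined data format"`, flagging exactly the kind of missing interface the first case study below uncovered.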

The authors validate the framework through two case studies drawn from their own research portfolio. The first case involves an early‑stage project that initially aimed for Level 3 demonstration. During the gap analysis, the authors discovered that critical data formats required by the diagnostic algorithm WP were undefined, and temporal alignment between monitoring and knowledge‑representation WPs was missing. Consequently, the project escalated to Level 4, adding a data‑exchange specification task and a hardware‑in‑the‑loop (HIL) testbed to ensure realistic timing. This proactive adjustment prevented later integration bottlenecks and kept the project on schedule.

The second case study examines a near‑completion project that targeted a full Level 5 integrated demonstrator. While functional integration was achievable, non‑functional requirements—particularly real‑time latency and security certification—proved infeasible within the remaining budget and timeline. The team therefore scoped the demonstrator to Level 4 for the final delivery, focusing on functional validation in a simulated industrial environment while postponing the full production‑grade performance tests to a follow‑up project. This decision satisfied the industrial partner’s need for a tangible prototype and the funding agency’s requirement for demonstrable impact, without incurring costly overruns.

Key insights derived from the analysis include:

  • Software‑Specific Maturity Metrics: By decoupling demonstration levels from the generic TRL, the framework captures the iterative nature of software development and provides a more granular view of readiness.
  • Stakeholder‑Centric Design: Explicitly documenting the intended audience for each demonstrator clarifies trade‑offs (e.g., operational realism vs. experimental flexibility) and aligns expectations early.
  • WP Dependency Transparency: Modeling data and temporal dependencies uncovers hidden integration risks that would otherwise surface only during late‑stage testing.
  • Iterative Scope Management: The four‑step process supports agile‑style adjustments, allowing projects to adapt demonstrator ambitions as technical knowledge evolves.
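The WP-dependency-transparency insight can be made concrete: once dependencies are modeled explicitly, a dependency on something no project WP produces is detectable at planning time instead of during late-stage integration. A sketch under assumed inputs (the WP names and dependency map are illustrative, not taken from the paper):

```python
def missing_producers(deps: dict[str, set[str]],
                      project_wps: set[str]) -> dict[str, set[str]]:
    """For each WP, report declared dependencies that no WP in the
    project produces -- a hidden integration risk."""
    return {wp: needed - project_wps
            for wp, needed in deps.items()
            if needed - project_wps}

deps = {"WP4": {"WP2", "WP5"}, "WP3": {"WP2"}}
project_wps = {"WP1", "WP2", "WP3", "WP4"}

risks = missing_producers(deps, project_wps)
# WP4 declares a dependency on WP5, which is not in the project plan.
```

The same map could be extended with temporal annotations to catch the kind of timing mismatch the first case study resolved with a HIL testbed.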

In conclusion, the proposed hierarchical demonstrator requirements framework offers a practical, reusable tool for researchers and project managers handling software‑intensive industrial CPS projects. It bridges the gap between high‑level project proposals and concrete, testable deliverables, reduces miscommunication among diverse stakeholders, and improves the likelihood of delivering demonstrators that are both scientifically valuable and industrially relevant. Future work suggested by the authors includes extending the framework to other domains (e.g., smart cities, healthcare IoT) and integrating automated requirement‑traceability tools to further streamline the alignment of WP outputs with demonstrator goals.

