End-to-end Differentiable Calibration and Reconstruction for Optical Particle Detectors
Large-scale homogeneous detectors with optical readouts are widely used in particle detection, with Cherenkov and scintillator neutrino detectors as prominent examples. Analyses in experimental physics rely on high-fidelity simulators to translate sensor-level information into physical quantities of interest. This task critically depends on accurate calibration, which aligns simulation behavior with real detector data, and on tracking, which infers particle properties from optical signals. We present the first end-to-end differentiable optical particle detector simulator, enabling simultaneous calibration and reconstruction through gradient-based optimization. Our approach unifies simulation, calibration, and tracking, which are traditionally treated as separate problems, within a single differentiable framework. We demonstrate that it achieves smooth and physically meaningful gradients across all key stages of light generation, propagation, and detection while maintaining computational efficiency. We show that gradient-based calibration and reconstruction greatly simplify existing analysis pipelines while matching or surpassing the performance of conventional non-differentiable methods in both accuracy and speed. Moreover, the framework’s modularity allows straightforward adaptation to diverse detector geometries and target materials, providing a flexible foundation for experiment design and optimization. The results demonstrate the readiness of this technique for adoption in current and future optical detector experiments, establishing a new paradigm for simulation and reconstruction in particle physics.
💡 Research Summary
The paper introduces LUCiD, the first fully differentiable end‑to‑end simulator for large homogeneous optical particle detectors such as water‑Cherenkov and scintillator neutrino experiments. Built on JAX, LUCiD models detector geometry, photon generation, propagation, and sensor response with continuous, analytically differentiable primitives and, where necessary, surrogate neural networks. Instead of traditional Monte‑Carlo photon sampling, the framework uses a ray‑based representation of expected photon contributions; each ray carries an intensity that approximates the ensemble of many photons. This approach dramatically reduces variance, allowing accurate loss estimation with far fewer rays and enabling efficient gradient computation.
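The ray-based idea can be illustrated with a minimal sketch (this is not LUCiD's actual API; the function and parameter names are invented for illustration). A ray carries the expected photon count of a whole ensemble, so survival under absorption becomes a smooth attenuation factor rather than a per-photon coin flip, and the result stays differentiable:

```python
# Illustrative sketch (not the paper's actual API): a single ray carries an
# expected-photon intensity instead of discrete Monte-Carlo photons.
import jax
import jax.numpy as jnp

def expected_hits(intensity, path_length, absorption_length):
    """Expected photons surviving propagation along one ray.

    With Monte-Carlo sampling, each photon would survive with probability
    exp(-L / lambda_abs); carrying the expectation on the ray removes the
    sampling variance and keeps the output differentiable.
    """
    survival = jnp.exp(-path_length / absorption_length)
    return intensity * survival

# Gradient of the expected signal w.r.t. the absorption length (a typical
# calibration parameter) is exact and smooth:
grad_fn = jax.grad(expected_hits, argnums=2)
print(expected_hits(1000.0, 20.0, 80.0))  # ~778.8 expected surviving photons
print(grad_fn(1000.0, 20.0, 80.0))        # d(signal)/d(absorption length)
```

Because the ray's intensity already encodes the photon ensemble's expectation, far fewer rays are needed than Monte-Carlo photons for the same loss accuracy, which is the variance reduction the summary describes.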
Automatic differentiation (reverse‑mode AD) provides exact gradients of the detector‑level output with respect to any simulation or physics parameter, regardless of dimensionality. Consequently, calibration (alignment of simulation parameters such as refractive index, absorption length, sensor quantum efficiency, and electronic noise) and reconstruction (inference of particle track position, direction, and energy) can be performed simultaneously through gradient‑based optimization. The authors demonstrate that this joint optimization captures correlations that conventional sequential pipelines ignore, leading to faster convergence and reduced systematic bias.
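In JAX terms, joint calibration and reconstruction amounts to putting both parameter groups in one pytree and differentiating a single loss through the forward model. The sketch below is a hedged toy example, not LUCiD's code: the forward model, parameter names, and sensor values are all invented for illustration, but the mechanism (one reverse-mode pass yielding gradients for every parameter at once) is the one the summary describes:

```python
# Hedged sketch of joint gradient-based calibration + reconstruction
# (parameter names and the forward model are illustrative, not LUCiD's API).
import jax
import jax.numpy as jnp

def forward_model(params, sensor_positions):
    """Toy differentiable detector response: expected charge per sensor."""
    d = jnp.linalg.norm(sensor_positions - params["track_vertex"], axis=1)
    attenuation = jnp.exp(-d / params["absorption_length"])
    return params["light_yield"] * attenuation / (d ** 2 + 1.0)

def loss(params, sensor_positions, observed_charge):
    predicted = forward_model(params, sensor_positions)
    return jnp.mean((predicted - observed_charge) ** 2)

params = {
    "absorption_length": jnp.array(50.0),   # calibration parameter
    "light_yield": jnp.array(100.0),        # calibration parameter
    "track_vertex": jnp.zeros(3),           # reconstruction parameter
}
sensors = jnp.array([[10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [0.0, 0.0, 8.0]])
observed = jnp.array([0.7, 0.5, 1.1])       # mock sensor readout

# One reverse-mode AD pass gives gradients for every parameter at once,
# so calibration and track parameters are updated jointly.
grads = jax.grad(loss)(params, sensors, observed)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```

Because calibration and track parameters share one gradient step, their correlations enter the update directly instead of being frozen out by a sequential calibrate-then-reconstruct pipeline.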
Experiments on a Super‑Kamiokande‑like cylindrical detector (R = 16.9 m, H = 36.2 m, ~10 k sensors) show that LUCiD achieves a 2–3× speedup over a GEANT4‑based workflow on comparable CPU/GPU hardware. Calibration losses converge in fewer iterations than the baseline methods, and the final parameter estimates improve by 10–15 % relative to grid‑search or Bayesian approaches. For reconstruction, the gradient‑driven approach yields a mean angular error of ~0.3° and an energy error of ~5 MeV, surpassing traditional histogram‑fitting techniques.
Key technical contributions include: (1) a geometry‑agnostic, parametric description of common detector shapes (cylinder, sphere, box) with uniform sensor placement; (2) differentiable models for Cherenkov and scintillation light emission, combining analytic formulas with learned surrogate models; (3) ray‑based propagation with implicit capture and photon relaxation to maintain smooth gradients at geometric intersections; (4) a sensor model that incorporates quantum efficiency, timing response, and electronic noise in a differentiable fashion; and (5) a modular codebase released as open‑source, facilitating extension to new materials, complex geometries (e.g., IceCube‑style arrays), or alternative optical processes.
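Contribution (3), the relaxation that keeps gradients smooth at geometric intersections, can be sketched as follows. This is an assumption about the general technique, not LUCiD's exact formulation: a hard hit/miss indicator has zero gradient almost everywhere, so it is replaced here by a sigmoid of the ray's miss distance, with an illustrative sharpness parameter:

```python
# Sketch of relaxed geometric capture (the sigmoid form and the `sharpness`
# parameter are illustrative assumptions, not the paper's exact model).
import jax
import jax.numpy as jnp

def hard_capture(miss_distance, sensor_radius):
    # Binary hit/miss test: gradient w.r.t. miss_distance is zero everywhere,
    # so no learning signal flows back through the geometry.
    return jnp.where(miss_distance < sensor_radius, 1.0, 0.0)

def relaxed_capture(miss_distance, sensor_radius, sharpness=10.0):
    # Smoothly interpolates between 1 (hit) and 0 (miss) near the sensor
    # edge, giving a nonzero gradient that can pull rays toward a sensor.
    return jax.nn.sigmoid(sharpness * (sensor_radius - miss_distance))

g_hard = jax.grad(hard_capture)(0.24, 0.25)     # 0.0: no signal
g_soft = jax.grad(relaxed_capture)(0.24, 0.25)  # nonzero near the edge
```

As `sharpness` grows, the relaxed indicator approaches the hard one, so the smoothing can be annealed away once the optimization has converged near a good configuration.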
The authors argue that LUCiD’s modularity and differentiability open new avenues for detector design optimization, allowing simultaneous tuning of hardware parameters and reconstruction algorithms to maximize physics sensitivity. By providing exact gradients throughout the full simulation chain, the framework eliminates the curse of dimensionality that plagues traditional calibration and inference, paving the way for more accurate, faster, and more robust analyses in current and future optical particle‑detector experiments.