Visibility in Polygonal Environments with Holes: Finding Best Spots for Hiding and Surveillance
Visibility plays an important role in decision making in cluttered, uncertain environments. This paper considers the problem of identifying optimal hiding spots for an agent against line-of-sight detection by an adversary whose location is unknown. We consider environments modeled as polygons with holes. We develop a set of mathematical tools for reasoning about visibility as a function of position and rely on non-smooth analysis to formally characterize the regularity properties of various visibility-based metrics. Because these metrics are non-smooth and non-convex, off-the-shelf optimization algorithms can only guarantee convergence to Clarke critical points. To address this, the proposed Normalized Descent algorithm leverages the structure of non-smooth points in visibility problems and introduces randomness to escape saddle points. Our technical analysis accommodates non-monotonic decreases in the visibility metric and strengthens the algorithm's guarantees, ensuring convergence to local minima with high probability. Simulations on two hide-and-seek scenarios showcase the effectiveness of the proposed approach.
💡 Research Summary
This paper tackles the problem of finding optimal hiding or surveillance locations for an autonomous agent operating in a polygonal environment that may contain multiple holes (obstacles), when the adversary’s position is unknown. The authors first formalize visibility as a set‑valued map S that assigns to each point x in the free space F the visibility polygon S(x) consisting of all points that are line‑of‑sight reachable from x. Because the visibility polygon changes abruptly when the observer passes certain critical points—called “anchors,” which are vertices of obstacles that cause a discontinuous change in the visible region—the visibility‑based cost functions are inherently non‑smooth and non‑convex.
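The basic primitive behind the visibility polygon S(x) is a line-of-sight test: a point y belongs to S(x) exactly when the segment from x to y crosses no obstacle edge. A minimal sketch of that test (illustrative only; the paper builds full visibility polygons, not point queries, and the function names here are our own):

```python
# Obstacles (holes) are given as lists of edges, each edge a pair of
# 2D points. Two points see each other iff the open segment between
# them properly crosses no obstacle edge.

def _orient(a, b, c):
    """Twice the signed area of triangle abc (>0 if counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p, q, r, s):
    """True if open segments pq and rs properly intersect."""
    d1, d2 = _orient(r, s, p), _orient(r, s, q)
    d3, d4 = _orient(p, q, r), _orient(p, q, s)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible(x, y, obstacle_edges):
    """Line-of-sight test between x and y against all obstacle edges."""
    return not any(_segments_cross(x, y, a, b) for a, b in obstacle_edges)
```

Sweeping this test over all boundary vertices is what produces the visibility polygon, and the "anchor" vertices are precisely the obstacle corners where the outcome of such tests flips discontinuously as x moves.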
To analyze such functions, the paper introduces two novel notions: µ‑local Lipschitzness and µ‑directional derivatives. µ‑local Lipschitzness measures how the symmetric difference of two visibility sets varies with respect to a reference measure µ (typically the Lebesgue measure), while the µ‑directional derivative quantifies the rate of change of that symmetric difference along a direction. These concepts allow the authors to rigorously define generalized gradients (Clarke sub‑differentials) for visibility‑based metrics such as total visible area, the number of points visible within a limited field‑of‑view, or the expected detection probability assuming a uniformly distributed adversary.
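A rough numerical intuition for the µ-directional derivative (our sketch, not the paper's construction): sample the free space, mark which samples are visible from x and from x + tv, measure the symmetric difference of the two masks under µ, and divide by t.

```python
import numpy as np

def sym_diff_measure(mask_a, mask_b, cell_area):
    """Approximate mu(S(x) symmetric-difference S(y)) from two boolean
    visibility masks sampled on a common grid of equal-area cells."""
    return np.count_nonzero(mask_a ^ mask_b) * cell_area

def mu_directional_derivative(vis_mask_at, x, v, t=1e-2, cell_area=1.0):
    """One-sided difference quotient of the visible-region change along
    direction v; vis_mask_at(x) returns the visibility mask at x."""
    a = vis_mask_at(x)
    b = vis_mask_at(x + t * np.asarray(v))
    return sym_diff_measure(a, b, cell_area) / t
```

Taking t to zero (in the paper, analytically rather than numerically) recovers the µ-directional derivative that underpins the Clarke subdifferential of the visibility metrics.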
The core algorithmic contribution is the Normalized Descent (NorCenT) method. The original constrained problem (stay inside F) is turned into an unconstrained one by adding a penalty term based on the distance to F. At each iteration, a generalized gradient g ∈ ∂J̃(x) is computed, normalized to a unit direction d = g/‖g‖, and a unit‑step update x←proj_F(x − d) is performed. Because the gradient may vanish near non‑smooth points, the algorithm injects random directions with a small probability p_rand, yielding a stochastic step d←(1−p_rand) d + p_rand u, where u is a uniformly random unit vector. This randomness enables the method to escape saddle points and non‑isolated Clarke stationary points that would trap deterministic schemes.
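The update described above can be sketched in a few lines. This is a simplified illustration: the cost's generalized gradient and the projection onto F are supplied as placeholder callables, whereas the paper works with a penalty reformulation and a specific subgradient selection.

```python
import numpy as np

def norcent_step(x, grad, proj_F, p_rand=0.1, rng=None):
    """One normalized-descent update: d = g/||g||, blended with a random
    unit direction with weight p_rand, then a unit step projected to F."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad(x)                              # some g in the subdifferential
    n = np.linalg.norm(g)
    d = g / n if n > 1e-12 else np.zeros_like(g)
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                   # uniformly random unit vector
    d = (1.0 - p_rand) * d + p_rand * u      # stochastic blend from the paper
    return proj_F(x - d)
```

With p_rand = 0 this reduces to deterministic normalized descent; any p_rand > 0 gives the perturbation that lets the iterates escape saddle points and non-isolated Clarke stationary points.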
Theoretical analysis shows that, under the µ‑regularity assumptions, the generalized gradient is bounded and the normalized flow is Lipschitz continuous. Moreover, with any p_rand > 0, the stochastic process converges almost surely to a local minimum of the visibility metric, rather than merely to a Clarke critical point. The proof leverages martingale convergence arguments and does not require the strong third‑order differentiability conditions that appear in earlier works on normalized gradient flows.
Empirical validation is performed on two hide‑and‑seek scenarios. In the first, a human explorer tries to hide from a drone equipped with a 360° camera in a mountainous terrain modeled as a polygon with holes. The NorCenT algorithm rapidly reduces the visible area by about 70 % within a few iterations, outperforming sampling‑based guard placement methods both in solution quality and computational effort. In the second scenario, a robot searches for other robots inside a multi‑room indoor environment with furniture obstacles, using a limited 90° field‑of‑view and a range of 10 m. The algorithm minimizes the expected detection probability, achieving a drop from 0.15 to 0.03 in under ten seconds, again beating baseline gradient descent and graph‑based approaches.
In summary, the paper provides a mathematically rigorous framework for handling non‑smooth visibility metrics, introduces µ‑based regularity tools, and proposes a practical stochastic normalized descent algorithm that provably converges to meaningful local minima. The work bridges computational geometry, non‑smooth analysis, and optimization, offering a solid foundation for future extensions to three‑dimensional spaces, dynamic obstacles, and hybrid learning‑based strategies.