An Anatomy-Aware Shared Control Approach for Assisted Teleoperation of Lung Ultrasound Examinations
Because fully autonomous systems still face challenges due to patients' anatomical variability, teleoperated systems remain more practical in current healthcare settings. This paper presents an anatomy-aware control framework for teleoperated lung ultrasound. Leveraging biomechanically accurate 3D modelling, the system applies virtual constraints on the ultrasound probe pose and provides real-time visual feedback to assist in precise probe placement tasks. A twofold evaluation, one with 5 naive operators on a single volunteer and one with a single experienced operator on 6 volunteers, compared our method with a standard teleoperation baseline. The first characterised the accuracy of the anatomical model and the improvement in performance perceived by the naive operators, while the second focused on how effectively the system improves probe placement and reduces procedure time compared with traditional teleoperation. The results demonstrate that the proposed framework enhances the physician's capabilities in executing remote lung ultrasound, reducing execution time by more than 20% on 4-point acquisitions, towards faster, more objective and repeatable exams.
💡 Research Summary
The paper introduces an anatomy‑aware shared‑control framework designed to assist teleoperated lung ultrasound (LUS) examinations. Recognizing that fully autonomous ultrasound robots struggle with patient‑specific anatomical variability, the authors focus on a teleoperation paradigm that retains the clinician’s decision‑making while providing real‑time guidance. Central to the approach is a patient‑specific three‑dimensional anatomical model generated from a pair of calibrated RGB‑D cameras. Point clouds captured in a T‑pose are filtered using YOLOv8‑seg, merged via robust ICP, and then fitted to the SKEL model—a biomechanically informed parametric body model that extends SMPL by jointly representing skin and skeleton meshes. Optimization of the Chamfer distance using AdamW yields individualized pose (q), shape (β), and global translation (t) parameters, producing a high‑fidelity volumetric and skeletal representation of the subject.
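The Chamfer-distance fitting step above can be illustrated with a minimal sketch. This is not the paper's implementation: it optimizes only a global translation `t` by an ICP-style fixed-point iteration over nearest-neighbor correspondences, standing in for the joint AdamW optimization of SKEL pose `q`, shape `β`, and translation `t`; the function names and step size are illustrative.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def fit_translation(model_pts, scan_pts, iters=50, lr=0.5):
    """Toy stand-in for the paper's model fitting: refine a global
    translation t by repeatedly matching each scan point to its nearest
    (translated) model point and stepping toward the mean residual."""
    t = np.zeros(3)
    for _ in range(iters):
        moved = model_pts + t
        d = np.linalg.norm(scan_pts[:, None, :] - moved[None, :, :], axis=-1)
        nn = moved[d.argmin(axis=1)]               # nearest model point per scan point
        t = t + lr * (scan_pts - nn).mean(axis=0)  # descend on the one-sided Chamfer term
    return t
```

In the actual pipeline the same objective is minimized with AdamW over all SKEL parameters at once; the sketch isolates the correspondence-and-residual structure that makes the Chamfer loss differentiable in practice.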
From the skeletal mesh, the first five ribs on each side are manually annotated once; their superior and inferior boundaries are projected onto the skin surface along the spine axis, generating central lines for each rib. These lines are interpolated with cubic splines, and elliptical‑cross‑section tubes are constructed around them to define “forbidden regions” that the ultrasound probe must avoid. The tubes are encoded as unilateral mesh‑based constraints in a quadratic programming (QP) formulation that restricts the probe’s translational motion (Δx) to the positive side of hyperplanes representing rib surfaces. Simultaneously, orientation assistance is provided through conic constraints derived from local skin normals and rib directions, limiting angular velocity so that the probe’s z‑axis remains nearly orthogonal to the pleural line while allowing fine alignment by the operator.
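The unilateral hyperplane constraints on the probe's translation can be sketched as follows. This is a simplified stand-in for the paper's QP: instead of a full quadratic program, it cyclically projects a desired displacement Δx onto the feasible side of each plane n·(x + Δx) ≥ d (unit normals assumed; the function name and iteration count are illustrative).

```python
import numpy as np

def constrain_step(x, dx_des, planes, iters=20):
    """Keep the probe tip on the allowed side of each rib 'forbidden
    region' hyperplane by cyclic projection of the desired displacement
    (a simplified surrogate for the paper's QP over dx)."""
    dx = dx_des.astype(float).copy()
    for _ in range(iters):
        for n, d in planes:
            viol = d - n @ (x + dx)   # > 0 when the constraint is violated
            if viol > 0:
                dx = dx + viol * n    # push the step back onto the plane
    return dx
```

With a single active constraint this reproduces the closed-form projection of the QP solution; with several nearly parallel planes the cyclic scheme only approximates it, which is why the paper uses a proper QP solver.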
Control is realized in two sequential QP stages. The first solves for a position increment that respects the forbidden‑region constraints; the second incorporates the orientation limits and allowable angular velocities. The resulting command is fed to a Cartesian impedance controller that modulates both robot motion and interaction forces. Measured contact forces at the probe flange close the loop with a 6‑DoF haptic interface, delivering tactile feedback to the operator. When a constraint is violated, vibrotactile cues and graphical overlays appear on the GUI, enhancing situational awareness without obscuring genuine tissue reaction forces.
The framework was evaluated in two experimental conditions. In the first, five naïve participants performed a standard LUS protocol on a single volunteer, allowing assessment of model accuracy (average deviation < 3 mm from anthropometric measurements) and subjective usability (reduced perceived workload, higher satisfaction). In the second, an experienced sonographer examined six different volunteers, comparing the anatomy‑aware system to a baseline teleoperation without assistance. Results showed that the shared‑control system reduced total protocol execution time by more than 20 % for the four‑point acquisition sequence, and eliminated probe‑on‑rib collisions, achieving near‑zero positioning errors in intercostal windows. The naïve users also demonstrated a 22 % shorter exploration path when virtual fixtures were active.
These findings indicate that embedding a personalized anatomical model into the teleoperation loop can simultaneously guide the operator during free‑space navigation, enforce correct probe orientation, and constrain approach trajectories to safe intercostal zones. The approach improves repeatability, reduces operator dependence on extensive training, and maintains safety despite the absence of real‑time model updates; the compliant impedance controller accommodates minor respiratory and postural motions. Future work is suggested to incorporate online model refinement from continuous sensor streams, fuse additional modalities such as ultrasound‑derived surface reconstructions, and validate the system across a broader range of clinical scenarios, including critically ill patients and emergency settings. Overall, the study demonstrates that anatomy‑aware shared control is a viable pathway toward more efficient, objective, and scalable remote ultrasound examinations.