Crane Lowering Guidance Using an Attachable Camera Module for Driver Vision Support
Cranes have long been essential equipment for lifting and placing heavy loads in construction projects. This study focuses on the lowering phase of crane operation, the stage in which the load is moved to the desired location. During this phase, a constant challenge exists: the load obstructs the operator’s view of the landing point. As a result, operators traditionally have to rely on verbal or gestural instructions from ground personnel, which compromises site safety. To alleviate this constraint, the proposed system incorporates an attachable camera module that mounts directly on the load via a suction cup. The module houses a single-board computer, a battery, and a compact camera. Once installed, it streams and processes images of the ground directly below the load in real time to generate placement guidance, which is simultaneously transmitted to and monitored on a host computer. Preliminary experiments, conducted by attaching the module to a test object, confirmed the feasibility of real-time image acquisition and transmission. This approach has the potential to significantly improve safety on construction sites by providing crane operators with an immediate visual reference for hidden landing zones.
💡 Research Summary
The paper addresses a critical safety issue in crane operations: during the lowering phase, the load blocks the operator’s view of the landing area, forcing reliance on ground personnel’s verbal or gestural cues. To mitigate this blind‑spot problem, the authors propose an attachable camera module that can be mounted directly on the load using a suction cup. The hardware consists of a compact Arducam B0191 camera, a Raspberry Pi 4B single‑board computer for on‑board processing, a 532 nm green laser pointer, a rechargeable battery, and a lightweight 3D‑printed PLA frame measuring 50 mm × 150 mm × 127 mm. The module is designed for rectangular loads such as pallets, precast concrete panels, or steel boxes; the suction cup enables quick, tool‑free attachment, while a magnetic alternative is suggested for metallic objects.
The camera is oriented to look straight down from the side of the load, capturing the ground directly beneath. The laser pointer projects a bright green spot onto the surface, providing a reference point that is easy to detect in the image. Video and laser data are processed on the Raspberry Pi using OpenCV. The processing pipeline converts each RGB frame to grayscale, applies Gaussian blur, performs Canny edge detection, and then uses a Hough transform to extract line segments. Candidate lines are classified as horizontal or diagonal based on angle thresholds (|θ_horiz| ≤ 10°, 20° < |θ_diag| < 70°). Each candidate line receives a score based on its length, angle, and position; the highest‑scoring horizontal and diagonal lines are selected and extended to the image borders. Simultaneously, the laser spot is detected by converting the frame to HSV color space, creating dual green masks, applying morphological filtering, and extracting the largest contour’s centroid. The intersection of the extended diagonal line with a line drawn through the laser spot yields the predicted ground‑contact point of the load’s corner. This point is overlaid as a guidance line on the live video stream.
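The summary does not include the authors’ code, but the pipeline described above maps closely onto standard OpenCV calls. The following is a minimal per‑frame sketch in Python, assuming illustrative Hough parameters, HSV ranges, and scoring weights (none of these values are given in the summary); it also assumes OpenCV 4 and reads “a line drawn through the laser spot” as a horizontal line at the spot’s image row.

```python
import cv2
import numpy as np

def detect_lines(frame):
    """Return the best horizontal and diagonal line segments (or None)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=10)
    best = {"horizontal": (None, -1.0), "diagonal": (None, -1.0)}
    if segments is None:
        return None, None
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90:                      # fold into [0, 90] regardless of direction
            angle = 180 - angle
        length = np.hypot(x2 - x1, y2 - y1)
        if angle <= 10:                     # |theta| <= 10 deg -> horizontal candidate
            kind = "horizontal"
        elif 20 < angle < 70:               # 20 deg < |theta| < 70 deg -> diagonal candidate
            kind = "diagonal"
        else:
            continue
        # Illustrative score favouring long segments low in the image (near the
        # ground); the paper's actual length/angle/position weighting is not given.
        score = length + 0.5 * max(y1, y2)
        if score > best[kind][1]:
            best[kind] = ((x1, y1, x2, y2), score)
    return best["horizontal"][0], best["diagonal"][0]

def detect_laser_spot(frame):
    """Centroid of the largest green blob (assumed HSV ranges for a 532 nm spot)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Dual masks: a saturated green halo plus the washed-out (near-white) spot core.
    mask = cv2.inRange(hsv, (45, 80, 120), (85, 255, 255)) | \
           cv2.inRange(hsv, (45, 0, 230), (85, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def predicted_contact_point(diagonal, laser):
    """Intersect the extended diagonal with a horizontal line through the laser spot."""
    x1, y1, x2, y2 = diagonal
    _, ly = laser
    if y2 == y1:
        return None
    t = (ly - y1) / (y2 - y1)
    return int(x1 + t * (x2 - x1)), int(ly)

# Per-frame usage:
#   horiz, diag = detect_lines(frame)
#   spot = detect_laser_spot(frame)
#   if diag is not None and spot is not None:
#       point = predicted_contact_point(diag, spot)
#       if point is not None:
#           cv2.drawMarker(frame, point, (0, 0, 255), cv2.MARKER_CROSS, 20, 2)
```

Keeping only the single best horizontal and diagonal candidate per frame keeps the workload light enough for on‑board processing on the Raspberry Pi.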
Communication between the module and a host PC is achieved via Wi‑Fi; up to three modules can stream simultaneously, allowing multi‑view guidance from different corners of the load. The authors conducted indoor experiments by mounting three modules on a static frame placed at 1 m intervals from a wall, emulating different distances. The system maintained stable video transmission up to 5 m and consistently displayed the guidance line, confirming real‑time operation. However, the test did not involve an actual suspended load, so dynamic factors such as sway, vibration, and varying lighting were not evaluated. The authors acknowledge potential errors arising from the suction cup’s limited attachment precision and the fixed offset between the laser pointer and the load surface.
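The summary describes the transport only as Wi‑Fi streaming to a host PC. The sketch below shows one plausible module‑side arrangement, with each module pushing length‑prefixed JPEG frames over its own TCP connection; the host address, port, JPEG quality, and the one‑connection‑per‑module scheme are all assumptions rather than details from the paper.

```python
import socket
import struct
import cv2

HOST_PC = ("192.168.0.10", 5001)   # assumed host address; one port per module

def stream_annotated_frames():
    cap = cv2.VideoCapture(0)                      # camera exposed as /dev/video0
    sock = socket.create_connection(HOST_PC)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # ... run the guidance pipeline and draw the overlay on `frame` here ...
            ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if not ok:
                continue
            data = jpeg.tobytes()
            sock.sendall(struct.pack(">I", len(data)) + data)   # 4-byte length prefix
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    stream_annotated_frames()
```

A matching receiver on the host would read the four‑byte length prefix, decode each payload with cv2.imdecode, and display one window per connected module.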
In conclusion, the study demonstrates the feasibility of a load‑mounted vision system that provides crane operators with a direct visual reference of the hidden landing zone, potentially reducing reliance on ground crew cues and improving safety. Future work will focus on field trials with real crane lifts, more robust attachment mechanisms, algorithm extensions for non‑rectangular or cylindrical loads, and possibly integrating 3‑D reconstruction for higher accuracy. If successfully deployed, this technology could significantly lower the incidence of accidents during crane lowering operations, especially in environments where many workers operate directly beneath large loads.