Link To Latest Report : Coming Soon.
Background :
Monitoring displacement and vibration in large civil structures, such as bridges, is essential for safety and longevity, but traditional methods like GPS, sensors, and vision-based systems face significant limitations. These methods struggle with accuracy, scalability, and adaptability, especially for large-span or tall structures. Challenges include camera positioning, atmospheric interference, lighting variations, and distinguishing structural movements from sensor or camera vibrations. While drone technology combined with AI offers a promising solution, it faces challenges such as maintaining drone stability, differentiating between drone and structural movements, and ensuring accurate data capture over large distances or varying conditions. Dual-camera setups (telephoto and wide-angle) improve accuracy but are limited when bridge lengths exceed the camera’s field of view or when drone distance varies.
Previous research has complemented the drone with stationary modules or placed strategic reference markers or landmarks on the bridge girders to measure structural displacements. However, these methods have notable drawbacks. The stationary module, while useful, limits the drone’s mobility and has primarily been tested in controlled experimental settings rather than on actual bridges under real-world conditions. Reference markers, though effective, require physical access to the bridge structure for installation and maintenance, which can be particularly challenging for bridges in operation or in hard-to-reach areas. Moreover, while current methods focus on measuring displacement and vibration, they often overlook the critical step of evaluating the collected data to determine whether the bridge is in good or poor condition.
Objectives :
The proposed AI system for measuring structural displacements is a vision-based, non-contact method. Utilizing dual cameras, it achieves a larger field of view by stitching the captured images into a single, comprehensive frame. Unlike traditional image processing techniques, such as digital image correlation (DIC), our displacement tracking method is deep learning-based.
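As a minimal illustration of the stitching idea, the sketch below blends two overlapping grayscale frames into one wide frame, assuming the overlap width is known in advance; the full system would instead estimate the inter-camera geometry (e.g., a homography from matched features) rather than take it as given.

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Blend two horizontally overlapping grayscale frames into one
    wide frame.  Assumes the last `overlap` columns of `left` and the
    first `overlap` columns of `right` view the same scene."""
    h, w_left = left.shape
    w_right = right.shape[1]
    out = np.empty((h, w_left + w_right - overlap), dtype=float)
    out[:, :w_left - overlap] = left[:, :w_left - overlap]
    # Linear cross-fade across the overlap band to hide the seam.
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, w_left - overlap:w_left] = (
        alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    )
    out[:, w_left:] = right[:, overlap:]
    return out
```

A production pipeline would also warp the frames into a common projection before blending, since the telephoto and wide-angle views differ in scale and perspective.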
Scope :
Task 1 – Develop AI Algorithms for Object Detection and Analysis.
- To ensure the robustness and generalizability of our system, we will collect extensive data from multiple bridges under varying conditions, including different structural configurations, environmental settings, and operational states. We will further diversify our dataset by employing data augmentation techniques such as rotation, scaling, flipping, brightness adjustment, and noise injection.
- We will design and train deep learning models tailored to accurately detect and analyze deck span features from drone-captured data in real time, rather than relying on physical markers. Using the trained models, we will develop algorithms that detect structural movements and vibrations with high precision.
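The augmentation step described above can be sketched as follows; the transform set and parameter ranges here are illustrative placeholders, not the tuned values the project would use.

```python
import numpy as np

def augment(image, rng):
    """Return one randomly augmented copy of a grayscale image with
    pixel values in [0, 1].  Applies a random horizontal flip,
    brightness scaling, and Gaussian noise injection."""
    out = image.astype(float)
    if rng.random() < 0.5:
        out = out[:, ::-1]                        # horizontal flip
    out = out * rng.uniform(0.8, 1.2)             # brightness adjustment
    out = out + rng.normal(0.0, 0.02, out.shape)  # noise injection
    return np.clip(out, 0.0, 1.0)
```

Rotation and scaling would typically be added on top of this via an image library such as OpenCV or torchvision.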
Task 2 – Integrate Dual-Camera System.
To enhance measurement accuracy, we will implement a dual-camera setup on the drone, combining a telephoto lens for detailed, high-resolution imaging with a wide-angle lens for broader structural coverage. Data from both cameras will be synchronized and processed to enable precise displacement and vibration analysis. Additionally, we will perform camera calibration to ensure accurate spatial alignment between the two cameras and to minimize lens distortion, thereby reducing measurement errors and improving the overall reliability of the system. Furthermore, we will perform lidar-camera calibration, which converts the data from a lidar sensor and a camera into the same coordinate system, enabling data fusion from both sensors and accurate identification of deck span features on bridges.
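The coordinate-system conversion at the heart of lidar-camera calibration can be sketched as a rigid transform followed by a pinhole projection. The extrinsics and intrinsics below are illustrative stand-ins for calibrated values, and a real lens would also need a distortion model.

```python
import numpy as np

def lidar_to_pixels(points, R, t, K):
    """Map lidar points (N, 3) into image pixel coordinates.

    R (3x3) and t (3,) are the extrinsics taking lidar coordinates
    into the camera frame; K is the 3x3 pinhole intrinsic matrix.
    Lens distortion is not modeled in this sketch."""
    cam = points @ R.T + t           # lidar frame -> camera frame
    hom = cam @ K.T                  # apply camera intrinsics
    uv = hom[:, :2] / hom[:, 2:3]    # perspective divide
    return uv
```

Once both sensors report in the camera frame, lidar range measurements can be attached to the pixels where deck span features are detected.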
Task 3 – Enhance Drone Stability and Control.
- To ensure the drone operates smoothly during flight and delivers high-quality sensor data for deep learning tasks, it is crucial to design stabilization techniques that minimize errors caused by its movements. We will develop a low-level control system specifically tailored to this type of drone. A custom control board will be designed to integrate sensors such as gyroscopes and accelerometers, allowing the drone to continuously monitor its orientation and make real-time adjustments. The control board will send precise signals to the propellers, ensuring stable flight and accurate data collection even in dynamic environments.
- As part of this process, we will develop advanced algorithms to counteract external disturbances, such as wind or sudden shifts in weight distribution, and solve optimization problems to generate efficient flight paths for the drone. The low-level controller will then maintain stability during inspection tasks, significantly reducing errors. To further enhance the drone’s performance, a simulation platform will be created to test its stability and fine-tune the controller. This platform will replicate all essential flight modes, including flying, hovering, climbing, and ground movement, providing a comprehensive environment for testing and optimization. Once these tasks are completed, real-time stabilization will be implemented to maintain optimal positioning and distance from the structure, ensuring high-quality data collection and minimizing collision risks. Together, these steps will form a robust system capable of performing complex inspection tasks with precision and reliability.
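A minimal single-axis version of the real-time attitude correction described above can be sketched as a discrete PID loop. The gains and the simplified plant in the usage example are illustrative placeholders, not the tuned controller or the actual vehicle dynamics.

```python
class PID:
    """Discrete PID controller for one attitude axis (e.g., roll)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: tracking error -> actuator correction."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: drive a simplified plant toward a 1.0 rad setpoint.
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(1000):
    angle += pid.update(1.0, angle) * 0.01
```

In the real system one such loop (or a cascaded variant) would run per axis at high rate on the custom control board, with gyroscope and accelerometer readings fused into the measured attitude.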
Task 4 – Conduct Real-World Testing and Validation.
We will perform field tests on bridges to validate the system’s accuracy and reliability. During these tests, displacement and vibration data will be captured at every bridge deck span to ensure comprehensive coverage. The results will be compared to those obtained from traditional methods, such as contact-based sensors or marker-based systems, to demonstrate the improvements in precision, efficiency, and adaptability offered by our vision-based approach.
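One way the field comparison could be quantified is with simple agreement metrics between the vision-based and reference displacement histories; the function below is a sketch assuming both series are already sampled on a common time base.

```python
import numpy as np

def agreement_metrics(vision, reference):
    """RMSE and peak absolute error between two displacement
    time histories sampled on the same time base."""
    resid = np.asarray(vision, float) - np.asarray(reference, float)
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    peak = float(np.max(np.abs(resid)))
    return rmse, peak
```

In practice the series from contact sensors and the drone would first need resampling and time alignment before such metrics are meaningful.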
Task 5 – Prepare Deliverables and Disseminate Findings.
We will finalize the drone-AI system by integrating the hardware, software, and user documentation into a complete, user-friendly solution. To ensure the system meets real-world needs, we will collect feedback from transportation agencies (e.g., Nevada DOT) and industry partners during pilot deployments, using their input to refine and improve the system. Once optimized, we will work to integrate the system into their existing inspection workflows.
Research Team :
Principal Investigator : Jim La, Ph.D.
Co-Principal Investigator : Mohamed Moustafa, Ph.D.