Link To Latest Report : Coming Soon.
Background :
Bridge asset management is crucial for maintaining the safety and functionality of numerous bridges, yet traditional maintenance methods often rely on reactive strategies that lead to suboptimal outcomes. To address this issue, our study proposes exploring the application of reinforcement learning (RL) to optimize bridge management strategies, particularly focusing on strategic decision-making under conditions of imperfect information. RL, a subfield of artificial intelligence (AI), trains agents (or stakeholders such as bridge owners) to make sequences of decisions by rewarding them for beneficial choices. Reinforcement learning techniques have been applied to various aspects of bridge engineering, including structural health monitoring, bridge inspection, inspection scheduling, condition assessment, real-time damage detection, and sustainability-informed life-cycle maintenance and operation.
Objectives :
This project aims to explore the application of reinforcement learning (RL) to optimize bridge maintenance strategies. Specifically, the study expects to show that the proposed Generating Interactive RL (GI-RL) can significantly enhance bridge asset management by making maintenance strategies more proactive, cost-effective, and optimized for immediate and/or long-term outcomes.
Scope :
Task 1 – Research Advances in Reinforcement Learning Methods.
The research team will thoroughly investigate the latest advances in RL methods.
Task 2 – Define the RL Environment.
To establish a clear and effective RL environment tailored to the application, the research team will first define a state space and the state variables that represent the environment’s status at any given time (refer to Figure 2). Next, the possible actions an RL agent can take within the environment and a reward function that accurately reflects the goals and constraints of the task will be defined. Finally, the team will create a simulation environment that models real-world scenarios for training the RL agent.
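The elements above can be sketched as a minimal, self-contained environment class. This is an illustrative toy only, not the project's actual environment: the condition-rating state (loosely modeled on a 0-9 bridge condition scale), the three actions, the action costs, and the deterioration probability are all assumed values for demonstration.

```python
import random

class BridgeMaintenanceEnv:
    """Toy bridge-deterioration environment (illustrative sketch only).

    State: an integer condition rating from 9 (new) down to 0 (failed).
    Actions: 0 = do nothing, 1 = repair (+2 rating), 2 = replace (reset to 9).
    Reward: negative action cost, plus a penalty while the bridge is in
    poor condition (rating <= 4). All numbers are assumed, not calibrated.
    """

    ACTION_COSTS = {0: 0.0, 1: 5.0, 2: 20.0}  # hypothetical cost units

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 9

    def reset(self):
        """Start a new episode with a bridge in new condition."""
        self.state = 9
        return self.state

    def step(self, action):
        """Apply a maintenance action, then let the bridge deteriorate."""
        if action == 1:
            self.state = min(9, self.state + 2)   # repair improves rating
        elif action == 2:
            self.state = 9                        # replacement resets rating
        # stochastic deterioration: rating drops by 1 with probability 0.3
        if self.rng.random() < 0.3:
            self.state = max(0, self.state - 1)
        penalty = 10.0 if self.state <= 4 else 0.0
        reward = -self.ACTION_COSTS[action] - penalty
        done = self.state == 0                    # episode ends at failure
        return self.state, reward, done
```

A real implementation would replace the transition rule with a calibrated deterioration model (e.g., fitted from inspection records) and the costs with agency estimates.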
Task 3 – Train and Test the RL Model.
The research team will train a Generating Interactive RL (GI-RL) model using the defined environment and evaluate its performance. The following subtasks will be completed:
- Model Selection : Choose the most suitable RL algorithm(s) based on the findings from Task 1.
- Training Setup : Configure the training parameters, including learning rate, discount factor, and exploration-exploitation strategy.
- Train an RL Model : Train the RL model using the simulation environment, monitoring its progress and adjusting parameters as needed.
- Test the RL Model : Test the trained model in various scenarios to evaluate its performance, robustness, and generalizability.
- Performance Metrics : Define and compute performance metrics: cumulative reward, convergence rate, and stability.
- Iteration and Optimization : Iterate on the training process to optimize the GI-RL model’s performance.
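The training parameters named above (learning rate, discount factor, and an exploration-exploitation strategy) can be illustrated with a tabular Q-learning loop on a tiny bridge-condition chain. This is a hedged sketch, not the GI-RL algorithm itself: the transition rule, action costs, and parameter values are all assumed for demonstration, and the per-episode cumulative reward is recorded as one of the performance metrics listed above.

```python
import random

def q_learning_demo(episodes=500, alpha=0.1, gamma=0.95,
                    eps_start=1.0, eps_end=0.05, seed=0):
    """Tabular Q-learning on a toy bridge-condition chain (illustrative).

    alpha  : learning rate
    gamma  : discount factor
    eps_*  : linearly decayed epsilon-greedy exploration schedule
    Returns the learned Q-table and the cumulative reward per episode.
    """
    rng = random.Random(seed)
    n_states, n_actions = 10, 3        # condition ratings 0..9; 3 actions
    Q = [[0.0] * n_actions for _ in range(n_states)]
    costs = [0.0, 5.0, 20.0]           # assumed action costs
    returns = []                       # cumulative reward per episode
    for ep in range(episodes):
        eps = max(eps_end, eps_start * (1 - ep / episodes))  # linear decay
        s, total = 9, 0.0
        for _ in range(50):            # finite horizon per episode
            if rng.random() < eps:     # explore: random action
                a = rng.randrange(n_actions)
            else:                      # exploit: greedy action
                a = max(range(n_actions), key=lambda x: Q[s][x])
            # transition: repair adds 2, replace resets to 9, then decay
            s2 = min(9, s + 2) if a == 1 else (9 if a == 2 else s)
            if rng.random() < 0.3:
                s2 = max(0, s2 - 1)
            r = -costs[a] - (10.0 if s2 <= 4 else 0.0)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, total = s2, total + r
            if s == 0:                 # bridge failed; end episode
                break
        returns.append(total)
    return Q, returns
```

Convergence rate and stability can then be assessed from the `returns` series, for example by checking how quickly a moving average of the cumulative reward levels off.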
Task 4 – Synthesize the RL Model into an Asset Management Framework.
To integrate the GI-RL model into an existing asset management framework for practical application, the research team will develop a comprehensive integration plan and implement the RL model within the framework. This includes creating user interfaces and, where needed, APIs to facilitate seamless interaction with the RL model. Comprehensive testing of the integrated system will be conducted to ensure smooth operation and compatibility with current processes. Training materials will also be developed so that end-users can effectively utilize the RL-enhanced asset management framework. If time permits, feedback from users and stakeholders will be gathered to refine the integrated system, ensuring it meets practical needs and improves overall asset management efficiency.
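One way such an API layer might expose a trained policy is a simple lookup function that maps a bridge's current condition rating to a recommended maintenance action. This is a hypothetical sketch: the action names, the list-of-lists Q-table format, and the greedy-lookup interface are assumptions, not the project's actual integration design.

```python
def recommend_action(q_table, condition_rating):
    """Greedy policy lookup for an asset-management front end (sketch).

    q_table          : list of rows, one per condition rating; each row
                       holds one learned Q-value per action.
    condition_rating : integer index into the table.
    Returns the human-readable name of the highest-valued action.
    """
    actions = ("do nothing", "repair", "replace")  # assumed action set
    row = q_table[condition_rating]
    best = max(range(len(row)), key=lambda a: row[a])
    return actions[best]
```

In a deployed system this function would sit behind the framework's existing interface (for example, a web form or batch report), so end-users interact with familiar tools rather than with the RL model directly.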
Research Team:
Principal Investigator : Mi Geum Chorzepa, Ph.D.
Co-Principal Investigator : Bjorn Birgisson, Ph.D.