Abstract
Developing control programs for autonomous vehicles is challenging, mainly due to complex and dynamic environments, intricate tasks, and uncertain sensor information. To tackle this challenge, this paper harnesses the potential of formal methods and deep reinforcement learning (DRL) in a solution that integrates Generalized Reactivity(1) (GR(1)) synthesis with DRL. The GR(1) synthesis module handles high-level task planning, ensuring the vehicle follows a correct-by-construction, verifiable plan for its mission. The DRL model, in turn, operates as the low-level motion controller, allowing the vehicle to learn from experience and adjust its actions based on real-time sensor feedback. The resulting controller is therefore not only *guaranteed* to finish its designated tasks but also *intelligent* enough to handle complex environments. Through comparative experimental studies, we demonstrate that the control program generated by the proposed approach outperforms those generated by GR(1) reactive synthesis or DRL alone.
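The abstract's hierarchical architecture can be illustrated with a minimal sketch: a GR(1)-synthesized strategy acts as a finite-state planner that issues high-level goals in response to environment observations, while a learned policy turns each goal into low-level actions. All names here (`GR1Strategy`, `drl_policy`, the waypoint mission) are illustrative stand-ins, not from the paper.

```python
class GR1Strategy:
    """Toy correct-by-construction plan: visit waypoints in order, waiting
    whenever an obstacle is sensed. A real GR(1) strategy would be a
    finite-state transducer extracted by a synthesis tool from a spec."""
    def __init__(self, waypoints):
        self.waypoints = list(waypoints)
        self.idx = 0

    def next_goal(self, obstacle_ahead):
        # Safety behavior guaranteed by the specification: hold position.
        if obstacle_ahead:
            return "wait"
        return self.waypoints[self.idx]

    def notify_reached(self):
        # Advance the mission once the current waypoint is reached.
        if self.idx < len(self.waypoints) - 1:
            self.idx += 1


def drl_policy(goal, position):
    """Stand-in for a trained DRL motion controller on a 1-D track:
    move one step toward the commanded goal."""
    if goal == "wait":
        return 0
    return 1 if goal > position else -1 if goal < position else 0


# One control loop: the planner picks a goal, the learned policy acts on it.
strategy = GR1Strategy(waypoints=[3, 5])
position = 0
for _ in range(10):
    goal = strategy.next_goal(obstacle_ahead=False)
    position += drl_policy(goal, position)
    if position == goal:
        strategy.notify_reached()
```

The separation of concerns mirrors the paper's claim: the planner's mission logic is verifiable independently of the learned controller, while the controller absorbs the sensing and dynamics complexity.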
Original language | English (US) |
---|---|
Pages (from-to) | 1-12 |
Number of pages | 12 |
Journal | IEEE Transactions on Intelligent Vehicles |
DOIs | |
State | Accepted/In press - 2023 |
All Science Journal Classification (ASJC) codes
- Automotive Engineering
- Control and Optimization
- Artificial Intelligence