Explainable deep reinforcement learning: a study on intelligent traffic signal control
Description
With the rapid increase in urbanization, congestion has become an ever more pressing problem for society, the environment, and the economy. One practical approach to alleviating this problem is adaptive traffic signal control (ATSC). Deep reinforcement learning algorithms have shown great potential for such control; however, these methods can be viewed as black boxes, since their learned policies are not easily understood or explained. This lack of explainability may be limiting their use in real-world conditions. One framework that can provide explanations for any deep learning model is SHAP. It treats the model as a black box and explains it using post-hoc techniques, deriving explanations from the model's responses to different inputs without inspecting its internals (such as parameters and architecture). The state of the art in applying SHAP to a deep reinforcement learning agent for traffic signal control can demonstrate consistency in the agent's decision-making logic and show how the agent reacts to the traffic in each lane. However, it could not intuitively relate some sensors to the chosen action, and it required several figures to convey the impact of the state on the action. This paper presents two approaches based on the Deep Q-Network algorithm whose learned policies are explained through the SHAP framework: the first uses the XGBoost algorithm as the function approximator, and the second uses a neural network. Each approach went through a process of hyperparameter study and optimization. The environment was characterized as an MDP and modeled in two different ways, namely a Cyclic MDP and a Selector MDP; these models allowed us to choose different actions and to obtain different representations of the environment. Both approaches present the impact of each feature on each action through the SHAP framework, promoting an understanding of how the agent behaves under different traffic conditions. This work also describes the application of Explainable AI to intelligent traffic signal control, demonstrating how to interpret the model and the limitations of the approach. Furthermore, as a final result, our methods improved travel time, speed, and throughput in two different scenarios, outperforming the FixedTime, SOTL, and MaxPressure baselines.

CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
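To make the explanation pipeline concrete, the following is a minimal sketch, not the thesis's exact implementation, of the tree-based approach: XGBoost regressors standing in for the Q-function, explained with SHAP's TreeExplainer. The state encoding (one feature per incoming lane), the action count, the one-regressor-per-action layout, and all data here are illustrative assumptions.

import numpy as np
import shap
import xgboost as xgb

N_FEATURES, N_ACTIONS = 8, 4  # assumed: one feature per lane, one action per signal phase
rng = np.random.default_rng(0)

# Placeholder experience; in practice these would be states observed in the
# traffic simulator and bootstrapped Q-targets produced by the DQN training loop.
states = rng.random((500, N_FEATURES))
q_targets = rng.random((500, N_ACTIONS))

# Illustrative layout: one XGBoost regressor per action approximates Q(s, a).
models = []
for a in range(N_ACTIONS):
    model = xgb.XGBRegressor(n_estimators=50, max_depth=4)
    model.fit(states, q_targets[:, a])
    models.append(model)

# TreeExplainer computes exact SHAP values for tree ensembles, giving the
# contribution of each lane's feature to each action's estimated Q-value.
for a, model in enumerate(models):
    shap_values = shap.TreeExplainer(model).shap_values(states[:5])
    print(f"action {a}: SHAP values with shape {shap_values.shape}")

Because TreeExplainer is exact and fast for tree models, per-action attributions of this kind can be produced for every state visited during evaluation.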
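For the neural-network approach, SHAP's model-agnostic KernelExplainer matches the black-box treatment described above: it only queries the Q-network's outputs on perturbed inputs, never its parameters or architecture. This is again a hedged sketch; the network shape, state layout, and background data are assumptions rather than the thesis's configuration.

import numpy as np
import shap
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 8, 4  # assumed state and action sizes

# Stand-in for a trained Q-network; in practice the learned weights are loaded.
q_net = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)

def q_values(states: np.ndarray) -> np.ndarray:
    # Black-box view of the agent: states in, one Q-value per action out.
    with torch.no_grad():
        return q_net(torch.as_tensor(states, dtype=torch.float32)).numpy()

# Background states would normally be sampled from the replay buffer;
# random placeholders keep the sketch self-contained.
background = np.random.rand(100, N_FEATURES)
states_to_explain = np.random.rand(5, N_FEATURES)

explainer = shap.KernelExplainer(q_values, background)
# Per-action attributions: how much each lane's feature pushed each action's
# Q-value up or down relative to the background expectation.
shap_values = explainer.shap_values(states_to_explain)
print("chosen actions:", q_values(states_to_explain).argmax(axis=1))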