GTP-Force: Game-Theoretic Trajectory Prediction through Distributed Reinforcement Learning
Abstract
This paper introduces Game-theoretic Trajectory Prediction through distributed reinForcement learning (GTP-Force), a system that tackles the challenge of predicting joint pedestrian trajectories in multi-agent scenarios. GTP-Force uses decentralized reinforcement learning agents to personalize neural networks for each competing player based on its noncooperative preferences and social interactions with others. By identifying Nash equilibria, GTP-Force accurately predicts joint trajectories while minimizing overall system loss in noncooperative environments. The system outperforms existing state-of-the-art trajectory predictors, achieving an average displacement error of 0.19m on the ETH+UCY dataset and 80% accuracy on the Orange dataset, i.e., 0.03m lower and 5% higher than the best-performing baseline, respectively. Additionally, GTP-Force considerably reduces the model size of social mobility predictors compared to approaches based on classical game theory.
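The abstract stops at this high-level description. As a rough, hypothetical illustration of the game-theoretic idea only (iterated best response over a small set of candidate trajectories until a pure-strategy Nash equilibrium is reached), the sketch below is not GTP-Force's distributed reinforcement learning method; all names (e.g., best_response_dynamics, agent_cost, collision_penalty) and cost terms are assumptions made for the example.

```python
import numpy as np

def collision_penalty(traj_a, traj_b, radius=0.5, weight=10.0):
    """Penalty that grows when two trajectories come closer than `radius`."""
    dists = np.linalg.norm(traj_a - traj_b, axis=1)
    return weight * np.sum(np.maximum(0.0, radius - dists))

def agent_cost(i, choice, choices, candidates, preferred):
    """Hypothetical noncooperative cost: deviation from agent i's preferred path
    plus interaction penalties against the other agents' current choices."""
    traj = candidates[i][choice]
    cost = np.linalg.norm(traj - preferred[i])
    for j, cj in enumerate(choices):
        if j != i:
            cost += collision_penalty(traj, candidates[j][cj])
    return cost

def best_response_dynamics(candidates, preferred, max_rounds=50):
    """Iterated best response over discrete candidate trajectories.
    Stops at a joint choice where no agent can lower its own cost by
    deviating unilaterally, i.e., a pure-strategy Nash equilibrium."""
    n = len(candidates)
    choices = [0] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            costs = [agent_cost(i, c, choices, candidates, preferred)
                     for c in range(len(candidates[i]))]
            best = int(np.argmin(costs))
            if best != choices[i]:
                choices[i] = best
                changed = True
        if not changed:  # fixed point reached
            break
    return choices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    horizon, n_agents, n_candidates = 8, 3, 4
    # Each agent has a preferred path and a few noisy candidate trajectories.
    preferred = [np.cumsum(rng.normal(0.3, 0.05, (horizon, 2)), axis=0) + i
                 for i in range(n_agents)]
    candidates = [[p + rng.normal(0.0, 0.2, p.shape) for _ in range(n_candidates)]
                  for p in preferred]
    print("Equilibrium joint choice:", best_response_dynamics(candidates, preferred))
```

In GTP-Force itself, per-agent preferences and interactions are captured by personalized neural networks trained with decentralized reinforcement learning, as described in the abstract, rather than by enumerating candidate trajectories as in this sketch.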
Type
Publication
Emami, Negar; Di Maio, Antonio; Braun, Torsten (2023). GTP-Force: Game-Theoretic Trajectory Prediction through Distributed Reinforcement Learning (In Press). In: The 20th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2023). IEEE Xplore: IEEE