Dr Salar Arbabi

Postgraduate research student in cooperative decision making for automated driving in urban environments


Autonomous agents that drive on roads shared with human drivers must reason about the nuanced interactions among traffic participants. This poses a highly challenging decision making problem since human behavior is influenced by a multitude of factors (e.g., human intentions and emotions) that are hard to model. This paper presents a decision making approach for autonomous driving, focusing on the complex task of merging into moving traffic where uncertainty emanates from the behavior of other drivers and imperfect sensor measurements. We frame the problem as a partially observable Markov decision process (POMDP) and solve it online with Monte Carlo tree search. The solution to the POMDP is a policy that performs high-level driving maneuvers, such as giving way to an approaching car, keeping a safe distance from the vehicle in front or merging into traffic. Our method leverages a model learned from data to predict the future states of traffic while explicitly accounting for interactions among the surrounding agents. From these predictions, the autonomous vehicle can anticipate the future consequences of its actions on the environment and optimize its trajectory accordingly. We thoroughly test our approach in simulation, showing that the autonomous vehicle can adapt its behavior to different situations. We also compare against other methods, demonstrating an improvement with respect to the considered performance metrics.
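The POMDP in this work is solved online with Monte Carlo tree search. As a rough illustration of the underlying search procedure (not the paper's implementation, which operates over belief states), here is a minimal UCT sketch for a generic decision process; the interface (`step`, `reward`) and all parameter values are illustrative assumptions.

```python
import math
import random

class Node:
    """A search-tree node holding visit counts and a running value estimate."""
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0

def uct_search(root_state, actions, step, reward,
               n_iters=500, depth=10, c=1.4, gamma=0.95):
    """Minimal UCT: `step(state, action)` returns a successor state,
    `reward(state, action)` an immediate reward. Returns the most-visited
    root action after `n_iters` simulations."""
    root = Node(root_state)

    def rollout(state, d):
        # Default policy: random actions, accumulating discounted reward.
        ret, discount = 0.0, 1.0
        for _ in range(d):
            a = random.choice(actions)
            ret += discount * reward(state, a)
            state = step(state, a)
            discount *= gamma
        return ret

    def simulate(node, d):
        if d == 0:
            return 0.0
        untried = [a for a in actions if a not in node.children]
        if untried:
            # Expansion: add one new child, then estimate its value by rollout.
            a = random.choice(untried)
            child = Node(step(node.state, a))
            node.children[a] = child
            q = reward(node.state, a) + gamma * rollout(child.state, d - 1)
        else:
            # Selection: UCB1 trades off value estimates against visit counts.
            a = max(actions, key=lambda act: node.children[act].value
                    + c * math.sqrt(math.log(node.visits + 1)
                                    / (node.children[act].visits + 1e-9)))
            child = node.children[a]
            q = reward(node.state, a) + gamma * simulate(child, d - 1)
        # Backpropagation: incremental mean update of the child's value.
        child.visits += 1
        child.value += (q - child.value) / child.visits
        node.visits += 1
        return q

    for _ in range(n_iters):
        simulate(root, depth)
    return max(root.children, key=lambda a: root.children[a].visits)
```

On a toy chain where only moving right is rewarded, the search reliably selects the rewarding action; the paper's solver additionally maintains a particle-based belief over the hidden driver states described above.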

Salar Arbabi, Davide Tavernini, Saber Fallah, Richard Bowden (2022) Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases, In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3940-3947, IEEE

To plan safe maneuvers and act with foresight, autonomous vehicles must be capable of accurately predicting the uncertain future. In the context of autonomous driving, deep neural networks have been successfully applied to learning predictive models of human driving behavior from data. However, the predictions suffer from cascading errors, resulting in large inaccuracies over long time horizons. Furthermore, the learned models are black boxes, and thus it is often unclear how they arrive at their predictions. In contrast, rule-based models, which are informed by human experts, maintain long-term coherence in their predictions and are human-interpretable. However, such models often lack the expressiveness needed to capture complex real-world dynamics. In this work, we begin to close this gap by embedding the Intelligent Driver Model, a popular hand-crafted driver model, into deep neural networks. Our model's transparency can offer considerable advantages, e.g., in debugging the model and more easily interpreting its predictions. We evaluate our approach on a simulated merging scenario, showing that it yields a robust model that is end-to-end trainable and provides greater transparency at no cost to the model's predictive accuracy.
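The hand-crafted component embedded here, the Intelligent Driver Model, is a standard car-following model. A plain NumPy sketch of the model itself (with textbook-style illustrative parameter values, not those learned in the paper) looks like this:

```python
import numpy as np

def idm_acceleration(v, dv, s,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # desired time headway (s)
                     s0=2.0,     # minimum gap (m)
                     a_max=1.4,  # maximum acceleration (m/s^2)
                     b=2.0):     # comfortable deceleration (m/s^2)
    """Intelligent Driver Model: longitudinal acceleration of a follower.

    v  : follower speed
    dv : approach rate (follower speed minus leader speed)
    s  : bumper-to-bumper gap to the leader
    """
    # Desired dynamic gap, combining standstill gap, headway, and closing terms.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    # Free-road acceleration term minus the interaction (braking) term.
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)
```

In the paper's setting the scalar parameters above are produced by neural networks conditioned on the traffic context, which is what makes the combined model both expressive and interpretable.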

Salar Arbabi, Shilp Dixit, Ziyao Zheng, David Oxtoby, Alexandros Mouzakitis, Saber Fallah (2020) Lane-Change Initiation and Planning Approach for Highly Automated Driving on Freeways, In: 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall)

Quantifying and encoding occupants’ preferences as an objective function for the tactical decision making of autonomous vehicles is a challenging task. This paper presents a low-complexity approach for lane-change initiation and planning to facilitate highly automated driving on freeways. Conditions under which human drivers find different manoeuvres desirable are learned from naturalistic driving data, eliminating the need for an engineered objective function and the incorporation of expert knowledge in the form of rules. Motion planning is formulated as a finite-horizon optimisation problem with safety constraints. It is shown that the decision model can replicate human drivers’ discretionary lane-change decisions with up to 92% accuracy. A further proof-of-concept simulation of an overtaking manoeuvre is presented, in which the actions of the simulated vehicle are logged while the dynamic environment evolves according to ground-truth data recordings.
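The abstract frames motion planning as a finite-horizon optimisation with safety constraints. As a loose illustration of that flavour of problem (not the paper's actual formulation), a longitudinal planner can be sketched as a search over candidate accelerations, discarding any that violate a minimum-gap constraint; all parameter values here are assumed for the example.

```python
import numpy as np

def plan_longitudinal(v0, gap0, v_lead, horizon=5.0, dt=0.1,
                      v_des=30.0, s_min=10.0,
                      accels=np.linspace(-3.0, 2.0, 26)):
    """Pick a constant acceleration over a finite horizon that tracks a
    desired speed while keeping a minimum gap to the lead vehicle.

    v0     : ego initial speed (m/s)
    gap0   : initial gap to the lead vehicle (m)
    v_lead : lead vehicle speed, assumed constant (m/s)
    Returns the best feasible acceleration, or None if all are unsafe.
    """
    t = np.arange(dt, horizon + dt, dt)
    best_a, best_cost = None, np.inf
    for a in accels:
        v = v0 + a * t
        if np.any(v < 0):            # no reversing over the horizon
            continue
        ego_pos = v0 * t + 0.5 * a * t ** 2
        gap = gap0 + v_lead * t - ego_pos
        if np.any(gap < s_min):      # safety constraint on the gap
            continue
        # Cost: speed-tracking error plus a small comfort penalty.
        cost = np.mean((v - v_des) ** 2) + 0.1 * a ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a
```

With a generous gap and a slow ego vehicle, the planner accelerates toward the desired speed; shrinking the gap or slowing the lead vehicle prunes the aggressive candidates, which is the role the safety constraints play in the paper's formulation.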

Salar Arbabi, Davide Tavernini, Saber Fallah, Richard Bowden (2022) Planning for Autonomous Driving via Interaction-Aware Probabilistic Action Policies, In: IEEE Access, 10, pp. 81699-81712, IEEE

Devising planning algorithms for autonomous driving is non-trivial due to the presence of complex and uncertain interaction dynamics between road users. In this paper, we introduce a planning framework encompassing multiple action policies that are learned jointly from episodes of human-human interactions in naturalistic driving. The policy model is composed of encoder-decoder recurrent neural networks for modeling the sequential nature of interactions and mixture density networks for characterizing the probability distributions over driver actions. The model is used to simultaneously generate a finite set of context-dependent candidate plans for an autonomous car and to anticipate the probable future plans of human drivers. This is followed by an evaluation stage to select the plan with the highest expected utility for execution. Our approach leverages rapid sampling of action distributions in parallel on a graphics processing unit, offering fast computation even when modeling the interactions among multiple vehicles and over several time steps. We present ablation experiments and a comparison with two existing baseline methods to highlight several design choices that we found to be essential to our model's success. We test the proposed planning approach in a simulated highway driving environment, showing that by using the model, the autonomous car can plan actions that mimic the interactive behavior of humans.
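The mixture density network heads in this framework output Gaussian mixture parameters per driver, from which candidate action sequences are sampled in parallel. A small CPU/NumPy sketch of that sampling step (the function name, shapes, and one-dimensional action space are illustrative assumptions; the paper batches this on a GPU) could look like this:

```python
import numpy as np

def sample_mdn_actions(pi, mu, sigma, n_samples, rng=None):
    """Draw action samples from per-agent univariate Gaussian mixtures.

    pi    : (n_agents, n_components) mixture weights, rows summing to 1
    mu    : (n_agents, n_components) component means
    sigma : (n_agents, n_components) component standard deviations
    Returns an array of shape (n_samples, n_agents).
    """
    if rng is None:
        rng = np.random.default_rng()
    n_agents, n_comp = pi.shape
    # Choose a mixture component per (sample, agent) pair.
    comp = np.stack([rng.choice(n_comp, size=n_samples, p=pi[i])
                     for i in range(n_agents)], axis=1)  # (n_samples, n_agents)
    idx = np.arange(n_agents)
    # Gather the chosen component's mean/std and draw Gaussian samples.
    return rng.normal(mu[idx, comp], sigma[idx, comp])
```

Because every sample is an independent draw, the whole batch of candidate plans for all surrounding vehicles can be generated in one vectorized pass, which is what makes the evaluation stage over many rollouts tractable.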