HumanSim: Human-Like Multi-Agent Novel Driving Simulation for Corner Case Generation

1Shanghai Jiao Tong University, 2Shanghai Artificial Intelligence Laboratory, *Corresponding Author
The 18th European Conference on Computer Vision ECCV 2024 Workshop on
Multimodal Perception and Comprehension of Corner Cases in Autonomous Driving

Our HumanSim is a novel multi-agent driving simulation featuring human-like behaviors.

(a) HumanSim integrates large language models (LLMs) to help agents plan trajectories, emulating human-like driving styles while keeping the decisions interpretable to humans.

(b) HumanSim offers two convenient ways to generate corner cases on demand (see Paper Sec. 3.2): setting driving characters or providing navigation information. Either way can produce the situation in which vehicle 104 swerves on the road. Modifying characters is the more plausible strategy, while navigation information guides the behaviors more directly.
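The two levers above can be sketched as two optional fields of a per-agent configuration injected into the LLM planning prompt. This is a minimal, hypothetical illustration only; the function and field names are illustrative assumptions, not HumanSim's actual API.

```python
# Hypothetical sketch: the two corner-case levers from (b) become two
# optional fields of the prompt sent to the LLM planner. Names here are
# illustrative assumptions, not HumanSim's real interface.

def build_planning_prompt(agent_id, observation, character=None, navigation=None):
    """Compose an LLM planning prompt for one agent.

    character:  a free-text driving persona, e.g. "aggressive, distracted";
                the corner-case behavior emerges from the persona.
    navigation: an explicit instruction, e.g. "swerve on the road";
                the corner-case behavior is commanded directly.
    """
    lines = [f"You control vehicle {agent_id}.",
             f"Observation: {observation}"]
    if character:
        lines.append(f"Driving character: {character}")
    if navigation:
        lines.append(f"Navigation instruction: {navigation}")
    lines.append("Plan your next trajectory and explain your decision.")
    return "\n".join(lines)

# Lever 1: character-driven (behavior emerges, arguably more plausible).
p1 = build_planning_prompt(104, "clear road ahead", character="drowsy, inattentive")
# Lever 2: navigation-driven (behavior is guided more directly).
p2 = build_planning_prompt(104, "clear road ahead", navigation="swerve on the road")
```

Either prompt could make vehicle 104 swerve; the character version leaves the LLM to derive the maneuver from the persona, while the navigation version specifies it outright.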

Corner Cases

Abstract

Autonomous driving research faces challenges in generating corner case data, which is crucial yet costly. While current methods like diffusion models and Neural Radiance Field (NeRF) have effectively generated visual-level corner cases, they fall short in creating planning-level scenarios.

To address this, we propose HumanSim, a novel human-like multi-agent simulator that leverages large language models (LLMs) to simulate human-like driving behaviors. This approach offers exceptional adaptability, granularity, and situational awareness, enhancing the realism of simulations. HumanSim facilitates the construction of complex corner cases, such as traffic cutting and emergency landings, and balances transparency with efficiency in decision-making.

Experiments show its effectiveness in replicating human driving, and the integration of LLMs makes agent decisions easier for humans to interpret and corner cases easier to construct. HumanSim provides a comprehensive platform for testing and refining next-generation autonomous driving technologies.