Autonomous driving research faces challenges in generating corner case data, which is crucial yet costly to obtain. While current methods such as diffusion models and Neural Radiance Fields (NeRFs) effectively generate visual-level corner cases, they fall short of creating planning-level scenarios.
To address this, we propose HumanSim, a novel human-like multi-agent simulator that leverages large language models (LLMs) to simulate human-like driving behaviors. This approach offers strong adaptability, fine granularity, and situational awareness, enhancing the realism of simulations. HumanSim facilitates the construction of complex corner cases, such as cut-in maneuvers and emergency braking, and balances transparency with efficiency in decision-making.
Experiments demonstrate its effectiveness in replicating human driving behavior, and the integration of LLMs makes agents' decisions easier for humans to interpret and corner cases easier to construct. HumanSim provides a comprehensive platform for testing and refining next-generation autonomous driving technologies.