CAR-bench: Evaluating the Consistency and Limit-Awareness of LLM Agents under Real-World Uncertainty
Abstract
Current LLM agent benchmarks fail to evaluate reliability in real-world scenarios with uncertain user inputs, prompting the creation of CAR-bench to test consistency, uncertainty management, and capability awareness in in-car assistant applications.
Existing benchmarks for Large Language Model (LLM) agents focus on task completion under idealized settings but overlook reliability in real-world, user-facing applications. In domains such as in-car voice assistants, users often issue incomplete or ambiguous requests, creating intrinsic uncertainty that agents must manage through dialogue, tool use, and policy adherence. We introduce CAR-bench, a benchmark for evaluating consistency, uncertainty handling, and capability awareness in multi-turn, tool-using LLM agents in an in-car assistant domain. The environment features an LLM-simulated user, domain policies, and 58 interconnected tools spanning navigation, productivity, charging, and vehicle control. Beyond standard task completion, CAR-bench introduces Hallucination tasks that test agents' limit-awareness under missing tools or information, and Disambiguation tasks that require resolving uncertainty through clarification or internal information gathering. Baseline results reveal large gaps between occasional and consistent success on all task types. Even frontier reasoning LLMs achieve a consistent pass rate below 50% on Disambiguation tasks due to premature actions, and they frequently violate policies or fabricate information to satisfy user requests in Hallucination tasks, underscoring the need for more reliable and self-aware LLM agents in real-world settings.
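To make the occasional-versus-consistent distinction concrete, the sketch below assumes consistency is approximated by repeating each task over k independent trials and counting a task as a consistent pass only if every trial succeeds; the exact metric and trial count used in CAR-bench may differ, and the task ids are hypothetical.

```python
# Minimal sketch of an occasional-vs-consistent pass-rate comparison.
# Assumption: consistency is approximated by k independent trials per task,
# with a consistent pass requiring success on every trial. The paper's exact
# metric may differ.
from typing import Dict, List


def pass_rates(results: Dict[str, List[bool]]) -> Dict[str, float]:
    """Compute occasional (any trial passes) and consistent (all trials pass) rates.

    `results` maps a task id to its list of per-trial success flags.
    """
    n_tasks = len(results)
    occasional = sum(any(trials) for trials in results.values()) / n_tasks
    consistent = sum(all(trials) for trials in results.values()) / n_tasks
    return {"occasional_pass_rate": occasional, "consistent_pass_rate": consistent}


if __name__ == "__main__":
    # Hypothetical trial outcomes for three Disambiguation tasks, k = 4 trials each.
    demo = {
        "disambiguation_01": [True, True, True, True],    # consistent pass
        "disambiguation_02": [True, False, True, True],   # occasional pass only
        "disambiguation_03": [False, False, False, False],
    }
    print(pass_rates(demo))
```

Under this reading, a model can look strong on occasional pass rate while its consistent pass rate stays low, which is the gap the baseline results highlight.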
Community
Why is this gap widening? Frontier models like Claude-Opus-4.6 are crushing base task performance (80%), but hallucination resistance (48%) and disambiguation (46%) lag far behind. What's preventing models from learning when to say 'I need more information' or 'I cannot help with this' as quickly as they learn to complete tasks?
arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/car-bench-evaluating-the-consistency-and-limit-awareness-of-llm-agents-under-real-world-uncertainty-5365-1dc62bc7
- Executive Summary
- Detailed Breakdown
- Practical Applications
Similar papers recommended by the Semantic Scholar API (via Librarian Bot):
- TRIP-Bench: A Benchmark for Long-Horizon Interactive Agents in Real-World Scenarios (2026)
- From Self-Evolving Synthetic Data to Verifiable-Reward RL: Post-Training Multi-turn Interactive Tool-Using Agents (2026)
- User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale (2026)
- TravelBench: A Broader Real-World Benchmark for Multi-Turn and Tool-Using Travel Planning (2025)
- SpeakRL: Synergizing Reasoning, Speaking, and Acting in Language Models with Reinforcement Learning (2025)
- IDRBench: Interactive Deep Research Benchmark (2026)
- The Hierarchy of Agentic Capabilities: Evaluating Frontier Models on Realistic RL Environments (2026)
