DSAEval: Evaluating Data Science Agents on a Wide Range of Real-World Data Science Problems

Anonymous Authors

Abstract

Recent LLM-based data agents aim to automate data science tasks ranging from data analysis to deep learning. However, the open-ended nature of real-world data science problems, which often span multiple taxonomies and lack standard answers, poses a significant challenge for evaluation. To address this, we introduce DSAEval, a benchmark comprising 641 real-world data science problems grounded in 285 diverse datasets, covering both structured and unstructured data (e.g., vision and text). DSAEval incorporates three distinctive features: (1) Multimodal Environment Perception, which enables agents to interpret observations from multiple modalities, including text and vision; (2) Multi-Query Interactions, which mirror the iterative and cumulative nature of real-world data science projects; and (3) Multi-Dimensional Evaluation, which provides a holistic assessment across reasoning, code, and results. We systematically evaluate 11 advanced agentic LLMs on DSAEval. Our results show that Claude-Sonnet-4.5 achieves the strongest overall performance, GPT-5.2 is the most efficient, and MiMo-V2-Flash is the most cost-effective. We further demonstrate that multimodal perception consistently improves performance on vision-related tasks, with gains ranging from 2.04% to 11.30%. Overall, while current data science agents perform well on structured data and routine data analysis workflows, substantial challenges remain in unstructured domains. Finally, we offer critical insights and outline future research directions to advance the development of data science agents.
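To make the interaction protocol concrete, the sketch below illustrates one plausible shape of a multi-query episode: the agent answers a sequence of related queries over the same dataset, its code runs in a sandbox, and each observation returned to the agent can carry both text and images (e.g., rendered plots). All names here (`Observation`, `Transcript`, `run_episode`, and the `agent`/`sandbox` interfaces) are hypothetical illustrations, not the DSAEval API.

```python
# Hypothetical sketch of a DSAEval-style multi-query episode (illustrative only;
# the class and function names are assumptions, not the benchmark's actual API).
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class Observation:
    """What the agent perceives after each code execution."""
    text: str                                           # stdout / tracebacks
    images: List[bytes] = field(default_factory=list)   # e.g., rendered plots as PNG bytes


@dataclass
class Transcript:
    """Accumulated (query, code, observation) triples, later scored along the
    reasoning, code, and result dimensions."""
    steps: List[Tuple[str, str, Observation]] = field(default_factory=list)


def run_episode(agent: Any, sandbox: Any, dataset_path: str, queries: List[str]) -> Transcript:
    """Run one multi-query interaction: all queries share a dataset, and context
    accumulates across turns, mirroring an iterative data science project."""
    transcript = Transcript()
    context = f"Dataset available at {dataset_path}."
    for query in queries:
        code = agent.respond(context, query)   # agent proposes analysis code
        obs = sandbox.execute(code)            # sandbox returns an Observation (text + images)
        transcript.steps.append((query, code, obs))
        context += f"\nQ: {query}\nOutput: {obs.text}"  # earlier results stay visible
    return transcript
```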

Benchmark Statistics

Dataset Distribution

Figure 2: Distribution of the DSAEval benchmark, covering diverse data types (left), domains (center), and task types (right).

Comparison with Existing Benchmarks

| Benchmark | Datasets | Questions | Hetero. Data | Vision-modal Obs. | Multi-step | Deep Learning |
|---|---|---|---|---|---|---|
| DS-1000 (Lai et al., 2023) | – | 1,000 | | | | |
| Infiagent-DABench (Hu et al., 2024) | 52 | 257 | | | | |
| DA-Code (Huang et al., 2024b) | 500 | 500 | | | | |
| MLAgentBench (Huang et al., 2024a) | 13 | 13 | | | | |
| DSEval (Zhang et al., 2024) | 299 | 825 | | | | |
| DSCodeBench (Ouyang et al., 2025) | – | 1,000 | | | | |
| DABstep (Egg et al., 2025) | – | 450 | | | | |
| DSAEval (Ours) | 285 | 641 | ✓ | ✓ | ✓ | ✓ |

🏆 Leaderboard & Overall Performance

Overall Performance

Figure 3: Overall Model Performance. Claude-Sonnet-4.5 leads the benchmark.

| Rank | Model | Total Score | Reasoning | Code | Result |
|---|---|---|---|---|---|
| 1 | Claude-Sonnet-4.5 | 8.164 | 8.970 | 8.590 | 7.240 |
| 2 | GPT-5.2 | 7.713 | 8.270 | 8.400 | 6.780 |
| 3 | MiMo-V2-Flash | 7.644 | 8.140 | 8.540 | 6.600 |
| 4 | Minimax-m2 | 7.642 | 8.100 | 8.440 | 6.700 |
| 5 | Gemini-3-pro | 7.309 | 7.960 | 8.310 | 6.070 |
| 6 | Grok-4.1-fast | 7.254 | 7.870 | 8.070 | 6.180 |
| 7 | GPT-5-nano | 7.069 | 7.700 | 7.850 | 6.010 |
| 8 | DeepSeek-v3.2 | 7.030 | 7.470 | 7.830 | 6.100 |
| 9 | GLM-4.6v | 6.874 | 7.500 | 7.800 | 5.710 |
| 10 | Qwen3-VL-30B-Thinking | 5.324 | 6.560 | 5.320 | 4.400 |
| 11 | Ministral-14b-2512 | 5.182 | 5.880 | 5.740 | 4.240 |
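A note on how the Total Score relates to the three dimensions: every row in the table is reproduced exactly by a fixed weighted average with weights 0.3 (Reasoning), 0.3 (Code), and 0.4 (Result). These weights are inferred from the table values rather than stated explicitly, so the sketch below is an assumption-checking aid, not the official scoring formula.

```python
# Assumption check: Total Score appears to be a weighted average of the three
# dimensions with weights 0.3 / 0.3 / 0.4. Inferred from the leaderboard values,
# not taken from an official formula.
W_REASONING, W_CODE, W_RESULT = 0.3, 0.3, 0.4


def total_score(reasoning: float, code: float, result: float) -> float:
    """Weighted combination of the per-dimension scores."""
    return W_REASONING * reasoning + W_CODE * code + W_RESULT * result


# Example: Claude-Sonnet-4.5's row reproduces the reported total of 8.164.
assert round(total_score(8.970, 8.590, 7.240), 3) == 8.164
# Example: Qwen3-VL-30B-Thinking reproduces 5.324.
assert round(total_score(6.560, 5.320, 4.400), 3) == 5.324
```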

📊 In-Depth Analysis

Fine-Grained Capabilities

Radar Analysis

Figure 4: Performance breakdown by Domain (Left) and Task Type (Right).


Efficiency & Cost-Effectiveness

Efficiency Analysis

Figure 5: Efficiency Analysis. (Left) Total Score vs. Tokens. (Right) Total Score vs. Price.
