Gao, Lancheng, Jia, Ziheng, Zeng, Yunhao, Sun, Wei, Zhang, Yiming, Zhou, Wei, Zhai, Guangtao and Min, Xiongkuo 2025. EEmo-Bench: A benchmark for multi-modal large language models on image evoked emotion assessment. Presented at: MM '25: The 33rd ACM International Conference on Multimedia, Dublin, Ireland, 27–31 October 2025. In: MM '25: Proceedings of the 33rd ACM International Conference on Multimedia. ACM, pp. 7064–7073. doi: 10.1145/3746027.3755777
Abstract
The flourishing of multi-modal large language models (MLLMs) has led to numerous benchmark studies, particularly those evaluating their perception and understanding capabilities. Among these, understanding image-evoked emotions aims to enhance MLLMs' empathy, with significant applications such as human-machine interaction and advertising recommendations. However, current evaluations of this MLLM capability remain coarse-grained, and a systematic and comprehensive assessment is still lacking. To this end, we introduce EEmo-Bench, a novel benchmark dedicated to the analysis of the emotions evoked by images across diverse content categories. Our core contributions include: 1) To capture the diversity of evoked emotions, we adopt an emotion-ranking strategy and employ Valence-Arousal-Dominance (VAD) dimensions as emotional attributes for assessment. Following this methodology, 1,960 images are collected and manually annotated. 2) We design four tasks to evaluate MLLMs' ability to capture the emotions evoked by single images and their associated attributes: Perception, Ranking, Description, and Assessment. Additionally, image-pairwise analysis is introduced to investigate models' proficiency in joint and comparative analysis. In total, we collect 6,773 question-answer pairs and perform a thorough assessment of 19 commonly used MLLMs. The results indicate that while some proprietary and large-scale open-source MLLMs achieve promising overall performance, their analytical capabilities along certain evaluation dimensions remain suboptimal. EEmo-Bench paves the way for further research aimed at enhancing MLLMs' comprehensive perception and understanding of image-evoked emotions, which is crucial for machine-centric emotion perception and understanding. Our code and benchmark datasets are available at https://github.com/workerred/EEmo-Bench.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Published Online |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | ACM |
| ISBN: | 9798400720352 |
| Last Modified: | 18 Nov 2025 10:15 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/182477 |