Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Perceptual modelling of visual quality assessment

Wu, Xinbo 2024. Perceptual modelling of visual quality assessment. PhD Thesis, Cardiff University.
Item availability restricted.

PDF (2024wuxphd (thesis).pdf) - Accepted Post-Print Version, 20MB
Restricted to repository staff only until 19 July 2025 due to copyright restrictions.
Available under a Creative Commons Attribution Non-Commercial No Derivatives licence.

PDF (Cardiff University Electronic Publication Form) - Supplemental Material, 340kB
Restricted to repository staff only.

Abstract

The Human Visual System (HVS) is essential for perceiving image and video quality. Research on human visual attention has significantly enhanced quality assessment methods, and visual saliency, reflecting viewers’ perceptual responses via eye movements, has become crucial in assessing image and video quality. However, the link between visual attention and subjective quality perception in images and videos remains unclear. This study analyses the SVQ160 eye-tracking database to assess how video content and temporal sequencing affect gaze shifts. We examine Quality Induced Saliency Shifts (QSS) and their correlation with video content, time order, and distortion. Three models are developed to simulate QSS behaviours and integrate them into a Video Quality Assessment (VQA) framework. Experimental findings show that these QSS models improve objective VQA performance in predicting video quality.

To explore how the HVS subjectively perceives high-quality images, we developed a new Image Quality Assessment (IQA) database of filtered images (CUMAD), which contains real-world data on human assessments of high-quality images. Using this database, we introduce a novel deep learning-based No-Reference Image Quality Assessment (NR-IQA) model, with a style-aware module designed to learn the discriminative features of filter-altered images. Experimental results demonstrate that the proposed model outperforms 15 state-of-the-art NR-IQA models, achieving a correlation of 0.7253 between predicted image quality scores and the ground truth.

In both IQA and VQA, the Mean Opinion Score (MOS) is widely used to measure perceived quality, but individual differences can affect its reliability. We quantify this variability as the Variance of Opinion Score (VOS) and create a VOS benchmark for IQA, analysing how VOS relates to distortion intensity, distortion type, and scene content. In addition, a simple deep learning-based model is developed to identify images with significant subjective quality variations; it recognises such images with a precision of 80.69%.
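The MOS and VOS statistics described above reduce to simple per-image moments over raw subject ratings, and model performance is reported as a correlation against ground-truth scores (the abstract does not state which correlation coefficient was used; Spearman's rank correlation is shown here as one common choice in IQA/VQA evaluation). The sketch below is a minimal Python illustration with hypothetical ratings and illustrative names, not the thesis's own implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def mos_vos(opinion_scores):
    """Compute Mean Opinion Score (MOS) and Variance of Opinion Score (VOS)
    from a 2-D array of raw ratings with shape (num_images, num_subjects)."""
    scores = np.asarray(opinion_scores, dtype=float)
    mos = scores.mean(axis=1)          # per-image mean across subjects
    vos = scores.var(axis=1, ddof=1)   # per-image sample variance across subjects
    return mos, vos

# Hypothetical ratings: 3 images rated by 5 subjects on a 1-5 scale.
ratings = [[4, 5, 4, 4, 5],
           [2, 3, 2, 4, 1],
           [3, 3, 3, 3, 3]]
mos, vos = mos_vos(ratings)
print("MOS:", mos)   # per-image perceived quality
print("VOS:", vos)   # large values flag images with strong rater disagreement

# Rank correlation between a model's predicted scores and the MOS ground truth,
# as commonly reported for IQA/VQA models (predicted values are illustrative).
predicted = [4.1, 2.8, 3.2]
srocc, _ = spearmanr(predicted, mos)
print("SROCC:", srocc)
```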

Item Type: Thesis (PhD)
Date Type: Completion
Status: Unpublished
Schools: Computer Science & Informatics
Date of First Compliant Deposit: 19 July 2024
Date of Acceptance: 17 July 2024
Last Modified: 26 Jul 2024 10:32
URI: https://orca.cardiff.ac.uk/id/eprint/170692
