Image Processing (Session 6)
Learning to Predict Localized Distortions in Rendered Images
Article first published online: 25 NOV 2013
© 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Computer Graphics Forum
Volume 32, Issue 7, pages 401–410, October 2013
How to Cite
Čadík, M., Herzog, R., Mantiuk, R., Mantiuk, R., Myszkowski, K. and Seidel, H.-P. (2013), Learning to Predict Localized Distortions in Rendered Images. Computer Graphics Forum, 32: 401–410. doi: 10.1111/cgf.12248
- Issue published online: 25 NOV 2013
- I.3.3 [Computer Graphics]: Picture/Image Generation—Image Quality Assessment
In this work, we present an analysis of feature descriptors for objective image quality assessment. We explore a large space of possible features, including components of existing image quality metrics as well as many traditional computer vision and statistical features. Additionally, we propose new features motivated by human perception, and we analyze visual saliency maps acquired using an eye tracker in our user experiments. The discriminative power of the features is assessed by means of a machine learning framework, revealing the importance of each feature for the image quality assessment task. Furthermore, we propose a new data-driven full-reference image quality metric which outperforms current state-of-the-art metrics. The metric was trained on subjective ground truth data combining two publicly available datasets. For the sake of completeness, we create a new synthetic testing dataset including experimentally measured subjective distortion maps. Finally, using the same machine learning framework, we optimize the parameters of popular existing metrics.
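The data-driven pipeline the abstract describes, extracting per-pixel features from a reference/distorted image pair and regressing them against a subjective distortion map, can be illustrated with a minimal sketch. The feature set and learner here are illustrative stand-ins (simple difference and gradient features with linear least squares), not the paper's actual descriptors or training framework:

```python
import numpy as np

def per_pixel_features(ref, dist):
    """Stack simple per-pixel features comparing a distorted image to its
    reference: absolute difference, squared difference, and gradient
    differences (illustrative stand-ins for a richer feature set)."""
    d = dist - ref
    gx = np.zeros_like(ref)
    gx[:, 1:] = np.diff(dist, axis=1) - np.diff(ref, axis=1)
    gy = np.zeros_like(ref)
    gy[1:, :] = np.diff(dist, axis=0) - np.diff(ref, axis=0)
    feats = np.stack([np.abs(d), d ** 2, np.abs(gx), np.abs(gy)], axis=-1)
    return feats.reshape(-1, feats.shape[-1])

def train_metric(ref, dist, subj_map):
    """Fit a linear least-squares predictor of the subjective distortion
    map from the features (a stronger learner would be used in practice;
    linear regression keeps the sketch minimal)."""
    X = per_pixel_features(ref, dist)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias term
    w, *_ = np.linalg.lstsq(X, subj_map.ravel(), rcond=None)
    return w

def predict_map(ref, dist, w):
    """Apply the learned weights to produce a predicted distortion map."""
    X = per_pixel_features(ref, dist)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ w).reshape(ref.shape)

# Toy example: noise injected into the right half of the image; the
# "subjective" ground-truth map marks that half as distorted.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
dist = ref.copy()
dist[:, 16:] += 0.2 * rng.standard_normal((32, 16))
subj = np.zeros((32, 32))
subj[:, 16:] = 1.0

w = train_metric(ref, dist, subj)
pred = predict_map(ref, dist, w)
```

The predicted map should score the noisy half of the image higher than the clean half, mirroring how a learned full-reference metric localizes distortions.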