Stablevqa: A deep no-reference quality assessment model for video stability
Proceedings of the 31st ACM International Conference on Multimedia, 2023•dl.acm.org
Video shakiness is an unpleasant distortion of User-Generated Content (UGC) videos, usually caused by unstable hand-held cameras. Although many video stabilization algorithms have been proposed in recent years, no specific and accurate metric exists for comprehensively evaluating video stability. Indeed, most existing quality assessment models evaluate video quality as a whole without specifically taking the subjective experience of video stability into consideration, so they cannot measure video stability explicitly and precisely when severe shakes are present. In addition, no large-scale public video database covers various degrees of shakiness with corresponding subjective scores, which hinders the development of Video Quality Assessment for Stability (VQA-S). To this end, we build a new database named StableDB that contains 1,952 diversely shaky UGC videos, where each video has a Mean Opinion Score (MOS) on the degree of video stability rated by 34 subjects. Moreover, we elaborately design a novel VQA-S model named StableVQA, which consists of three feature extractors to acquire the optical flow, semantic, and blur features respectively, and a regression layer to predict the final stability score. Extensive experiments demonstrate that StableVQA achieves a higher correlation with subjective opinions than existing VQA-S models and generic VQA models. The database and code are available at https://github.com/QMME/StableVQA.
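The three-branch design the abstract describes (optical-flow, semantic, and blur feature extractors feeding a regression layer that outputs one stability score) can be sketched as below. This is a minimal illustrative sketch, not the paper's actual networks: the branch internals here are stand-in pooled statistics, and all function names and the linear regression head are assumptions for illustration.

```python
import numpy as np

def flow_features(frames):
    # Stand-in for the optical-flow branch: statistics of inter-frame
    # differences as a crude proxy for apparent motion between frames.
    diffs = np.abs(np.diff(frames, axis=0))
    return np.array([diffs.mean(), diffs.std()])

def semantic_features(frames):
    # Stand-in for the semantic branch: global intensity statistics
    # (a real model would use a pretrained CNN backbone here).
    return np.array([frames.mean(), frames.std()])

def blur_features(frames):
    # Stand-in for the blur branch: mean gradient magnitudes per axis;
    # sharper frames yield larger values.
    gx = np.abs(np.diff(frames, axis=2)).mean()
    gy = np.abs(np.diff(frames, axis=1)).mean()
    return np.array([gx, gy])

def stability_score(frames, w, b=0.0):
    # Regression layer: concatenate the three branch outputs and map
    # them linearly to a single scalar stability score.
    feats = np.concatenate([flow_features(frames),
                            semantic_features(frames),
                            blur_features(frames)])
    return float(feats @ w + b)

# Toy usage: 8 grayscale frames of 16x16 pixels, random weights.
rng = np.random.default_rng(0)
frames = rng.random((8, 16, 16))
w = rng.random(6)
score = stability_score(frames, w)
```

In the actual StableVQA model these branches are learned deep feature extractors trained against the StableDB MOS labels; the sketch only mirrors the data flow of "three feature streams, then a regression layer."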