TY - JOUR
T1 - A comparative study of super-resolution algorithms for video streaming application
AU - He, Xiaonan
AU - Qiao, Yuansong
AU - Lee, Brian
AU - Ye, Yuhang
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2024/4
Y1 - 2024/4
N2 - The escalating consumption of superior quality streaming videos among digital users has intensified the exploration of Video Super-Resolution (VSR) methodologies. Implementing VSR on the user end enhances video resolution without the need for additional bandwidth or reliance on localised or edge computing resources. In the contemporary digital era, the proliferation of high-quality video content and the relative simplicity of VSR dataset generation have bolstered the popularity of Deep Neural Network-based VSR (DNN-VSR) approaches. Such dataset generation typically involves pairing high-resolution videos with their down-sampled low-resolution equivalents as training instances. Nonetheless, current DNN-VSR techniques predominantly concentrate on enriching down-sampled videos, such as those produced through Bicubic Interpolation (BI), without factoring in the inherent codec loss within video streaming applications, consequently constraining their practicality. This research scrutinises five state-of-the-art (SOTA) DNN-VSR algorithms, contrasting their performance on streaming videos using the Fast Forward Moving Picture Experts Group (FFMPEG) tool to emulate codec loss. Our analysis also integrates subjective testing to address the limitations of objective metrics for VSR evaluation. The manuscript concludes with an introspective discussion of the results and outlines potential avenues for further investigation in the domain.
AB - The escalating consumption of superior quality streaming videos among digital users has intensified the exploration of Video Super-Resolution (VSR) methodologies. Implementing VSR on the user end enhances video resolution without the need for additional bandwidth or reliance on localised or edge computing resources. In the contemporary digital era, the proliferation of high-quality video content and the relative simplicity of VSR dataset generation have bolstered the popularity of Deep Neural Network-based VSR (DNN-VSR) approaches. Such dataset generation typically involves pairing high-resolution videos with their down-sampled low-resolution equivalents as training instances. Nonetheless, current DNN-VSR techniques predominantly concentrate on enriching down-sampled videos, such as those produced through Bicubic Interpolation (BI), without factoring in the inherent codec loss within video streaming applications, consequently constraining their practicality. This research scrutinises five state-of-the-art (SOTA) DNN-VSR algorithms, contrasting their performance on streaming videos using the Fast Forward Moving Picture Experts Group (FFMPEG) tool to emulate codec loss. Our analysis also integrates subjective testing to address the limitations of objective metrics for VSR evaluation. The manuscript concludes with an introspective discussion of the results and outlines potential avenues for further investigation in the domain.
KW - FFMPEG
KW - PSNR
KW - SSIM
KW - Video streaming
KW - Video super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85174166986&partnerID=8YFLogxK
U2 - 10.1007/s11042-023-17230-8
DO - 10.1007/s11042-023-17230-8
M3 - Article
AN - SCOPUS:85174166986
SN - 1380-7501
VL - 83
SP - 43493
EP - 43512
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 14
ER -