TMP: Temporal Motion Propagation for Online Video Super-Resolution

The official code for the paper "TMP: Temporal Motion Propagation for Online Video Super-Resolution"

Zhengqiang Zhang¹,² | Ruihuang Li¹,² | Shi Guo¹,² | Yang Cao³ | Lei Zhang¹,²

¹The Hong Kong Polytechnic University, ²The PolyU-OPPO Joint Innovation Lab, ³The Hong Kong University of Science and Technology

ABSTRACT

Online video super-resolution (online-VSR) relies heavily on an effective alignment module to aggregate temporal information, while the strict latency requirement makes accurate and efficient alignment very challenging. Though much progress has been achieved, most of the existing online-VSR methods estimate the motion fields of each frame separately to perform alignment, which is computationally redundant and ignores the fact that the motion fields of adjacent frames are correlated. In this work, we propose an efficient Temporal Motion Propagation (TMP) method, which leverages the continuity of the motion field to achieve fast pixel-level alignment among consecutive frames. Specifically, we first propagate the offsets from previous frames to the current frame, and then refine them in the neighborhood, which significantly reduces the matching space and speeds up the offset estimation process. Furthermore, to enhance the robustness of alignment, we perform spatial-wise weighting on the warped features, where the positions with more precise offsets are assigned higher importance. Experiments on benchmark datasets demonstrate that the proposed TMP method achieves leading online-VSR accuracy as well as inference speed.
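To make the speed-up concrete, below is a minimal PyTorch-style sketch of the local refinement step described above: a propagated offset is only adjusted within a small window of candidate displacements instead of being re-estimated from scratch. All names (`refine_offsets`, the search `radius`, the L1 matching cost) are illustrative assumptions for this sketch and are not taken from this repository, whose actual implementation is a CUDA kernel.

```python
import torch
import torch.nn.functional as F

def refine_offsets(feat_prev, feat_cur, offsets, radius=1):
    """Refine propagated offsets by a local search (illustrative sketch only).

    feat_prev, feat_cur: (B, C, H, W) features of the previous / current frame.
    offsets:             (B, 2, H, W) propagated motion field (dx, dy), pointing
                         from the current frame back to the previous frame.
    Only (2*radius+1)^2 candidates per pixel are tested, which is what keeps the
    matching space (and hence the runtime) small.
    """
    B, C, H, W = feat_cur.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_cur.device)   # (2, H, W), (x, y)

    best_cost = torch.full((B, H, W), float("inf"), device=feat_cur.device)
    best_off = offsets.clone()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shift = torch.tensor([dx, dy], dtype=offsets.dtype, device=offsets.device)
            cand = offsets + shift.view(1, 2, 1, 1)                   # candidate offsets
            # Sample previous-frame features at the candidate matching positions.
            pos = base.unsqueeze(0) + cand                            # absolute coordinates
            grid = torch.stack((2 * pos[:, 0] / (W - 1) - 1,
                                2 * pos[:, 1] / (H - 1) - 1), dim=-1) # (B, H, W, 2) in [-1, 1]
            warped = F.grid_sample(feat_prev, grid, mode="bilinear",
                                   padding_mode="border", align_corners=True)
            cost = (warped - feat_cur).abs().mean(dim=1)              # L1 matching cost, (B, H, W)
            better = cost < best_cost
            best_cost = torch.where(better, cost, best_cost)
            best_off = torch.where(better.unsqueeze(1), cand, best_off)
    return best_off, best_cost
```

The returned matching cost can also serve as a per-pixel indicator of how reliable the refined offset is, which is the idea behind the confidence weighting mentioned above.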

FRAMEWORK

  • Illustration of the propagation paths for moving objects and static regions

The OBJ path aims to locate moving objects in the current frame, while the CAM path matches the static regions. The $\text{\color{orange}{orange}}$ arrow represents the estimated motion from $I^{LR}_{t-2}$ to $I^{LR}_{t-1}$, which starts from the $\text{\color{blue}{blue}}$ point and ends at the $\text{\color{orange}{orange}}$ point. The $\text{\color{red}{red}}$ arrow indicates the temporally propagated motion. In the CAM path, the $\text{\color{green}{green}}$ point in $I^{LR}_{t}$ has the same position as the $\text{\color{orange}{orange}}$ point in $I^{LR}_{t-1}$. The $\text{\color{red}{red}}$ points indicate the potential positions of the object in the corresponding frames, and brighter colors represent higher likelihood.
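The two paths can be summarized in a small sketch, assuming the motion field of frame t-1 points back to frame t-2: the CAM path reuses each offset at the same pixel (static background), while the OBJ path carries the offset to the pixel the object is predicted to reach if it keeps moving at the same speed. The function name, the (dx, dy) convention, and the nearest-pixel scatter are assumptions made for illustration; the repository implements this propagation as a CUDA kernel (see basicsr/archs/tmp*).

```python
import torch

def propagate_offsets(prev_offsets):
    """Form two candidate offset fields for frame t from the motion of frame t-1.

    prev_offsets: (B, 2, H, W) offsets (dx, dy) estimated at frame t-1, pointing
                  back to frame t-2 (assumed convention for this sketch).
    Returns:
      cam: static-region path, the offset at the same pixel position is reused.
      obj: moving-object path, the offset is scattered to the pixel where the
           object is expected to land, assuming it keeps moving at the same speed.
    """
    B, _, H, W = prev_offsets.shape
    cam = prev_offsets.clone()

    obj = torch.zeros_like(prev_offsets)
    fwd = -prev_offsets                                        # predicted motion from t-1 to t
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs = xs.to(fwd.device).float()
    ys = ys.to(fwd.device).float()
    tgt_x = (xs + fwd[:, 0]).round().long().clamp(0, W - 1)    # (B, H, W) landing column
    tgt_y = (ys + fwd[:, 1]).round().long().clamp(0, H - 1)    # (B, H, W) landing row
    b_idx = torch.arange(B, device=fwd.device).view(B, 1, 1).expand(B, H, W)
    obj[b_idx, 0, tgt_y, tgt_x] = prev_offsets[:, 0]           # same backward offset,
    obj[b_idx, 1, tgt_y, tgt_x] = prev_offsets[:, 1]           # stored at the new position
    return cam, obj
```

In the actual method the propagated candidates are further refined and selected per pixel according to their matching cost; please refer to the paper for the exact rule.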

  • Architecture of the proposed online-VSR method

Overview of our proposed online-VSR method. Left: the flowchart of the proposed method. There are two major differences between our method and existing methods. One is the temporal motion propagation (TMP) module (highlighted in the $\text{\color{green}{green}}$ box), which propagates the motion field from the previous frame to the current frame. The other is the motion confidence weighted fusion (highlighted in the $\text{\color{orange}{orange}}$ box), which weighs the warped features according to the accuracy of the estimated offsets. Right: the detailed architecture of the TMP module. Best viewed in color.
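As a rough illustration of the motion confidence weighted fusion, the warped features can be down-weighted wherever the refined offset still leaves a large matching residual. The function name, the exponential mapping from cost to confidence, and the concatenation-plus-convolution fusion are assumptions for this sketch, not the exact formulation used in the paper.

```python
import torch
import torch.nn as nn

def confidence_weighted_fusion(cur_feat, warped_feat, matching_cost, fusion_conv):
    """Fuse current and warped previous-frame features, weighted by offset reliability.

    cur_feat, warped_feat: (B, C, H, W)
    matching_cost:         (B, 1, H, W) residual left after offset refinement;
                           a low cost means a trustworthy offset.
    fusion_conv:           any module mapping (B, 2C, H, W) -> (B, C, H, W),
                           e.g. nn.Conv2d(2 * C, C, 3, padding=1).
    """
    confidence = torch.exp(-matching_cost)        # in (0, 1]; close to 1 where alignment is accurate
    fused = fusion_conv(torch.cat((cur_feat, warped_feat * confidence), dim=1))
    return fused
```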

HOW TO USE

  • Prerequisite

    We train and test our project with torch==1.10 and Python 3.7. You can install the required libraries with `pip3 install -r requirements.txt`.

  • Dataset

    Please refer to here to download the REDS and Vimeo90K datasets, and there for the Vid4 dataset.

  • Train

    You can train the model using `python3 basicsr/train.py -opt options/train/TMP/train_TMP.yaml`.

  • Test

    You can test the trained models using `python3 basicsr/test.py -opt options/test/TMP/test_TMP.yaml`.

  • Pretrained Models

    Please download the pretrained models from OneDrive.

Please manually modify the dataset paths and the path of the trained model in the corresponding config files.

RESULTS

Please refer to the paper for more results.

  • Comparison with non-online and online VSR methods

  • Visualized Results on Static Regions and Moving Objects

  • Visualized Results on REDS4

CITATION

@misc{zhang2023tmp,
    title={TMP: Temporal Motion Propagation for Online Video Super-Resolution},
    author={Zhengqiang Zhang and Ruihuang Li and Shi Guo and Yang Cao and Lei Zhang},
    year={2023},
    eprint={2312.09909},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

CONTACT

Please open an issue or contact Zhengqiang Zhang at zhengqiang.zhang@connect.polyu.hk.

LICENSE AND ACKNOWLEDGEMENT

Great thanks to BasicSR. We build our project based on their code. In particular, we implement the CUDA version of TMP and the corresponding network architectures. Please refer to basicsr/archs/tmp* for more details.

This project is released under the Apache 2.0 license.

Please refer to BasicSR's LICENCE.md for more details about the license of the code in BasicSR.
