Learning Continuous Depth Representation via Geometric Spatial Aggregator
DOI:
https://doi.org/10.1609/aaai.v37i3.25369

Keywords:
CV: Computational Photography, Image & Video Synthesis; CV: Low Level & Physics-Based Vision

Abstract
Depth map super-resolution (DSR) is a fundamental task in 3D computer vision. Although arbitrary-scale DSR is the more realistic setting, previous approaches predominantly suffer from inefficient upsampling at real-numbered scales. To explicitly address this issue, we propose a novel continuous depth representation for DSR. The heart of this representation is our proposed Geometric Spatial Aggregator (GSA), which exploits a distance field modulated by the arbitrarily upsampled target grid, thereby explicitly introducing geometric information into feature aggregation and target generation. Furthermore, building on GSA, we present a transformer-style backbone named GeoDSR, which constructs a principled functional mapping between local coordinates and the high-resolution output, endowing our model with arbitrary shape transformation to serve diverse zooming demands. Extensive experimental results on standard depth map benchmarks, e.g., NYU v2, demonstrate that the proposed framework achieves significant restoration gains in arbitrary-scale depth map super-resolution compared with the prior art. Our code is available at https://github.com/nana01219/GeoDSR.
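To make the abstract's core mechanism concrete, below is a minimal PyTorch sketch of a distance-field-modulated aggregator that maps arbitrary continuous query coordinates to depth values, in the spirit of GSA. This is an illustrative simplification, not the authors' implementation (see the linked repository for GeoDSR); the module name, MLP sizes, and nearest-cell sampling are assumptions.

```python
# Hypothetical sketch of distance-field-modulated aggregation for
# arbitrary-scale depth upsampling. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistanceModulatedAggregator(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Learned "distance field": maps a query-to-cell-center offset
        # to a per-channel modulation weight.
        self.dist_mlp = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, feat_dim)
        )
        self.head = nn.Linear(feat_dim, 1)  # one depth value per query

    @staticmethod
    def _nearest_center(coord: torch.Tensor, n: int) -> torch.Tensor:
        # Snap a coordinate in [-1, 1] to the nearest of n cell centers
        # (centers at -1 + (2i + 1)/n, matching align_corners=False).
        idx = ((coord + 1) * n / 2).floor().clamp(0, n - 1)
        return -1 + (2 * idx + 1) / n

    def forward(self, lr_feat: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # lr_feat: (B, C, h, w) low-resolution depth features
        # queries: (B, N, 2) continuous (x, y) coords in [-1, 1]; N is
        # arbitrary, so any real-numbered upsampling scale is supported.
        B, C, h, w = lr_feat.shape
        sampled = F.grid_sample(
            lr_feat, queries.view(B, 1, -1, 2),
            mode="nearest", padding_mode="border", align_corners=False,
        ).squeeze(2).transpose(1, 2)                       # (B, N, C)
        # Offset of each query from its LR cell center, normalized by
        # cell size so the weighting is resolution-agnostic.
        rel_x = (queries[..., 0] - self._nearest_center(queries[..., 0], w)) * w
        rel_y = (queries[..., 1] - self._nearest_center(queries[..., 1], h)) * h
        weight = self.dist_mlp(torch.stack([rel_x, rel_y], dim=-1))  # (B, N, C)
        return self.head(sampled * weight)                 # (B, N, 1)


# Usage: a non-integer scale (~23.7x) just means generating more coordinates.
agg = DistanceModulatedAggregator(feat_dim=32)
lr = torch.randn(1, 32, 16, 16)
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 379), torch.linspace(-1, 1, 379), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).view(1, -1, 2)
depth = agg(lr, coords).view(1, 379, 379)
```

Because the decoder consumes coordinates rather than a fixed pixel grid, the same trained weights serve every target resolution; the geometric offset term is what lets the aggregation adapt to where each query lands inside its low-resolution cell.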
Published
2023-06-26
How to Cite
Wang, X., Chen, X., Ni, B., Tong, Z., & Wang, H. (2023). Learning Continuous Depth Representation via Geometric Spatial Aggregator. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 2698-2706. https://doi.org/10.1609/aaai.v37i3.25369
Section
AAAI Technical Track on Computer Vision III