Paper:
Study on Real-Time Point Cloud Superimposition on Camera Image to Assist Environmental Three-Dimensional Laser Scanning
Kenta Ohno*,†, Hiroaki Date**, and Satoshi Kanai**
*Graduate School of Information Science and Technology, Hokkaido University
Kita 14, Nishi 9, Kita-ku, Sapporo, Hokkaido 060-0814, Japan
†Corresponding author
**Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan
Recently, three-dimensional (3D) laser scanning using terrestrial laser scanners (TLSs) has been widely used in plant manufacturing, civil engineering and construction, and surveying. To reduce unscanned regions and acquire high-quality scanned point clouds, it is desirable for the operator to be able to confirm the scanned point cloud immediately and intuitively. In this study, we therefore developed a method that superimposes the scanned point cloud on the actual environment to assist environmental 3D laser scanning, allowing the operator to check the scanned point cloud and unscanned regions in real time on a camera image. The method consists of extracting correspondences between the camera image and an image generated from the point cloud while accounting for unscanned regions, estimating the camera position and attitude in the point cloud coordinate system by sampling the correspondence points, and superimposing the scanned point cloud and unscanned regions on the camera image. When the proposed method was applied to two environments, a boiler room and a university office, the estimated camera pose had a mean position error of approximately 150 mm and a mean attitude error of approximately 1°, and the scanned point cloud and unscanned regions were superimposed on the camera image on a tablet PC at approximately 1 fps.
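As a rough illustration of the pose-estimation step described above (a minimal sketch, not the authors' exact implementation), the following Python/OpenCV [21] code matches AKAZE features [12] between the live camera frame and an image rendered from the scanned point cloud, filters the matches with Lowe's ratio test [10], and estimates the camera pose with RANSAC-based PnP [18]. The `pixel_to_3d` lookup is a hypothetical helper standing in for the paper's depth-aware point-cloud rendering, and `K` and `dist` are camera intrinsics and distortion coefficients obtained by standard calibration [16].

```python
import cv2
import numpy as np

def estimate_camera_pose(camera_img, rendered_img, pixel_to_3d, K, dist):
    """Estimate the camera pose in the point-cloud coordinate system.

    camera_img   : live camera frame (grayscale)
    rendered_img : image rendered from the scanned point cloud (grayscale)
    pixel_to_3d  : hypothetical (u, v) -> 3D-point lookup for rendered pixels;
                   returns None for pixels belonging to unscanned regions
    K, dist      : camera intrinsics / distortion from calibration [16]
    """
    # 1. Detect and describe binary AKAZE features [12] in both images.
    akaze = cv2.AKAZE_create()
    kp_c, des_c = akaze.detectAndCompute(camera_img, None)
    kp_r, des_r = akaze.detectAndCompute(rendered_img, None)

    # 2. Match rendered -> camera descriptors (Hamming distance for binary
    #    features) and keep matches passing Lowe's ratio test [10].
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    obj_pts, img_pts = [], []
    for pair in matcher.knnMatch(des_r, des_c, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.75 * n.distance:
            p3d = pixel_to_3d(kp_r[m.queryIdx].pt)  # 3D point behind the rendered pixel
            if p3d is not None:                     # skip unscanned-region pixels
                obj_pts.append(p3d)
                img_pts.append(kp_c[m.trainIdx].pt)

    # 3. Robust PnP with RANSAC [18] rejects the remaining outlier correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, dist, reprojectionError=8.0)
    return (rvec, tvec) if ok else None
```

Given the estimated pose, the superimposition step reduces to projecting the scanned points into the live frame; assuming `scan_pts` (N×3 scanned points) and `frame` (the current camera image) are available, a sketch would be:

```python
# Superimposition: project the scanned points into the live frame and mark them.
pts2d, _ = cv2.projectPoints(np.float32(scan_pts), rvec, tvec, K, dist)
for u, v in pts2d.reshape(-1, 2).astype(int):
    if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
        cv2.circle(frame, (int(u), int(v)), 1, (0, 255, 0), -1)
```

Image areas that receive no projected points then indicate candidate unscanned regions.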
- [1] Y. Midorikawa and H. Masuda, “Extraction of Rotational Surfaces and Generalized Cylinders from Point-Clouds Using Section Curves,” Int. J. Automation Technol., Vol.12, No.6, pp. 901-910, 2018.
- [2] T. Takahashi, H. Date, and S. Kanai, “Automatic Indoor Environment Modeling from Laser-scanned Point Clouds using Graph-based Regular Arrangement Recognition,” Proc. of the 4th Int. Conf. on Civil and Building Engineering Informatics, pp. 368-375, 2019.
- [3] E. Wakisaka, H. Date, and S. Kanai, “Model-based next-best-view planning of terrestrial laser scanner for HVAC facility renovation,” Computer-Aided Design and Applications, Vol.15, No.3, pp. 353-366, 2017.
- [4] Y. Kitada, Y. Yasumuro, H. Dan, R. Matsushita, and T. Nishigata, “Optimization Scenario for Large Scale 3D-Scanning Plans based on SFM and MVS,” J. of Japan Society of Civil Engineers, Ser. F3, Vol.71, No.2, pp. I_169-I_175, 2015.
- [5] M. F. Fallon, H. Johannsson, and J. J. Leonard, “Efficient scene simulation for robust monte carlo localization using an RGB-D camera,” Proc. of the 2012 IEEE Int. Conf. on Robotics and Automation, pp. 1663-1670, 2012.
- [6] D. Rozenberszki and A. Majdik, “LOL: Lidar-only Odometry and Localization in 3D point cloud maps,” Proc. of the 2020 IEEE Int. Conf. on Robotics and Automation, pp. 4379-4385, 2020.
- [7] A. Kurobe, H. Kinoshita, and H. Saito, “Vehicle Trajectory Estimation Method by Drive Recorder Images and Point Cloud of Surrounding Environment,” J. of the Japan Society for Precision Engineering, Vol.85, No.3, pp. 274-281, 2019 (in Japanese).
- [8] D. Hahnel, D. Schulz, and W. Burgard, “Map building with mobile robots in populated environments,” Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Vol.1, pp. 496-501, 2002.
- [9] S. Thrun, W. Burgard, and D. Fox, “A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping,” Proc. of the Millennium Conf. IEEE Int. Conf. on Robotics and Automation, Vol.1, pp. 321-328, 2000.
- [10] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. of Computer Vision, Vol.60, No.2, pp. 91-110, 2004.
- [11] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, “KAZE Features,” Proc. of the European Conf. on Computer Vision 2012, pp. 214-227, 2012.
- [12] P. F. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast explicit diffusion for accelerated features in nonlinear scale spaces,” Proc. of British Machine Vision Conf., pp. 13.1-13.11, 2013.
- [13] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” Proc. of the IEEE Int. Conf. on Computer Vision, pp. 2564-2571, 2011.
- [14] S. A. K. Tareen and Z. Saleem, “A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK,” Proc. of the 2018 Int. Conf. on Computing, Mathematics and Engineering Technologies, pp. 1-10, 2018.
- [15] L. Li, K. Hasegawa, I. Nii, and S. Tanaka, “Fused Transparent Visualization of Point Cloud Data and Background Photographic Image for Tangible Cultural Heritage Assets,” Int. J. of Geo-Information, Vol.8, No.8, pp. 343-357, 2019.
- [16] Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.22, No.11, pp. 1330-1334, 2000.
- [17] R. Hartley and A. Zisserman, “Multiple View Geometry in Computer Vision,” Cambridge University Press, 2004.
- [18] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, Vol.24, No.6, pp. 381-395, 1981.
- [19] P. Kamousi, S. Lazard, A. Maheshwari, and S. Wuhrer, “Analysis of Farthest Point Sampling for Approximating Geodesics in a Graph,” Computational Geometry, Vol.57, pp. 1-7, 2016.
- [20] A. J. Walker, “New fast method for generating discrete random numbers with arbitrary frequency distributions,” Electronics Letters, Vol.10, No.8, pp. 127-128, 1974.
- [21] OpenCV. http://opencv.org [Accessed June 27, 2019]
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.