VFT-LIO: Visual Feature Tracking for Robust LiDAR Inertial Odometry Under Repetitive Patterns

Abstract

Recent advancements in autonomous vehicle odometry estimation have been largely driven by the integration of various sensor technologies. Among these, Light Detection and Ranging (LiDAR) and cameras play a crucial role; however, both exhibit inherent limitations. In particular, cameras, though widely used, are highly susceptible to illumination changes. In contrast, LiDAR is robust to such variations, making it a powerful tool in Simultaneous Localization and Mapping (SLAM). To enhance LiDAR performance, numerous sensor fusion approaches incorporating Inertial Measurement Units (IMUs) have been proposed. Nonetheless, LiDAR-based methods still struggle to accurately estimate vehicle states in environments with repetitive patterns. This paper introduces a novel framework that improves LiDAR odometry accuracy in repetitive-pattern environments by leveraging the complementary strengths of cameras and LiDAR. Specifically, we employ a visual feature tracking-based approach that operates on 2D intensity images generated from 3D point cloud data. These projected intensity images enable robust feature extraction while remaining resilient to illumination changes. The proposed method is evaluated on real-time vehicle state estimation tasks using datasets containing repetitive patterns. Experimental results demonstrate that our approach outperforms traditional LiDAR-based methods, validating the effectiveness of incorporating LiDAR-vision techniques in such challenging environments.
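The core idea of generating 2D intensity images from 3D point clouds can be illustrated with a spherical projection, as commonly used for spinning LiDARs. The sketch below is a minimal illustration, not the paper's implementation: the image resolution, vertical field of view, and function name are placeholder assumptions (the values resemble a 64-beam sensor), and the paper does not specify its projection model.

```python
import numpy as np

def project_to_intensity_image(points, intensity, h=64, w=1024,
                               fov_up=np.deg2rad(2.0),
                               fov_down=np.deg2rad(-24.8)):
    """Spherically project 3D LiDAR points into a 2D intensity image.

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    intensity: (N,) reflectance values.
    h, w, fov_up, fov_down are sensor-dependent placeholders
    (here loosely resembling a 64-beam spinning LiDAR).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-9), -1.0, 1.0))

    # Map azimuth to columns and elevation to rows.
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(np.int32)
    fov = fov_up - fov_down
    v = ((fov_up - pitch) / fov * h).astype(np.int32)

    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img = np.zeros((h, w), dtype=np.float32)
    # Keep the closest return per pixel: write far points first,
    # so nearer points overwrite them.
    order = np.argsort(-depth[valid])
    img[v[valid][order], u[valid][order]] = intensity[valid][order]
    return img
```

On such an image, standard 2D feature detectors and trackers (e.g. corner features with optical flow) can be applied, which is what makes the pipeline insensitive to ambient illumination: the intensity channel comes from the LiDAR's own active return, not from external light.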

Publication
2025 22nd International Conference on Ubiquitous Robots (UR)