Autonomous Vehicles

Autonomous navigation is a key enabler for next-generation mobility systems, from self-driving cars to delivery robots. Autonomous driving requires accurate perception, localization, and planning across diverse environments. A major challenge in this domain is the reliance on high-definition (HD) maps, which are costly to build and maintain and are often limited to major urban areas. To address this challenge, the IRiS Lab is developing a map-less autonomous driving framework. Instead of depending on HD maps, we leverage standard-definition (SD) maps, such as Google Maps, Naver Maps, and OpenStreetMap, together with onboard sensors to enable reliable navigation. Our work spans perception, localization, decision-making, and motion planning under real-world constraints, with the goal of creating scalable, adaptable autonomous systems that operate effectively in diverse settings.


Localization

Researchers: Sangmin Lee, Younghun Cho, Heejin Song

Localization with High-Definition (HD) maps typically involves sensor fusion of commercial-grade GPS, LiDAR, and camera data, matching the current scene with the geometric point cloud stored in the HD map, and employing SLAM algorithms to achieve precise global localization. In contrast, our autonomous driving team focuses on localization without pre-built HD maps (e.g., LiDAR maps). We leverage freely accessible, lightweight Standard Definition (SD) maps, such as OpenStreetMap and Google Maps, utilizing extractable features like building outlines and Google Street View imagery to develop robust localization techniques.
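
As a simplified illustration of the outline-matching idea, the sketch below scores candidate 2D poses by how closely a LiDAR scan, transformed into the map frame, falls onto building-outline segments extracted from OpenStreetMap. The function names and the brute-force grid search are illustrative assumptions, not the pipeline from our papers.

```python
# Simplified 2D sketch: score candidate poses by how well the LiDAR scan,
# transformed into the map frame, lands on OSM building-outline segments.
import numpy as np

def point_to_segment_dist(p, a, b):
    """Distance from 2D point p to the line segment with endpoints a, b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def pose_score(scan_xy, outlines, x, y, yaw):
    """Mean distance from the transformed scan to the nearest outline.

    scan_xy:  (N, 2) LiDAR points in the vehicle frame
    outlines: list of (a, b) segment endpoints in the map frame
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([x, y])
    return np.mean([min(point_to_segment_dist(p, a, b) for a, b in outlines)
                    for p in pts])

def global_localize(scan_xy, outlines, xs, ys, yaws):
    """Exhaustive grid search: return the pose whose scan best hugs the outlines."""
    candidates = [(x, y, yaw) for x in xs for y in ys for yaw in yaws]
    return min(candidates, key=lambda pose: pose_score(scan_xy, outlines, *pose))
```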

Associated Papers

  • Younghun Cho et al., “OpenStreetMap-based LiDAR Global Localization in Urban Environment without Prior LiDAR Map,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4999-5006, 2022.
  • Sangmin Lee et al., “Autonomous Vehicle Localization without Prior High-Definition Map,” IEEE Transactions on Robotics, vol. 40, pp. 2888-2906, April 2024.


Mapping

Researchers: Sangmin Lee, Donghyun Choi

In autonomous driving, accurate vehicle localization and map generation through Simultaneous Localization and Mapping (SLAM) are critical, regardless of High-Definition (HD) map usage. Long-term driving can lead to increased localization errors due to environmental changes, making robust localization and mapping essential. Our team conducts fundamental research on SLAM, focusing on addressing challenges in environments with repetitive structures or numerous dynamic objects, where odometry accuracy from LiDAR or camera-based sensor matching often degrades. We are actively developing methods to enhance SLAM performance in such scenarios.
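
One way to see why repetitive or feature-poor scenes degrade scan matching: if most surface normals point the same way (e.g., along a long tunnel wall), translation along that wall is unobservable. The sketch below shows a common eigenvalue-based degeneracy heuristic; the threshold and names are illustrative assumptions, not the specific method in our papers.

```python
# Heuristic degeneracy check for scan matching, in the spirit of
# eigenvalue-based observability analysis. The threshold is illustrative.
import numpy as np

def degeneracy_check(normals, ratio_thresh=0.05):
    """normals: (N, 3) unit surface normals from the current LiDAR scan.

    Returns (is_degenerate, weakest_direction): a direction with little
    normal support (e.g., along a corridor) constrains registration poorly.
    """
    H = normals.T @ normals                 # 3x3 constraint matrix
    eigvals, eigvecs = np.linalg.eigh(H)    # eigenvalues in ascending order
    weakest_direction = eigvecs[:, 0]       # least-constrained direction
    is_degenerate = eigvals[0] < ratio_thresh * eigvals[-1]
    return is_degenerate, weakest_direction
```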

Associated Papers

  • Donghyun Choi et al., “VFT-LIO: Visual Feature Tracking for Robust LiDAR Inertial Odometry Under Repetitive Patterns,” in IEEE International Conference on Ubiquitous Robots (UR), July 2025.


Perception

Researchers: Sangmin Lee, Donghyun Choi, Heejin Song, Handong Lee

Beyond localization, effective planning and decision-making in HD map-less autonomous vehicles require predicting lane positions, lane semantics, and inter-lane connectivity (topology information) using on-board sensors. To achieve this, our team maximizes the use of data extracted from Standard Definition (SD) maps while developing and applying state-of-the-art deep learning algorithms. We focus on learning the relationships between high-dimensional, multi-modal data from SD maps, LiDAR, and cameras to enhance perception capabilities.
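
As a rough sketch of this kind of multi-modal fusion, the PyTorch skeleton below concatenates rasterized SD-map tiles with camera and LiDAR bird's-eye-view features and predicts per-cell lane occupancy plus a lane-to-lane connectivity matrix. The architecture and every layer size are illustrative assumptions, not the lab's actual model.

```python
# Illustrative fusion skeleton: SD-map raster + camera BEV + LiDAR BEV
# features are fused on a shared grid; separate heads predict lane
# geometry and lane-to-lane topology. All sizes are placeholders.
import torch
import torch.nn as nn

class SDMapFusionNet(nn.Module):
    def __init__(self, c_sd=3, c_cam=64, c_lidar=64, c_hid=128, n_lanes=16):
        super().__init__()
        # Per-modality encoders operating on a common BEV grid.
        self.sd_enc = nn.Conv2d(c_sd, c_hid, 3, padding=1)
        self.cam_enc = nn.Conv2d(c_cam, c_hid, 3, padding=1)
        self.lidar_enc = nn.Conv2d(c_lidar, c_hid, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * c_hid, c_hid, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_hid, c_hid, 3, padding=1), nn.ReLU())
        # Heads: per-cell lane occupancy and a pooled lane-adjacency matrix.
        self.lane_head = nn.Conv2d(c_hid, 1, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.topo_head = nn.Linear(c_hid, n_lanes * n_lanes)
        self.n_lanes = n_lanes

    def forward(self, sd_map, cam_bev, lidar_bev):
        f = torch.cat([self.sd_enc(sd_map),
                       self.cam_enc(cam_bev),
                       self.lidar_enc(lidar_bev)], dim=1)
        f = self.fuse(f)
        lane_logits = self.lane_head(f)                 # (B, 1, H, W)
        g = self.pool(f).flatten(1)                     # (B, c_hid)
        topo_logits = self.topo_head(g).view(-1, self.n_lanes, self.n_lanes)
        return lane_logits, topo_logits
```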

Media: [TBA] Topology Segments - research ongoing



Path Planning

Researchers: Handong Lee

In environments with High-Definition (HD) maps, lane information is fully annotated, so a global planner can be built with simple graph-based search algorithms, followed by real-time local planning that adjusts the path to the surrounding environment. In HD map-less settings, such information is unavailable, so sensor inputs and SD maps must be used to predict maps and generate ego-vehicle trajectories end-to-end via deep learning networks. Our team leverages state-of-the-art deep learning techniques, particularly image and LiDAR fusion, to process the current scene in a high-dimensional space and generate accurate trajectories.
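
The "simple graph-based search" mentioned above can be sketched with Dijkstra's algorithm over a road-network graph; with an HD map the nodes would be annotated lane segments, while with an SD map they are coarse road junctions. The toy graph and edge costs below are purely illustrative.

```python
# Dijkstra over a road-network graph as a minimal global planner.
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}. Returns a node path."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None

# Toy road network: four junctions, edge costs in meters.
road_graph = {"A": [("B", 120.0), ("C", 90.0)],
              "B": [("D", 60.0)],
              "C": [("D", 150.0)],
              "D": []}
print(dijkstra(road_graph, "A", "D"))   # ['A', 'B', 'D']
```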

Associated Papers

  • Handong Lee et al., “Navigation in Underground Parking Lot by Semantic Occupancy Grid Map Prediction,” in IEEE International Conference on Ubiquitous Robots (UR), July 2025.

Media: [TBA] research ongoing



Control




Remote Driving

This research focuses on developing a solution for when self-driving vehicles encounter a malfunction or disabled state. The proposed approach revolves around teleoperation technology, which enables a human operator to remotely control the vehicle. This method is especially useful in scenarios where the autonomous system cannot safely resolve the situation on its own.
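
As a hedged sketch of one way such a teleoperation link could be structured (not the system described here), the snippet below streams timestamped steering/throttle commands over UDP, and the vehicle applies a command only while it is fresh, braking if the link stalls. The message format, port, host name, and timeout are all assumptions.

```python
# Illustrative teleoperation link with a deadman timeout: the operator
# streams timestamped commands; the vehicle brakes if no fresh command
# arrives. Port, host, and timeout values are hypothetical.
import json
import socket
import time

CMD_PORT = 9000          # hypothetical command port
STALE_AFTER_S = 0.3      # deadman timeout: brake if no fresh command

def send_command(sock, steering, throttle, addr=("vehicle.local", CMD_PORT)):
    """Operator side: send one timestamped steering/throttle command."""
    msg = {"t": time.time(), "steering": steering, "throttle": throttle}
    sock.sendto(json.dumps(msg).encode(), addr)

def apply_latest(sock):
    """Vehicle side: apply the newest fresh command, else stop."""
    sock.settimeout(STALE_AFTER_S)
    try:
        data, _ = sock.recvfrom(1024)
        msg = json.loads(data)
        if time.time() - msg["t"] < STALE_AFTER_S:
            return msg["steering"], msg["throttle"]
    except socket.timeout:
        pass
    return 0.0, -1.0     # neutral steering, full brake on a stale link
```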



Real-world Competition Experience

Researchers: Sangmin Lee, Handong Lee, Heejin Song

  • NAVERLABS Mapping & Localization Challenge (2020)
  • HYUNDAI Motor Group Autonomous Vehicle Competition (2014-2019)

Associated Papers

  • Donghyun Choi, Sangmin Lee, Handong Lee, and Jee-Hwan Ryu, “VFT-LIO: Visual Feature Tracking for Robust LiDAR Inertial Odometry Under Repetitive Patterns,” in IEEE International Conference on Ubiquitous Robots (UR), July 2025. [Link]

  • Handong Lee, Donghyun Choi, Heejin Song, Sangmin Lee, and Jee-Hwan Ryu, “Navigation in Underground Parking Lot by Semantic Occupancy Grid Map Prediction,” in IEEE International Conference on Ubiquitous Robots (UR), July 2025. [Link]

  • Sangmin Lee and Jee-Hwan Ryu, “Autonomous Vehicle Localization without Prior High-Definition Map,” in IEEE Transactions on Robotics, vol. 40, pp. 2888-2906, April 2024. [Link]

  • Younghun Cho, Giseop Kim, Sangmin Lee, and Jee-Hwan Ryu, “OpenStreetMap-based LiDAR Global Localization in Urban Environment without Prior LiDAR Map,” in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4999-5006, 2022. [Link]


Autonomous Vehicle Equipment

  • Platform
  • Sensor
  • Computing Server


Map-less Driving Framework

Conventional autonomous driving systems rely on High-Definition (HD) maps, using accurate geometric data from the geographic layer for localization and semantic-layer information, such as lane segment positions and topology, for planning. In the absence of HD maps, however, autonomous vehicles must depend solely on on-board sensors (e.g., cameras, LiDAR, INS-GPS, RADAR) and lightweight Standard Definition (SD) maps, such as Google Maps, which provide satellite imagery and basic road network data. To enable map-less driving under these constraints, our laboratory has developed a framework inspired by human driving behavior: humans typically use SD map-based navigation to determine a global route, infer topological information, observe surrounding vehicle movements, and perform real-time local planning. Drawing on this analogy, our research is structured into four key areas (a minimal pipeline sketch follows the list below):

  • SD-Map Based Localization
  • Simultaneous Localization and Mapping (SLAM)
  • Perception
  • Online Planning
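
A minimal sketch of how these four areas could compose into one driving loop is given below; every function is a hypothetical stub standing in for the corresponding research component, shown only to make the data flow concrete.

```python
# Hypothetical stubs showing how the four research areas compose into
# one map-less driving loop. None of this is the lab's actual code.
def localize(sd_map, lidar, gps):              # SD-map based localization
    return (gps["x"], gps["y"], 0.0)           # stub: fall back to GNSS prior

def update_slam(local_map, lidar, pose):       # SLAM: accumulate scans at pose
    return local_map + [(pose, lidar)]

def perceive(sd_map, camera, lidar):           # lane geometry + topology
    return ["ego_lane"], {("ego_lane", "ego_lane"): True}

def plan(pose, lanes, topology, route):        # online trajectory generation
    return [pose, route[0]]                    # stub: head toward next waypoint

def drive_step(sd_map, sensors, local_map, route):
    pose = localize(sd_map, sensors["lidar"], sensors["gps"])
    local_map = update_slam(local_map, sensors["lidar"], pose)
    lanes, topology = perceive(sd_map, sensors["camera"], sensors["lidar"])
    return plan(pose, lanes, topology, route), local_map
```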