Data-Efficient Autonomous Driving Framework

Recent research in autonomous driving generally follows two major directions. The first is the Data-Driven approach, exemplified by Tesla's Full Self-Driving (FSD), which relies on massive driving datasets. The second is the HD Map-based approach, which relies on pre-built high-definition maps.

While the Data-Driven method achieves high performance by learning from large volumes of real driving data, it requires immense resources and time for data collection and training, and it generalizes poorly to new environments. In contrast, the HD Map-based method allows for stable and accurate path planning, but map construction and maintenance are very costly, and the vehicle cannot operate in unmapped areas.

Our lab proposes and researches a ‘Data-Efficient Light-weight Autonomous Driving’ approach, which aims to leverage the strengths of both methods while compensating for their weaknesses.


Core Approach of the Lab

Our research focuses on the following three core elements:

1. Data-efficient Learning

  • Background: To overcome the inefficiency of building large-scale datasets.
  • Key Research: Developing data-efficient learning methods such as Active Learning, Domain Adaptation, and Sim-to-Real Transfer to maximize model performance with only a small amount of high-quality data (a minimal sketch of one such strategy follows below).
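
As an illustration of the Active Learning direction above, the sketch below implements entropy-based uncertainty sampling: given the current model's softmax outputs on an unlabeled pool, it selects the samples whose labels would be most informative to acquire. The function name and the toy data are illustrative, not part of the lab's actual pipeline.

```python
import numpy as np

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the unlabeled samples the current model is least certain about.

    probs: (N, C) softmax outputs of the model on the unlabeled pool.
    Returns the indices of the `budget` highest-entropy samples.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

# Toy pool: 1000 unlabeled samples, 10 classes; request labels for the top 50.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
query_indices = select_for_labeling(probs, budget=50)
```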

2. Public Map Source Utilization

  • Background: Implementing autonomous driving systems by utilizing publicly available maps without the high cost of HD Map construction.
  • Key Research: Studying methods to process publicly available map information (e.g., OpenStreetMap, public navigation tools) to make it suitable for autonomous driving (e.g., refinement, calibration, light-weighting) and designing perception and planning modules based on this information (see the sketch below).
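
A minimal sketch of working with a public map source, assuming the osmnx package (a Python wrapper around OpenStreetMap data): it downloads the drivable road network around a point, projects it to a metric frame, and keeps only the attributes a planner needs. The coordinates are illustrative.

```python
import osmnx as ox

# Download the drivable road network around a point of interest
# (coordinates are illustrative: near Seoul City Hall).
G = ox.graph_from_point((37.5665, 126.9780), dist=500, network_type="drive")

# Project to a metric CRS so edge lengths are in meters.
G_proj = ox.project_graph(G)

# "Light-weighting": keep only the attributes the downstream planner needs.
lanes = {
    (u, v, k): data.get("lanes", "unknown")
    for u, v, k, data in G_proj.edges(keys=True, data=True)
}
print(f"{G_proj.number_of_edges()} road segments, lane info for "
      f"{sum(v != 'unknown' for v in lanes.values())} of them")
```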

3. Light-weight System Architecture

  • Background: Aiming for a system that can perform autonomous driving quickly and efficiently without heavy computation.
  • Key Research: Developing light-weight deep learning models optimized for embedded environments and securing real-time performance through efficient sensor-fusion algorithms that reduce computational cost (one standard building block is sketched below).
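
One standard building block for such light-weight models is the depthwise separable convolution popularized by MobileNet; the PyTorch sketch below is a generic illustration, not a model from our publications.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: a depthwise 3x3 conv followed by a 1x1
    pointwise conv, costing roughly (1/C_out + 1/9) of a standard
    3x3 convolution in multiply-adds."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A single cheap block applied to one camera-sized feature map.
x = torch.randn(1, 32, 480, 640)
y = DepthwiseSeparableConv(32, 64)(x)  # -> shape (1, 64, 480, 640)
```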

Major Research Areas

The table below summarizes our primary research areas, goals, and features.

Research Area | Goal | Feature
Perception | Accurate environment recognition with less data | Object/Road Recognition based on Self-Supervised Learning
Localization | Precise position estimation using public maps and sensor data | Light-weight Localization Technology integrating GNSS/IMU and map data
Planning | Safe and smooth path generation with limited information | Driving Strategy establishment based on a Light-weight Model, considering prediction uncertainty


1. Perception

Background: Developing robust autonomous driving systems through AI-based perception.

Key Research: Developing reliable and efficient perception methods, including Object Detection, Lane Segmentation, and Occupancy Prediction, to accurately understand the surrounding environment from sensor data (a minimal inference sketch follows below).
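
As a concrete (and deliberately generic) example of the Object Detection component, the sketch below runs a COCO-pretrained detector from torchvision on a dummy frame and keeps only confident boxes; it stands in for, and is not, the lab's own perception models. It assumes torchvision >= 0.13 for the `weights` argument.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO-pretrained detector as a stand-in for a custom perception model.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# One dummy RGB frame; in practice a camera image scaled to [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Keep confident detections only.
keep = detections["scores"] > 0.5
boxes, labels = detections["boxes"][keep], detections["labels"][keep]
print(f"{len(boxes)} objects above the 0.5 confidence threshold")
```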


2. Localization

Background: Ensuring stable autonomous driving through precise localization based on public maps and SLAM in situations where RTK-GPS and HD Maps are unavailable or unreliable.

Key Research: Researching precise localization methods that combine publicly available map information (e.g., OpenStreetMap, commercial navigation maps) with SLAM (Simultaneous Localization and Mapping) technology; one map-aided step is sketched below.
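
To make the idea concrete, here is a minimal sketch (NumPy only, toy data) of one ingredient of map-aided localization: projecting a noisy GNSS fix onto the nearest road segment taken from a public map. A real system would fuse this constraint with LiDAR/IMU odometry in a pose graph; the helper name is hypothetical.

```python
import numpy as np

def snap_to_road(gnss_xy: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Project a noisy GNSS fix onto the closest road segment.

    gnss_xy:  (2,) position in a local metric frame.
    segments: (N, 2, 2) road polylines from a public map, as endpoint pairs.
    """
    a, b = segments[:, 0], segments[:, 1]                 # segment endpoints
    ab, ap = b - a, gnss_xy - a
    t = np.clip(np.einsum("ij,ij->i", ap, ab) /
                np.einsum("ij,ij->i", ab, ab), 0.0, 1.0)  # clamp to segment
    proj = a + t[:, None] * ab                            # closest point per segment
    best = np.argmin(np.linalg.norm(proj - gnss_xy, axis=1))
    return proj[best]

# Two toy road segments and a fix that is 4 m off the east-west road.
segments = np.array([[[0, 0], [100, 0]], [[0, 0], [0, 100]]], dtype=float)
print(snap_to_road(np.array([30.0, 4.0]), segments))  # -> [30. 0.]
```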


Associated Papers & Demonstrations

  • [IEEE RA-L 2022] Y. Cho et al., “OpenStreetMap-Based LiDAR Global Localization in Urban Environment Without a Prior LiDAR Map”

  • [IEEE T-RO 2024] S. Lee et al., “Autonomous Vehicle Localization without Prior HD Map”

  • [IEEE RA-L 2026] S. Lee et al., “LSV-Loc: LiDAR to StreetView Image Cross-Modal Localization”

3. Planning

Background: Generating the final plan from the hierarchical information produced by the Localization and Perception modules.

Key Research: Researching human-like planning methods for diverse and complex environments, guided by public map sources (a route-search sketch follows below).
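
As a toy illustration of map-guided route planning, the sketch below runs A* over a small hand-written road graph; in practice the graph and node positions would come from the light-weighted OpenStreetMap data described above, and all names here are illustrative.

```python
import heapq

def a_star(graph, start, goal, pos):
    """A* route search over a road graph.

    graph: {node: [(neighbor, cost), ...]}
    pos:   {node: (x, y)} used for the straight-line distance heuristic.
    """
    def h(n):  # admissible heuristic: Euclidean distance to the goal
        (x1, y1), (x2, y2) = pos[n], pos[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    frontier, came_from, g = [(h(start), start)], {}, {start: 0.0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                      # reconstruct the route
            route = [node]
            while node in came_from:
                node = came_from[node]
                route.append(node)
            return route[::-1]
        for nbr, cost in graph.get(node, []):
            new_g = g[node] + cost
            if new_g < g.get(nbr, float("inf")):
                g[nbr], came_from[nbr] = new_g, node
                heapq.heappush(frontier, (new_g + h(nbr), nbr))
    return None  # goal unreachable

graph = {"A": [("B", 1.0)], "B": [("C", 1.5)], "C": []}
pos = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
print(a_star(graph, "A", "C", pos))  # -> ['A', 'B', 'C']
```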




Research Equipment

Deep Learning Servers

  • DL Server A: AMD Ryzen Threadripper 7960X / DDR5 ECC 256GB / NVIDIA RTX 4090 24GB x4
  • DL Server B: Intel Xeon W-2133 / DDR4 ECC 256GB / NVIDIA RTX 4090 24GB x4
  • DL Server C: AMD Ryzen Threadripper 7960X / DDR5 ECC 128GB / NVIDIA RTX A6000 48GB x4
  • DL Server D: Intel Core i9-10900K / DDR4 128GB / NVIDIA TITAN V 12GB x4

Sensors

  • LiDAR: Velodyne VLP-16, Ouster OS1-128
  • Camera: Sensing GMSL2 Camera, Intel RealSense D455, Intel RealSense L515
  • IMU: Xsens MTi-30G, SBG Systems Navsight Apogee IMU
  • GPS: SBG Systems Navsight Main Unit
  • Antenna: VeroStar VSP6037L, Taoglas XHP 50

Vehicles

  • Hyundai IONIQ 5
  • Hyundai i30
  • Clearpath Husky A200