Asymmetry Vulnerability and Physical Attacks on Online Map Construction for Autonomous Driving

Anonymous authors

Abstract

High-definition (HD) maps provide precise environmental information essential for prediction and planning in autonomous driving (AD) systems. Due to the high cost of labeling and maintenance, recent research has turned to online HD map construction using onboard sensor data, offering wider coverage and more timely updates for autonomous vehicles (AVs). However, the robustness of online map construction under adversarial conditions remains underexplored. In this paper, we present a systematic vulnerability analysis of online map construction models, which reveals that these models exhibit an inherent bias toward predicting symmetric road structures. In asymmetric scenes like forks or merges, this bias often causes the model to mistakenly predict a straight boundary that mirrors the opposite side. We demonstrate that this vulnerability persists in the real world and can be reliably triggered by obstruction or targeted interference. Leveraging this vulnerability, we propose a novel two-stage attack framework capable of manipulating online constructed maps. First, our method identifies vulnerable asymmetric scenes along the victim AV's potential route. Then, we optimize the location and pattern of camera-blinding attacks and adversarial patch attacks. Evaluations on a public AD dataset demonstrate that our attacks can degrade mapping accuracy by up to 11.2% in average precision, render up to 44% of targeted routes unreachable, and increase collision rates with real-world road boundaries by up to 26%. These attacks are also validated on a real-world testbed vehicle. We further analyze the root causes of the symmetry bias, attributing them to training data imbalance, model architecture, and map element representation. Based on these findings, we propose asymmetric data fine-tuning as a targeted defense, which significantly improves model robustness. To the best of our knowledge, this study presents the first vulnerability assessment of online map construction models and introduces the first digital and physical attack against them.
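To make the two-stage framework concrete, the sketch below outlines it in Python. This is a minimal sketch under simplifying assumptions: the function names, the scene dictionary layout, and the random-search loop are illustrative, and the caller supplies `render_attack` (applies blinding or a patch to the camera images) and `map_ap` (scores the resulting map); it is not the paper's released code.

```python
import numpy as np

def find_asymmetric_scenes(route_scenes, divergence_threshold=2.0):
    """Stage 1 (sketch): keep scenes along the victim AV's route whose left and
    right boundaries diverge, i.e. forks/merges where the symmetry bias can be
    triggered. Each scene dict holds (N, 2) boundary polylines sampled at the
    same N longitudinal positions (an assumed layout)."""
    vulnerable = []
    for scene in route_scenes:
        left, right = scene["left_boundary"], scene["right_boundary"]
        # Lateral gap between the two boundaries at each sampled position.
        gap = np.linalg.norm(left - right, axis=1)
        if gap.max() - gap.min() > divergence_threshold:  # gap varies -> asymmetric
            vulnerable.append(scene)
    return vulnerable

def optimize_attack(model, scene, candidate_positions, patterns,
                    render_attack, map_ap, n_trials=100, rng=None):
    """Stage 2 (sketch): search over physical attack placement and pattern
    (camera blinding or adversarial patch) for the combination that most
    degrades the online map model's AP on this scene."""
    rng = rng or np.random.default_rng()
    best = None
    for _ in range(n_trials):
        pos = candidate_positions[rng.integers(len(candidate_positions))]
        pat = patterns[rng.integers(len(patterns))]
        attacked_images = render_attack(scene["images"], pos, pat)
        ap = map_ap(model(attacked_images), scene["gt_map"])  # lower AP = stronger attack
        if best is None or ap < best[0]:
            best = (ap, pos, pat)
    return best  # (achieved AP, attack position, attack pattern)
```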

Real-World Experiments

Figure 1: Road Straightening Attack using a flashlight (corresponding to Figure 8(c) in the paper).

Figure 2: Road Straightening Attack using an adversarial patch board (corresponding to Figure 8(d) in the paper).

Figure 3: Early Turn Attack using a flashlight (corresponding to Figure 8(g) in the paper).

Figure 4: Early Turn Attack using an adversarial patch board (corresponding to Figure 8(h) in the paper).

Real-world Demo Videos

Clean Driving Condition

Road Straightening Attack Using Flashlight

Road Straightening Attack During Victim AV Movement

The predicted road structure becomes symmetric due to the flashlight-induced interference captured by the victim AV's cameras.

Map Attack Impact on E2E AD Model

End-to-end (E2E) autonomous driving models are increasingly popular, often incorporating online map construction as a key module. We extend our experiments by launching Road Straightening and Early Turn attacks on VAD, a widely used E2E model, across 100 asymmetric scenes. Results demonstrate that our proposed attacks not only compromise dedicated online map construction models but also significantly degrade both map perception and planning performance in E2E autonomous driving systems.

| Setting | AP_boundary (%) | AP_divider (%) | AP_ped (%) | mAP (%) | avg. L2 distance (m) |
|---|---|---|---|---|---|
| Clean | 45.6 | 58.2 | 48.7 | 50.8 | 0.77 |
| Road Straightening Attack (Blinding) | 21.1 | 22.8 | 22.8 | 22.2 | 3.71 |
| Road Straightening Attack (Adv. patch) | 16.1 | 22.3 | 19.8 | 19.4 | 3.69 |
| Early Turn Attack (Blinding) | 22.1 | 28.3 | 25.6 | 25.3 | 3.70 |
| Early Turn Attack (Adv. patch) | 15.4 | 24.1 | 21.9 | 20.4 | 3.71 |
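For reference, the planning metric in the last column (avg. L2 distance) is the mean Euclidean distance between the planned trajectory and the ground-truth future trajectory. The sketch below shows one way to compute it, assuming waypoints sampled at matching horizons; the numbers in the example are illustrative only and do not reproduce the table above.

```python
import numpy as np

def avg_l2_distance(planned_xy, gt_xy):
    """Mean L2 error (meters) between planned and ground-truth future waypoints.
    Both inputs are (T, 2) arrays of BEV (x, y) positions at matching horizons."""
    per_waypoint = np.linalg.norm(planned_xy - gt_xy, axis=1)
    return float(per_waypoint.mean())

# Illustrative example: under a Road Straightening Attack the plan keeps going
# straight while the ground-truth trajectory bends into the fork.
planned = np.array([[0.0, 5.0], [0.0, 10.0], [0.0, 15.0]])  # straight-ahead plan
gt      = np.array([[1.2, 4.8], [3.5, 9.3], [6.8, 13.0]])   # ground-truth fork trajectory
print(f"avg. L2 distance: {avg_l2_distance(planned, gt):.2f} m")
```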

Visualization Example

Clean Scenario

Clean scenario: The victim AV successfully makes a left turn at the fork.

Attack Scenario

Road Straightening Attack (RSA) via flashlight: The E2E model (VAD) predicts a straight road and plans to continue straight.