Hunchline
Artificial Intelligence · Apr 9, 2026

"Why This Avoidance Maneuver?" Contrastive Explanations in Human-Supervised Maritime Autonomous Navigation

A contrastive explanation system helps ship officers understand why an AI chose a specific collision-avoidance maneuver — but adds cognitive load in simple scenarios.

Scrape Score: 5.4
Academic: 5.5
Commercial: 1.7
Cultural: 5.0
Horizon: Mid (2-5y)
Evidence: low

The Thesis

Autonomous ships need human supervisors who can quickly understand and override AI collision-avoidance decisions — a task that is harder than it sounds when the AI's logic involves multiple vessels, regulations, and competing objectives. This paper proposes 'contrastive explanations': instead of just showing what the AI decided, the interface shows why that choice beats plausible alternatives, using visual overlays and plain-language text. A small study with four experienced marine officers found the approach genuinely helped in complex multi-vessel scenarios. The catch is real: in simpler situations, the extra explanation actually increased mental load rather than reducing it. The implication is that smarter interfaces should offer explanations on demand, not by default.
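The core idea can be sketched in a few lines of code. This is a minimal toy illustration, not the paper's actual system: the maneuver names, objective categories, and scores below are all hypothetical, chosen only to show how a contrastive explanation names the objectives where the chosen maneuver beats a rejected alternative.

```python
# Toy sketch of contrastive explanation for collision avoidance.
# All maneuvers, objectives, and costs here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Maneuver:
    name: str
    costs: dict = field(default_factory=dict)  # lower is better, per objective

def contrastive_explanation(chosen: Maneuver, alternative: Maneuver) -> str:
    """Explain why `chosen` beats `alternative`, listing only the
    objectives where the two options differ meaningfully."""
    lines = [f"Chose '{chosen.name}' over '{alternative.name}' because:"]
    for obj in chosen.costs:
        delta = alternative.costs[obj] - chosen.costs[obj]
        if abs(delta) < 0.05:  # suppress negligible differences
            continue
        verdict = "better" if delta > 0 else "worse"
        lines.append(f"  - {obj}: {verdict} by {abs(delta):.2f}")
    return "\n".join(lines)

# Hypothetical scores for a crossing encounter.
starboard = Maneuver("turn starboard 20 deg", {
    "collision risk": 0.10, "COLREGS compliance": 0.00, "path deviation": 0.30})
slow_down = Maneuver("reduce speed", {
    "collision risk": 0.25, "COLREGS compliance": 0.40, "path deviation": 0.05})

print(contrastive_explanation(starboard, slow_down))
```

The point of the design, as the paper's findings suggest, is selectivity: by filtering out near-zero differences, the explanation surfaces only the trade-offs that actually drove the decision, which helps in multi-vessel encounters but still adds reading load in trivial ones.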

Catalyst

Maritime autonomy regulation is accelerating — the International Maritime Organization is actively developing frameworks for autonomous ship operations, creating immediate commercial pressure to demonstrate human-supervisory interfaces that meet accountability standards. At the same time, explainable AI research has matured enough that contrastive explanation techniques, borrowed from general AI interpretability work, can now be layered onto domain-specific navigation solvers. The convergence of these two trends makes the human-interface problem urgent rather than theoretical.

What's New

Prior maritime AI explanation work focused on post-hoc visualizations — showing a ship's planned path or sensor cone — without comparing that choice to alternatives the system considered and rejected. General explainable AI systems like LIME and SHAP (tools that highlight which input features drove an AI's output) exist but were not designed for real-time, safety-critical maritime contexts where a navigator needs spatial and regulatory reasoning, not just feature weights. This paper applies contrastive explanation design specifically to maritime collision avoidance and tests it with domain experts, producing the first empirical evidence about how such explanations affect nautical officer comprehension and workload.

The Counter

Four marine officers is not a study — it is a conversation with four people, and the paper itself labels the work 'exploratory.' No statistical conclusions can be drawn from a sample this small. The finding that contrastive explanations increase cognitive load in simpler scenarios is arguably the more important result, and it undercuts the thesis: if the explanation system makes easy situations harder, operators may disable or ignore it in exactly the routine conditions where attention lapses cause accidents. The paper also tests a single collision-avoidance system in a controlled interface, not the messy, multi-vendor bridge environment real ships use. Maritime regulation moves slowly, and the gap between a promising interface prototype and a certified, class-approved product is measured in years and millions of dollars of compliance work that this research does not address.

Longs

  • Kongsberg Gruppen (Oslo: KOG) — leading maritime autonomy systems integrator, directly relevant
  • Wärtsilä (Helsinki: WRT1V) — ship navigation and automation systems vendor
  • ANAB / OSK — smaller maritime technology ETF and index exposure
  • ESAB Corp (ESAB) — marine engineering and automation adjacent

Shorts

  • Traditional ECDIS (Electronic Chart Display) vendors like Furuno and JRC — their static chart interfaces assume human situational awareness, not AI-mediated decision-making requiring explanation layers
  • Autonomous ship vendors who ship black-box systems without explainability — regulatory pressure will increasingly demand auditability they have not built

Enablers (Picks & Shovels)

  • International Maritime Organization (IMO) MASS regulatory framework — defines compliance requirements that explanation interfaces must satisfy
  • COLREGS (International Regulations for Preventing Collisions at Sea) — the rule set the AI and explanations must reason over
  • Open-source maritime simulation environments such as OpenCPN — used to prototype and test navigation AI

Private Watchlist

  • Orca AI — maritime AI situational awareness startup
  • Seafar — remotely operated vessel platform requiring supervisor interfaces
  • Avikus (Hyundai subsidiary) — autonomous navigation software for commercial vessels

Resources

The Paper

Automated maritime collision avoidance will rely on human supervision for the foreseeable future. This necessitates transparency into how the system perceives a scenario and plans a maneuver. However, the causal logic behind avoidance maneuvers is often complex and difficult to convey to a navigator. This paper explores how to explain these factors in a selective, understandable manner for supervisors with a nautical background. We propose a method for generating contrastive explanations, which provide human-centric insights by comparing a system's proposed solution against relevant alternatives. To evaluate this, we developed a framework that uses visual and textual cues to highlight key objectives from a state-of-the-art collision avoidance system. An exploratory user study with four experienced marine officers suggests that contrastive explanations support the understanding of the system's objectives. However, our findings also reveal that while these explanations are highly valuable in complex multi-vessel encounters, they can increase cognitive workload, suggesting that future maritime interfaces may benefit most from demand-driven or scenario-specific explanation strategies.

Synthesized 4/27/2026, 10:44:20 PM · claude-sonnet-4-6