Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Informatik

Presentation of the Doctoral Research Topic of Mr. Martin Eberlein (Chair of Prof. Grunske)

"Explainable Debugging: Understanding Program Behavior"

  • When: 16.12.2025, from 10:00 to 11:30
  • Where: Humboldt-Kabinett - 3.116, online: Zoom

A Zoom invitation can be found here. (Informatik account required)

Abstract

Software is inherently prone to errors.
Debugging, the process of identifying and resolving the root causes of these errors, is essential but consumes a significant portion of engineering resources.
While automated techniques exist to isolate failure-inducing inputs, current state-of-the-art approaches often fall short of providing actionable insights.
Existing methods typically either struggle with semantic complexity, failing to capture intricate relationships between input features (such as a length field versus the actual payload size), or are hindered by coincidental correlations, producing explanations cluttered with irrelevant properties that hold only by chance.
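
To make both failure modes concrete, consider a minimal hypothetical sketch in Python (the toy format, names, and bug are illustrative, not taken from the thesis). The true cause of the failure is a relationship between two input features, the declared length and the actual payload; in miniature, this is the shape of the Heartbleed bug discussed below. A tool that only matches surface patterns on the observed samples might instead report a property that holds merely by chance:

    # Toy length-prefixed format: the first byte declares the payload length.
    def parse(packet: bytes) -> bytes:
        declared_len = packet[0]            # length field
        payload = packet[1:]                # actual payload
        if declared_len > len(payload):     # true causal condition of the failure
            raise ValueError("buffer over-read")  # stands in for the real crash
        return payload[:declared_len]

    failing = [bytes([9]) + b"abc", bytes([9]) + b"ab"]    # declared length > payload
    passing = [bytes([3]) + b"abc", bytes([2]) + b"abcd"]  # declared length <= payload

    # Coincidence in this tiny sample: every failing packet also starts with
    # byte 9, so a surface-level explainer might report "first byte == 9"
    # instead of the causal relation declared_len > len(payload).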

This thesis introduces Explainable Debugging, a systematic framework designed to automatically derive precise, generalizable, and human-understandable explanations for pathological program behavior.
I formalize Explainable Debugging as an optimization problem: the goal is to find an explanation that maximizes the distinction between pathological and benign inputs while minimizing complexity (sketched as a formula below).
By unifying the expressiveness of semantic pattern matching with rigorous, hypothesis-driven experimentation, this framework isolates true causal relationships from spurious correlations.
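
Read as a formula, one plausible way to state this objective (a schematic reading of this abstract, not necessarily the exact definition used in the thesis) is: given a set F of pathological (failing) inputs and a set P of benign (passing) inputs, search a space Φ of candidate explanations for

    \varphi^{*} = \arg\max_{\varphi \in \Phi} \big[ \mathrm{sep}(\varphi; F, P) - \lambda \cdot \mathrm{size}(\varphi) \big]

where sep(φ; F, P) rewards explanations that hold on the failing inputs and not on the passing ones (for instance, an F1-style score of φ read as a classifier), size(φ) measures syntactic complexity, and λ trades diagnostic precision against readability.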
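
One way to picture the unification itself is the following minimal Python sketch (illustrative only, reusing the hypothetical toy parser from above; this is not the thesis's actual algorithm). Candidate explanations are predicates over input features, and hypothesis-driven experimentation generates fresh inputs that falsify any candidate that merely correlates with failure by chance:

    import random

    def parse(packet: bytes) -> bytes:
        declared_len = packet[0]
        payload = packet[1:]
        if declared_len > len(payload):
            raise ValueError("buffer over-read")
        return payload[:declared_len]

    def fails(packet: bytes) -> bool:
        try:
            parse(packet)
            return False
        except ValueError:
            return True

    # Candidate explanations: semantic predicates over input features.
    candidates = {
        "first byte == 9":               lambda p: p[0] == 9,
        "declared_len > payload length": lambda p: p[0] > len(p) - 1,
    }

    def random_packet() -> bytes:
        payload = bytes(random.randint(0, 255) for _ in range(random.randint(0, 8)))
        return bytes([random.randint(0, 12)]) + payload

    # Each fresh input is a small experiment; a candidate survives only if
    # it agrees with the observed program behavior every single time.
    random.seed(0)
    for _ in range(2000):
        p = random_packet()
        for name, pred in list(candidates.items()):
            if pred(p) != fails(p):
                del candidates[name]     # falsified by a counterexample

    print(list(candidates))  # only the causal length relation survives

The spurious "first byte == 9" candidate is discarded as soon as an experiment produces a failing packet with a different first byte; the causal length relation remains consistent with every experiment.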

I demonstrate the versatility and effectiveness of this framework across three distinct domains.
First, in failure diagnosis, I show how Explainable Debugging identifies the precise semantic root causes of complex bugs, such as the Heartbleed vulnerability, which prior tools fail to characterize accurately.
Second, I apply the framework to software testing, mining input invariants to automatically define valid input specifications (a minimal sketch of this idea follows the abstract).
Finally, I adapt the methodology of Explainable Debugging to machine learning, generating more accurate and more efficient explanations for model mispredictions.
Collectively, these contributions establish Explainable Debugging as a robust methodology that transforms opaque software failures into transparent, actionable diagnoses.
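
As a minimal sketch of the software-testing application (hypothetical samples and features, again using the toy length-prefixed format from above; not the thesis's actual miner): propose candidate invariants over features of known-valid inputs and keep those that hold on every sample, so that the surviving conjunction acts as an input specification:

    # Known-valid samples of the toy length-prefixed format.
    valid_samples = [bytes([3]) + b"abc", bytes([0]), bytes([2]) + b"abcd"]

    # Candidate invariants over simple input features.
    candidates = {
        "declared_len <= payload length": lambda p: p[0] <= len(p) - 1,
        "declared_len == payload length": lambda p: p[0] == len(p) - 1,
        "packet is non-empty":            lambda p: len(p) >= 1,
    }

    # Keep only the invariants that hold on every valid sample.
    spec = {name for name, pred in candidates.items()
            if all(pred(s) for s in valid_samples)}

    print(sorted(spec))
    # -> ['declared_len <= payload length', 'packet is non-empty']

Inputs that satisfy all mined invariants can then be generated or filtered to obtain test inputs that are valid by construction; note how the over-strict "==" candidate is rejected by the third sample.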