Humboldt-Universität zu Berlin - Faculty of Mathematics and Natural Sciences - Software Engineering

Dissertation Rehearsal Talk: Arut Prakash Kaleeswaran

  • When: 29.03.2023, from 13:00 to 14:00
  • Where: online via Zoom

Arut Prakash Kaleeswaran will give a test presentation of his PhD defense on the topic: Explanation of the Model Checker Verification Results



The rapid growth of features and functionalities in the modern automotive domain makes ensuring the safety of any sophisticated system increasingly difficult. To instill confidence in these systems, the demand for formal verification is escalating, which necessitates a greater reliance on model checking techniques.

Whenever new requirements are introduced for a system, the correctness and consistency of the system specification must be verified, which is often done manually in industrial settings. One viable way to overcome the disadvantages of this manual analysis is to employ contract-based design, which can automate the verification process to determine whether the refinements of top-level requirements are consistent. Thus, verification can be performed iteratively to ensure the system's correctness and consistency in the face of any change in specifications.
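To make the contract idea concrete, consider components that each carry an assume/guarantee contract; a refinement is consistent when the composed sub-contract guarantees imply the top-level guarantee. The following minimal sketch (all component names and formulas are illustrative, not taken from the thesis) checks such an implication by brute force over propositional assignments:

```python
from itertools import product

def holds(formula, vars_):
    """True iff the propositional formula holds for every assignment (validity)."""
    return all(formula(dict(zip(vars_, bits)))
               for bits in product([False, True], repeat=len(vars_)))

# Hypothetical top-level contract: whenever power is on, braking must be safe.
top = lambda e: (not e["power"]) or e["brake_ok"]

# Hypothetical sub-contracts: a sensor guarantees a signal when powered,
# and a controller guarantees safe braking when it sees the signal.
sub = lambda e: ((not e["power"]) or e["signal"]) and \
                ((not e["signal"]) or e["brake_ok"])

# The refinement is consistent iff the composed sub-guarantees imply the top guarantee.
refinement_ok = holds(lambda e: (not sub(e)) or top(e),
                      ["power", "signal", "brake_ok"])
print(refinement_ok)  # True for this toy decomposition
```

Real contract frameworks perform this check symbolically over temporal-logic contracts rather than by enumeration, but the consistency question being answered is the same.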

Having said that, it is still challenging to deploy formal approaches in industry due to their lack of usability and the difficulty of interpreting their verification results. For instance, if the model checker identifies an inconsistency during verification, it generates a counterexample and indicates that the given input specifications are inconsistent. The formidable challenge here is to comprehend the generated counterexample, which is often lengthy, cryptic, and complex. Furthermore, it is the engineer's responsibility to identify the inconsistent specification among a potentially huge set of specifications.

This PhD thesis proposes a counterexample explanation approach for formal methods that simplifies and encourages their use by presenting user-friendly explanations of the verification results. The approach identifies the relevant information in the verification result and presents it as natural-language-like statements. It extracts this relevant information by identifying the inconsistent specifications among the set of specifications, as well as the erroneous states and variables in the counterexample. Presenting all of this extracted information in a natural-language-like explanation makes it easier for engineers to comprehend the error than understanding the cryptic and complex raw model checker output. As a result, the proposed approach minimizes the volume of information engineers must handle to comprehend and resolve the inconsistency.
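The extraction step described above can be pictured as follows. A counterexample is essentially a sequence of states (variable assignments); an explanation tool can locate the erroneous final state, determine which variables are relevant, and render them in plain English. This is a simplified sketch of that idea, not the thesis's actual algorithm, and all variable names are made up:

```python
# A counterexample trace modeled as a list of states (variable assignments).
# Model checkers emit such traces as long textual listings, one state per step.
trace = [
    {"step": 1, "mode": "IDLE",   "request": False, "ack": False},
    {"step": 2, "mode": "ACTIVE", "request": True,  "ack": False},
    {"step": 3, "mode": "ACTIVE", "request": True,  "ack": False},  # ack never granted
]

def explain(trace, violated_property):
    """Render the erroneous final state of a trace as a plain-English sentence."""
    last, prev = trace[-1], trace[-2]
    changed = [v for v in last if v != "step" and last[v] != prev[v]]
    stable = [v for v in last if v != "step" and last[v] == prev[v]]
    return (f"Property '{violated_property}' fails at step {last['step']}: "
            + (f"variable(s) {', '.join(changed)} changed, " if changed else "")
            + f"while {', '.join(f'{v} = {last[v]}' for v in stable)}.")

print(explain(trace, "every request is eventually acknowledged"))
```

Even this toy shows the payoff: instead of scanning the full trace, the engineer reads one sentence naming the failing step and the variable values that matter.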

The counterexample explanation approach is evaluated using two methods: (1) evaluation with different application examples, and (2) a user study designed as a one-group pretest-posttest experiment. The outcomes of both methods are favorable: the results of the evaluation with application examples indicate that the counterexample explanation approach is applicable to real-world industrial applications, and the results of the user study indicate that the approach makes it substantially easier for engineers to comprehend the verification results, especially when compared to interpreting the raw model checker output.


Prof. Dr. Lars Grunske will gladly share the access details for the Zoom online meeting upon request.