Embedded Systems

Model-based Verification - Fundamentals and Industrial Practice

Lecturers Thomas Kropf
Jürgen Ruf
Lecture A 104
Instructors Jürgen Ruf
Patrick Heckeler
Sebastian Burg

Hanno Eichelberger
Stefan Huster
Jörg Behrend
Tutorial 25.04.2013, 5:30 p.m., Sand 13/14, Room A 104
Scope 2 SWS / 4 LP
Entry in course catalog Campus


Reliability, security, accuracy, and robustness of software are becoming increasingly important. Mistakes in system development, including critical ones, arise from logical errors in the specification and/or in the implementation of the software itself. The compilers and programming languages used also play an important role here. To avoid errors, the programming languages used are often restricted to specific language constructs, both to prevent dynamic malfunctions (memory overflows, memory leaks, etc.) and to simplify the analysis and verification of these software components. The verification techniques range from static code analysis of programs and their specifications to combinations of automated proof systems and model checkers.

Besides error prevention and fault detection, fault tolerance (e.g. through redundancy) is an interesting approach to increasing the reliability of safety-critical systems. Here, techniques such as testing, runtime verification, program observation, monitoring, and consistency checking come into play. Another aspect is software quality assurance, for example the certification of safety-related systems, which relies on the compliance of the software and its specified properties with safety standards such as ISO 26262 and IEC 61508. In this context, libraries, tools, compilers, system components, and third-party software play an important role.

The aim of this seminar is to provide insight into the theory of software verification and the corresponding research topics and tools. A second goal is to examine methods that are currently used in industrial software development units. The lecturers have relevant experience in the design and verification of systems in industrial environments.


  1. UPPAAL: UPPAAL is an integrated environment for modelling, validation, and verification of systems. It contains several modules, including the UPPAAL TRON module for black-box conformance testing of timed software components running on embedded controllers. The goal of this topic is to present the concepts behind the UPPAAL TRON tool and to describe, among other things, its online testing mechanism. Supervisor: Hanno Eichelberger
  2. Event-based vs. Sample-based Runtime Verification: The runtime verification approach tests software components by monitoring the conformance of specified properties during the execution of the components. There are different approaches to monitoring these properties. One approach checks the properties whenever internal or external events occur. This variant has, however, the disadvantage that the monitoring overhead is unpredictable, especially at high event frequencies. Another approach samples the properties at a constant frequency and is therefore more predictable, but the sample-based variant can miss error conditions. The goal of this topic is to present the two approaches and compare them. Supervisor: Hanno Eichelberger
  3. Formalization and Verification of Behavioral Correctness of Dynamic Software Updates: Correctly updating running software systems on the fly is mostly considered at the code level rather than at the behavioral level, where an incorrect update can produce unexpected behavior. Behavioral correctness should be proven with theorem provers. Supervisor: Sebastian Burg
  4. Cross-Entropy for Satisfiability: Using probabilistic sampling methods to find a valid solution to a deterministic problem. Supervisor: Sebastian Burg
  5. Model-based Testing: Model-based testing has become increasingly popular in recent years. Major reasons include: (1) the need for quality assurance for increasingly complex systems, (2) the emerging model-centric development paradigm, e.g., UML and MDA, with its seemingly direct connection to testing, and (3) the advent of test-centered development methodologies. Model-based testing relies on execution traces of behavior models. They are used as test cases for an implementation: input and expected output. Supervisor: Patrick Heckeler
  6. Automatic Generation of Test Cases from UML Models: The underlying paper presents a novel technique for creating test cases from UML models. In this technique, UML diagrams of an application, such as the use case diagram, class diagram, and sequence diagram, are considered for creating the test cases. A graph is created to store the necessary information that can be extracted from these diagrams and from a data dictionary expressed in OCL for the same application. The graph is then scanned to generate test cases suitable for system testing. Supervisor: Patrick Heckeler
  7. A Hardware-Dependent Model for SAT-based Verification of Interrupt-Driven Low-level Embedded System Software: The verification of the interaction of hardware and software is key to ensuring the correct function of embedded systems. One approach to tackling this problem is to model the hardware functions in the software control flow graph. This yields a large Boolean formula that can be handed over to dedicated SAT solvers. One big problem in this modelling approach is the representation of interrupt service routines, which can stop the main thread's execution at any time. Supervisor: Jürgen Ruf
  8. Liggesmeyer, P.; Software-Qualität, Chapter 14.4/14.5, Modelling and Analysis of Safety and Reliability of Software: Safety-critical software is widely used in embedded systems. Therefore, reliability and availability of these systems become essential, and quantitative assessments of system quality have to be made. This topic covers analytic and statistical methods for quantitative measurement. Supervisor: Jürgen Ruf
  9. Graph-based Verification with TrekSoC: TrekSoC automatically generates self-verifying C test cases to run on the embedded processors in SoCs. This verifies your chips more quickly and more thoroughly than an external testbench, hand-written C tests, or running only production code on the processors. TrekSoC’s generated test cases target all aspects of full-SoC verification and work in a variety of different environments. Supervisor: Jörg Behrend
  10. Modular Specification and Verification: Today, object-oriented software is common in industrial practice. This poses new challenges for corresponding verification methodologies. This paper discusses these challenges and presents current solutions. Supervisor: Stefan Huster
  11. Collaborative Verification and Testing: Recent verification methodologies combine results from formal proofs and dynamic testing. This paper presents a methodology based on Microsoft's verification and testing frameworks for C#. Supervisor: Stefan Huster
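The two monitoring variants contrasted in topic 2 can be sketched in a few lines. The following is a minimal illustration, not taken from any of the referenced papers: the trace, the checked property, and the sampling period are invented examples chosen so that a transient violation falls between two sampling points.

```python
def event_based_monitor(trace, prop):
    """Check the property at every state change (event)."""
    violations = []
    for i, state in enumerate(trace):
        if not prop(state):  # one check per event: overhead grows with event rate
            violations.append(i)
    return violations

def sample_based_monitor(trace, prop, period):
    """Check the property only every `period` steps (constant, predictable overhead)."""
    violations = []
    for i in range(0, len(trace), period):
        if not prop(trace[i]):
            violations.append(i)
    return violations

# Example: the property "value <= 10" is briefly violated at step 7 only.
trace = [3, 4, 5, 6, 5, 4, 5, 12, 5, 4, 3, 4, 5, 6, 5, 4]
prop = lambda v: v <= 10

print(event_based_monitor(trace, prop))             # [7]: transient violation caught
print(sample_based_monitor(trace, prop, period=4))  # []: violation between samples is missed
```

The sketch shows the trade-off named in the topic description: the event-based monitor pays one check per event but catches the transient violation, while the sample-based monitor has constant overhead but overlooks it.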
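Topic 4's idea of attacking a deterministic problem with probabilistic methods can be sketched as a textbook cross-entropy loop for SAT. This is an illustrative sketch, not the algorithm from any specific paper: assignments are sampled from a product distribution over the variables, scored by the number of satisfied clauses, and the distribution is shifted toward the best ("elite") samples. The example formula and all parameter values are invented.

```python
import random

def cross_entropy_sat(clauses, n_vars, pop=200, elite_frac=0.1, iters=50, seed=0):
    """Cross-entropy method for SAT (illustrative sketch).

    `clauses` uses DIMACS-style literals: integer k means variable k is True,
    -k means variable k is False. Returns a satisfying assignment or None.
    """
    rng = random.Random(seed)
    p = [0.5] * n_vars  # p[i] = probability that variable i+1 is sampled True

    def score(a):  # number of clauses satisfied by assignment a
        return sum(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

    for _ in range(iters):
        samples = [[rng.random() < p[i] for i in range(n_vars)] for _ in range(pop)]
        samples.sort(key=score, reverse=True)
        if score(samples[0]) == len(clauses):  # all clauses satisfied
            return samples[0]
        elite = samples[:max(1, int(pop * elite_frac))]
        for i in range(n_vars):  # shift the distribution toward the elite samples
            p[i] = sum(a[i] for a in elite) / len(elite)
    return None

# Tiny example formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
sol = cross_entropy_sat(clauses, 3)
print(sol)
```

For such a tiny instance the first batch of random samples almost certainly contains a satisfying assignment; the iterative distribution update only matters on larger formulas, where it concentrates the sampling on promising regions of the search space.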





Download: info sheet on registration details

General conditions for the seminar

Participants are expected to work through their topic largely independently, including finding and researching the relevant literature. Pointers to the literature will of course be given, and supervision on demand is ensured.

Talk length and structure

The talk should be 25 minutes long and have the following structure:

  • Introduction with motivation (approx. 5 minutes of the talk)
  • Theory of the topic (approx. 10-15 minutes of the talk)
  • Practical application / case studies (remaining time, at least 5 minutes)
  • Summary (approx. 2 minutes of the talk)
  • The talk must be given in English

Requirements for the seminar certificate / grading

To obtain the certificate, you must

  • discuss an outline of the talk with your supervisor 4 weeks before the talk
  • discuss a first version of the talk with your supervisor 2 weeks before the talk (preferably PowerPoint or PDF)
  • give the talk (25 minutes + discussion) in English
  • hand in a written report in English (6-8 pages, in PostScript or PDF format)
  • be present at all seminar sessions

The seminar work is graded according to the following criteria:

  • adherence to the deadlines
  • quality of the literature research carried out
  • quality of the talk's content
  • quality of the presentation, including keeping to the allotted time
  • quality of the written report
  • degree of independence (discussions with the supervisor about the selection of topics to present, their suitability for presentation, etc. are of course unaffected by this; such discussions are in fact explicitly encouraged)