Research

What makes software good? When is software engineering good? And how do we make sure software is good? We see software engineering both as an academic discipline concerned with understanding interrelationships and as a practical challenge in industry and education. Software is "good" when quality attributes such as correctness, security, performance, testability, usability, and maintainability are satisfied. Software engineering is good if it can react to changes quickly while striking an adequate compromise between functionality, the various quality attributes, and cost. These qualities and adequate trade-offs always depend strongly on the context, and mastering that context is therefore always part of our work. There is no single hammer for all nails in software engineering. Even machine learning and Scrum are only two hammers among many!

We aim to understand the full range of software engineering: contexts, activities, and artifacts. Because software engineering is also a practical activity, we work closely with many industry partners. Our work concerns the context-specific quality of software and software engineering from the perspectives of the system engineer and the user: concepts, models, languages, architectures, components, patterns, tools, methods, and processes. Methodologically, our focus is on testing; in terms of quality attributes, it is on information security and functional safety. At bidt, the Bavarian Research Institute for Digital Transformation, we investigate the effects of digital transformation on society and the design of compatible technologies in interdisciplinary collaborations. At fortiss, the Research Institute of the Free State of Bavaria for Software-Intensive Systems, we do more technology-centered research on systems engineering, including model-based development and code excellence.


In the area of testing, we are particularly interested in the question of when a test case is "good" and how answers to this question can be translated into practical action. We are specifically concerned with testing automated driving systems (Hauer 2021) and autonomous drones; see this lecture video (slides) from June 2020 for recent results and ideas. We also work on good test cases for machine-learned systems; on the selection of regression tests; on fault localization (Golagha 2020) in code and HiL test benches in the face of observable misbehavior; on the question of how good a fuzzing run is; on how symbolic execution techniques scale through compositionality and coupling with fuzzers (Ognawala 2020); and on the derivation and design of defect models for integration testing, for continuous systems (Holling 2016), and for testing security properties (Büchler 2015).
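One common, if partial, way to make "good test case" measurable is mutation analysis: seed small artificial defects into a program and count how many of them a test suite detects. The sketch below illustrates the idea only; the function under test, the hand-written mutants, and the test inputs are hypothetical, and this is not a description of the chair's specific approach.

```python
# Illustrative mutation-score computation (assumed, simplified setup).
# Real mutation tools derive mutants automatically from the program's code.

def original(x, y):
    return x + y

# Hand-written "mutants": small deliberate deviations from the original.
mutants = [
    lambda x, y: x - y,      # operator replaced
    lambda x, y: x + y + 1,  # off-by-one introduced
    lambda x, y: x,          # second argument ignored
]

tests = [(0, 0), (2, 3), (-1, 1)]

def mutation_score(tests, mutants):
    # A test "kills" a mutant if it distinguishes it from the original.
    killed = sum(
        1 for m in mutants
        if any(m(x, y) != original(x, y) for x, y in tests)
    )
    return killed / len(mutants)

print(f"mutation score: {mutation_score(tests, mutants):.2f}")  # 1.00 here
```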

As a complement to dynamic test methods, runtime monitors are essential for modern software-intensive systems. Under the heading of causality and accountability, and building on long experience with technology and methodology for data usage control (see below), we are interested in general frameworks and implementations that detect undesired events at runtime, identify their causes, and afterwards assign responsibility to specific parts of the (socio-technical) system. This is relevant for both compliance and forensics. In addition to algorithms for runtime verification and causality analysis (Ibrahim 2021), we look into questions about the origin and adequate degree of abstraction of models describing causality, about the requirements on the degree of abstraction of logging, and about the design of systems that can ensure accountability. We are working on approaches for cyber-physical systems, such as diagnostic systems for drones, web applications, and microservices.
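To make the idea of runtime monitoring concrete, the following minimal sketch checks a single illustrative property - "a resource must not be used after access to it has been revoked" - over a stream of events and records enough context (time, component, resource) to support a later causality or responsibility analysis. The event format and component names are assumptions for illustration, not the chair's frameworks.

```python
# Minimal runtime monitor over a hypothetical event stream.
from dataclasses import dataclass

@dataclass
class Event:
    time: int
    component: str
    action: str      # "grant", "revoke", or "use"
    resource: str

def monitor(events):
    revoked = set()
    violations = []
    for e in events:
        if e.action == "revoke":
            revoked.add(e.resource)
        elif e.action == "grant":
            revoked.discard(e.resource)
        elif e.action == "use" and e.resource in revoked:
            # Keep context so responsibility can be assigned afterwards.
            violations.append((e.time, e.component, e.resource))
    return violations

log = [
    Event(1, "serviceA", "grant", "sensor1"),
    Event(2, "serviceB", "use", "sensor1"),
    Event(3, "operator", "revoke", "sensor1"),
    Event(4, "serviceB", "use", "sensor1"),   # violation
]
print(monitor(log))  # -> [(4, 'serviceB', 'sensor1')]
```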

In the area of information security, we are currently devoting technical attention to the question of when an intrusion detection system is good, and how this can be measured. We are also investigating how organizations can automatically harden their infrastructure given existing security policies - and, of course, in what sense this is good and how we can measure it. Past work - motivated by practical needs in the fields of accountability and data usage control - has looked into software-based software integrity protection and software obfuscation (Banescu 2017, Ahmadvand 2021) and how well these techniques work. While an attacker with sufficient motivation and resources will always be able to break such protection mechanisms, (1) even hardware-based techniques are not fully secure, (2) for cost reasons alone, it is not foreseeable that every device in the Internet of Things will be equipped with dedicated protection hardware, and (3) it is remarkable how widely these techniques are used in practice, for example in game protection. Machine learning for malware detection (Wüchner 2016, Salem 2021) and the question of where labeled data for such techniques can come from (Salem 2021) have also played a role in the past.
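One established, if incomplete, way to quantify how good an intrusion detection system is compares its alerts against labeled traffic using confusion-matrix metrics such as precision, recall, and false-positive rate. The sketch below uses made-up labels and alerts purely for illustration; it shows one baseline family of measures, not the measurement approach developed at the chair.

```python
# Confusion-matrix metrics for an IDS on labeled data (illustrative values).

def ids_metrics(labels, alerts):
    # labels/alerts: 1 = attack / alert raised, 0 = benign / no alert.
    tp = sum(1 for l, a in zip(labels, alerts) if l and a)
    fp = sum(1 for l, a in zip(labels, alerts) if not l and a)
    fn = sum(1 for l, a in zip(labels, alerts) if l and not a)
    tn = sum(1 for l, a in zip(labels, alerts) if not l and not a)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

labels = [1, 0, 0, 1, 0, 1, 0, 0]
alerts = [1, 0, 1, 1, 0, 0, 0, 0]
print(ids_metrics(labels, alerts))
# -> precision 0.67, recall 0.67, false_positive_rate 0.2 (rounded)
```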

In the past, we have looked closely into distributed data usage control, a generalization of access control to the future and to distributed systems: what happens to data once it has been released? Requirements such as "data must be deleted after thirty days," "data must not be deleted for five years," "the data owner must be informed when data is disclosed," "data must only be disclosed anonymously," and "images from my social network profile must not be stored or printed" play a role here. They are relevant to data protection, compliance with regulatory frameworks, business processes implemented in a distributed manner in the cloud, intellectual property management, and the protection of secrets. The problem encompasses a variety of fascinating theoretical, conceptual, methodological, and technical challenges. Some demos are online. Completed dissertations address the connection between information flow control and system-wide data usage control across different levels of abstraction of a system (Lovat 2015, Fromm 2020), data usage control in distributed systems (Kelbert 2016), data usage control for privacy-aware camera surveillance (Birnstill 2016), and the derivation of machine-understandable policies from human-understandable policies (Kumari 2015). Bier's interdisciplinary work (2017) examines the negative impact of data discovery systems on privacy. This is a description of the central ideas and results.
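As a small illustration of what checking such a requirement can look like, the sketch below evaluates the obligation "data must be deleted after thirty days" against stored metadata. The record format and the in-memory store are hypothetical; real data usage control infrastructures enforce such policies inside the systems that actually hold and process the data.

```python
# Checking a "delete after thirty days" obligation against assumed metadata.
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

records = [
    {"id": "r1", "received": datetime(2024, 1, 1), "deleted": False},
    {"id": "r2", "received": datetime(2024, 3, 1), "deleted": True},
]

def overdue(records, now):
    """Return ids of records that violate the deletion obligation at `now`."""
    return [r["id"] for r in records
            if not r["deleted"] and now - r["received"] > RETENTION]

print(overdue(records, datetime(2024, 4, 1)))  # -> ['r1']
```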


This is a list of dissertations at the chair since 2015.