

Prof. Philip Koopman - Carnegie Mellon University

Introduction to Critical Systems & Automotive Software Safety Issues 


Over the past two decades the automotive industry has dramatically increased its use of life-critical computer-based control systems. However, because no regulatory requirements mandate the use of software safety standards and good practices, the results have been uneven. This tutorial will discuss a case study of vehicles that did not conform to many accepted safety practices and how that eventually led to an adverse legal verdict and costs of well over a billion dollars. That case study motivates a discussion of critical system practices and safety requirement techniques that are applicable to both conventional vehicles and autonomous vehicles. Finally, a discussion of the role of attributing mishaps to driver error reveals that autonomous vehicles will not only change how people drive, but will also require a significant overhaul of the US automotive safety regulatory system. Tutorial modules are selected from material in a graduate-level course on embedded system safety taught at Carnegie Mellon University, and include:


  • A first-hand account of what really happened in the Toyota Unintended Acceleration cases (both legal & technical)

  • Critical system assurance principles applied to large automotive fleets

  • A safety envelope based approach to safety requirements

  • Historical perspective on blaming the driver and how that will affect future regulation in potentially surprising ways


Zhe Hou, Hadrien Bride & Jin Song Dong - Griffith University

Machine learning for dependable decision-making



Machine learning (ML) has been very successful in prediction, classification, regression, anomaly detection and other forms of data analytics.  ML is becoming an integral part of automated decision-making for critical systems.  However, most existing ML techniques are used as black boxes and do not provide a high level of trustworthiness.  In fact, there have been numerous cases where ML-based applications failed and caused tremendous damage.  To address the security and safety concerns of critical systems, advances in trust-related aspects of machine learning are important.  Explainable artificial intelligence (XAI) is one way to improve trustworthiness, and it has attracted much attention recently.  Machine learning approaches that can explain the rationale behind their predictions are more relatable and transparent.  Another way to improve trustworthiness is to develop ML techniques that produce auditable predictive models.  Verification of these models provides formal guarantees that they are correct, safe and secure with respect to the user's requirements.


In this tutorial we will cover recent developments in transparent and auditable machine learning techniques.  We will introduce our latest research combining advanced machine learning, high-performance computing and automated reasoning techniques.  We will also present the fruit of our work: Silas -- a state-of-the-art toolkit for dependable machine learning.

David Ward - Horiba Mira Ltd

Safety of the intended functionality (SOTIF; ISO/PAS 21448) in road vehicle automation



Soon after the original publication of the road vehicle functional safety standard ISO 26262 in 2011, it was noted that wider system safety issues were not in its scope, and that guidance was therefore needed on additional factors which could influence the safe operation of automated driving features.  The concept of SOTIF was originally conceived to address failures in driver assistance systems (ADAS, or Level 1 / Level 2 automation) associated with sensor performance limitations — for example, "false positive" triggering of automatic emergency braking caused by a vehicle radar acquiring an incorrect target.  The first iteration of this guidance was recently published as ISO/PAS 21448.


In this tutorial we will examine:

  • Background and context of automated driving features in road vehicles

  • Functional safety, SOTIF and the wider system safety context

  • Brief overview of ISO/PAS 21448

  • SOTIF “area” concept for managing complexity and unknowns of automated driving scenarios

  • Introduction to SOTIF approach to different “areas” of scenarios

  • Case studies

  • Conclusions and future outlook towards Level 3 and higher levels of automation
