Software Safety Verification  
last updated November 26, 1998
 
[This page is no longer being maintained. Please visit Ken Wong's safety resource page.]

Overview

The formalWARE project focuses on the use of formal methods to address specific challenges in the area of software safety verification.

Contents of this section include:

  • What is software safety verification?
  • What is formalWARE doing in this area?
  • formalWARE publications and presentations
  • Recommended readings
  • Recommended websites

What is software safety verification?  

For systems with software that implements safety-critical functionality, the key question that must be answered is:

How do you know if your software is safe?

The system safety engineering approach to ensuring safety is to identify the "bad things" that can occur (i.e., accidents) and to then focus on the ways the system may contribute to the "bad things" occurring (i.e., system hazards). The goal of system safety engineering is to build safety into the system by analyzing the identified system hazards and then eliminating or controlling them during system development.

An important step in the overall safety engineering process is the verification of the source code with respect to the hazards. As discussed in chapter 18 of Nancy Leveson's seminal textbook on software safety, Safeware, this step is necessary even after system hazards have been identified and controlled through design, to determine whether any mistakes were made in the safety engineering process.


What is formalWARE doing in this area?  

Requirements Safety Analysis

Much current safety research focuses on the development of safe system requirements. For example, the work being performed within formalWARE on requirements specification and validation could be applied to that purpose.

Dynamic Code Safety Analysis

formalWARE is also developing techniques that can be used in dynamic code analysis. For example, at the system level, the work being performed on requirements-based testing could be used to aid in verification of system safety requirements. At the software component level, unit-testing techniques are being developed that could be used in the verification of component safety properties.

Safety Verification Case for the Source Code

In general, there is a need for more effective methods for the safety verification of the source code. In particular, there is a need for methods that will deal with large software-intensive systems with modern architectures. Static code analysis techniques, such as software fault tree analysis, have mostly been applied to relatively small systems.

formalWARE is developing a practical, comprehensive process for constructing a safety verification case for the source code. This work is the focus of Ken Wong's M.Sc. thesis work.

The process takes as input:

  • a large amount of source code (e.g., several hundred thousand SLOCs) which serves as a very concrete representation of the implementation of the system;
  • a set of identified hazards expressed at a relatively high level of abstraction;

and produces as output:

  • a safety verification case that provides a rigorous, repeatable and recorded argument that the hazards cannot occur, given specific, explicitly stated assumptions about the hardware and software.

The construction of the safety verification case is confronted with two central challenges:

  • the hazard-related code typically forms a long, thin "slice" spread across a number of different software components;
  • the semantic gap between the source code and the abstract "system-level" concepts and language used to define the hazards.

In light of these challenges, the safety verification case will consist of three parts:

  1. step-wise refinement of the hazard into one or more source code level safety verification conditions (SVCs);
  2. step-wise extraction of models of one or more critical code paths;
  3. verification that each of the source code level SVCs (from step 1) is satisfied by the representation of the source code (from step 2).
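As a purely hypothetical illustration of steps 1 and 3 (the function, variable names and the 100 ft tolerance below are all invented, not taken from the project), a system-level hazard such as "the displayed altitude is misleading" might be refined into a source-code-level SVC that is checkable as an assertion at the point where the display value is computed:

```python
# Hypothetical sketch only: a hazard refined into a source-code-level
# safety verification condition (SVC), expressed as a checkable assertion.
# The function, variable names and the 100 ft tolerance are all invented.

MAX_ALTITUDE_ERROR_FT = 100.0  # assumed system-level tolerance

def display_altitude(sensor_altitude_ft: float, correction_ft: float) -> float:
    """Compute the altitude value sent to the display (toy example)."""
    displayed = sensor_altitude_ft + correction_ft
    # Source-code-level SVC: the displayed value must stay within the
    # assumed tolerance of the sensor reading; step 3 of the process
    # would verify this holds on every hazard-related code path.
    assert abs(displayed - sensor_altitude_ft) <= MAX_ALTITUDE_ERROR_FT, \
        "SVC violated: displayed altitude outside tolerance"
    return displayed
```

The point of the sketch is only the shape of the argument: the abstract hazard is restated as a predicate over concrete program variables, and the assumptions (here, the tolerance) are made explicit rather than left implicit in the hazard definition.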

If you are a safety engineer, you may be interested in Wong's experiences with the safety verification of large software-intensive systems with modern software architectures. He is also developing techniques for implementing each part of the safety verification case.

If you are a safety or formal methods researcher, you may be interested in his investigations into:

  • generating system level SVCs with a method that combines Fault Tree Analysis and rigorous mathematical reasoning;
  • constructing verifiable code assertions from the source code level SVCs;
  • formalizing source code level SVCs in "S", a formal specification notation based on typed, predicate logic.
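The underlying idea of combining Fault Tree Analysis with rigorous reasoning can be sketched in miniature as follows. This is not the method of the project's papers, only an illustration of the boolean core of FTA; the top event and all basic-event names are invented:

```python
# Hypothetical sketch of Fault Tree Analysis as boolean reasoning.
# The top event and basic-event names are invented for illustration.

from itertools import product

EVENTS = ["cmd_bit_flip", "guard_check_skipped",
          "stuck_relay", "watchdog_failed"]

def top_event(e: dict) -> bool:
    """Hazard (top event): uncommanded valve opening, modelled as an
    OR-gate over two AND-gates of basic events."""
    return ((e["cmd_bit_flip"] and e["guard_check_skipped"]) or
            (e["stuck_relay"] and e["watchdog_failed"]))

def minimal_cut_sets():
    """Enumerate the minimal combinations of basic events that raise
    the top event; each cut set yields a verification condition stating
    that at least one event in the set must be shown not to occur."""
    cuts = []
    # Visit assignments in order of how many events are active, so a
    # superset of an already-found cut set is never recorded.
    combos = sorted(product([False, True], repeat=len(EVENTS)), key=sum)
    for combo in combos:
        assignment = dict(zip(EVENTS, combo))
        active = {name for name, on in assignment.items() if on}
        if top_event(assignment) and not any(c <= active for c in cuts):
            cuts.append(active)
    return cuts
```

Each minimal cut set corresponds to one conjunction of failures sufficient for the hazard, so the negation of the top event, asserted under the stated assumptions, plays the role of a system-level SVC.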


formalWARE Publications and Presentations  

Papers

Safety Verification Conditions for Software-Intensive Systems, Ken Wong, M.Sc. thesis, Department of Computer Science, University of British Columbia, October 1998.

Refinement of Safety-Related Hazards into Verifiable Code Assertions, Ken Wong and Jeff Joyce, accepted for presentation at SAFECOMP'98, Heidelberg, Germany, October 5-7, 1998.

Generating Safety Verification Conditions Through Fault Tree Analysis and Rigorous Reasoning, Jeff Joyce and Ken Wong, accepted for presentation at the 16th International System Safety Conference, Seattle, Washington, September 14-19, 1998.

Looking at Code With Your Safety Goggles On, Ken Wong, presented at Ada-Europe'98, Uppsala, Sweden, June 8-12, 1998.

Ensuring the Inspectability, Repeatability and Maintainability of the Safety Verification of a Critical System, Ken Wong, Jeff Joyce and Jim Ronback, Technical Report 98-06, Department of Computer Science, UBC, May 1998.

Formal Representation of Safety Verification Conditions, Ken Wong and Jeff Joyce, accepted for presentation at the workshop on Making Object Oriented Methods More Rigorous, Imperial College, London, June 24, 1997.   

Presentations

Refinement of Safety-Related Hazards into Verifiable Code Assertions
Presented by Ken Wong at SAFECOMP'98 in Heidelberg, Germany, October 5-7, 1998.
 

Generating Safety Verification Conditions Through Fault Tree Analysis and Rigorous Reasoning
Presented by Ken Wong at the 16th International System Safety Conference, Seattle, Washington, USA, September 14-19, 1998.
 

Looking at Code with Your Safety Goggles On
Presented by Ken Wong at Ada-Europe'98, Uppsala, Sweden, June 8-12, 1998.
 

Derivation of Safety Verification Conditions from Hazard Definitions
Variations of this talk were presented by Ken Wong at:
 

    May 21, formalWARE project review
    CICSR building, University of British Columbia
     

    May 26, Center for Software Reliability (CSR),  
    City University, London, UK 
    http://www.csr.city.ac.uk 

    May 28, Praxis Critical Systems,  
    Bath, UK 
    http://www.praxis-cs.co.uk 

    June 3, The Institute of Applied Computer Science (IFAD),  
    Odense, Denmark 
    http://www.ifad.dk 

    June 12, Industrilogik,  
    Stockholm, Sweden 
    http://www.l4i.se

Safety Verification of Software-Intensive Systems
Presented by Ken Wong, at Raytheon Systems Canada Ltd, March 18, 1998.

Code Level Safety Analysis
Presented by Ken Wong at the formalWARE Open House, October 8, 1997.


Recommended readings  

General

Safeware: System Safety and Computers, Nancy G. Leveson, Addison-Wesley, 1995.

An outstanding overview of software safety, including detailed discussions of the safety process, safety management, hazard analysis, human factors and safety verification. The view is from the system safety engineering tradition, with a focus on the identification and management of system hazards. This is, perhaps, the "bible" of software safety from one of the leading experts in the field.

Evaluation of Safety-Critical Software, David L. Parnas, A. John van Schouwen and Shu Po, Communications of the ACM, 33( 6), pp. 636-648, June 1990.

Another authoritative work whose authors, contrary to Leveson and the system safety engineering approach, argue that safety should not be treated separately from the issues of trustworthiness or reliability. Topics include code reviews, software documentation, and software reliability estimates.

The Risks of Software, Bev Littlewood and Lorenzo Strigini, Scientific American, November 1992.

A discussion of the difficulties of achieving high levels of dependability for safety-critical software-intensive systems. The authors believe, unlike Leveson, that quantitative dependability measures, like reliability estimates in fault-tolerant systems, are possible and important for safety-critical systems.

Workshop on Developing Safe Software: Final Report, Dennis Lawrence, Fission Energy and Systems Safety Program, Lawrence Livermore National Laboratory, UCRL-ID-113438, August 1994.
      [Local Copy (gzipped pdf)] [Summary]

A workshop put together by the LLNL with a panel that included Nancy Leveson, Ricky Butler, Bev Littlewood and John Rushby. Though there was general agreement among the panel, the emphasis differed in the areas of system safety engineering (Leveson), formal methods (Rushby, Butler) and dependability (Littlewood).

Safety Analysis

Software Assessment: Reliability, Safety, Testability, Michael Friedman and Jeffrey Voas, John Wiley & Sons, 1995.

An excellent overview which integrates a wide range of software testing, safety and reliability topics.

Testing Existing Software For Safety-Related Applications, John A. Scott and J. Dennis Lawrence, Fission Energy and Systems Safety Program, Lawrence Livermore National Laboratory, UCRL-ID-117224, Revision 7.1, December 1994.
      [Local Copy (gzipped pdf)] [Summary]

A survey of standard static analysis and testing techniques, with an emphasis on their applicability to critical systems.

Static Safety Analysis

A Study on Hazard Analysis in High Integrity Software Standards and Guidelines, Laura M. Ippolito and Dolores Wallace, NISTIR 5589, National Institute of Standards and Technology, January 1995.

A survey of safety standards for software hazard analysis techniques. These include techniques that were designed for mechanical systems, such as Fault Tree Analysis (FTA), Hazard and Operability Study (HAZOP) and Failure Mode and Effects Analysis (FMEA).

Towards Integrated Safety Analysis and Design, Peter Fenelon, John A. McDermid, Mark Nicholson and David J. Pumfrey, ACM Computing Reviews, 2(1), pp. 21-32, 1994.
      [Local Copy (gzipped postscript)] [Remote Original (compressed postscript) ]

An approach to the development and assessment of software-intensive safety-critical systems. Includes references to work by the authors that adapts HAZOP, FMEA and FTA to software.

Safety Verification of Ada Programs Using Software Fault Trees, Nancy G. Leveson, Steven S. Cha, and Timothy J. Shimeall, IEEE Software, 8(7), pp. 48-59, July 1991.

The application of FTA to the source code in support of system safety verification. One of the few safety analysis techniques designed explicitly for the safety verification of the source code.

Dynamic Safety Analysis

Predicting How Badly "Good" Software Can Behave, Jeffrey Voas, Frank Charron, Gary McGraw, Keith Miller and Michael Friedman, IEEE Software, 1997.
      [Local Copy (gzipped postscript)] [Remote Original (postscript)]

The application of fault injection to software testing. This technique supplements standard testing techniques by focusing on hazards that may arise from external failures, e.g., human, hardware or environmental failures.
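The fault-injection idea described above can be sketched in a few lines. This is a toy illustration with invented names, not the authors' tool: corrupt an input, run the software, and observe whether a hazard-related safety predicate still holds downstream.

```python
# Hypothetical sketch of fault-injection testing (invented names; not
# the authors' tool): corrupt an input, then check whether a
# hazard-related safety predicate still holds downstream.

import random

def controller(reading: float) -> float:
    """Toy control law: scale the reading and clamp the command."""
    command = reading * 0.5
    return max(0.0, min(command, 10.0))  # enforce the safe envelope

def safety_predicate(command: float) -> bool:
    """Hazard-related property: the command stays in the safe envelope."""
    return 0.0 <= command <= 10.0

def inject_faults(trials: int = 1000, seed: int = 0) -> int:
    """Drive the controller with randomly corrupted sensor readings
    and count downstream violations of the safety predicate."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(trials):
        corrupted = rng.uniform(-1e6, 1e6)  # simulated external failure
        if not safety_predicate(controller(corrupted)):
            violations += 1
    return violations
```

Here the clamp makes the predicate hold for any corrupted reading; removing it would let the injected faults expose violations, which is exactly the kind of behaviour fault injection is meant to surface.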

Testing a Safety-Critical Application, John C. Knight, Aaron G. Cass, Antonio M. Fernandez, Kevin G. Wika, UVA CS Technical Report 94-08.

This paper investigates the possibility of exhaustive testing of certain system safety properties.

Safety Case

Safety Case Development: Current Practice, Future Prospects, S. P. Wilson, T. P. Kelly, J. A. McDermid, in Proceedings of 1st ENCRESS/5th CSR Workshop, September 1995.

A discussion of some of the practical issues involved in the development of a safety case.

A Methodology for Safety Case Development, Peter G. Bishop and Robin E. Bloomfield, in Safety-critical Systems Symposium, Birmingham, UK, February 1998.

This paper proposes a safety case structure that mirrors the construction of a rigorous argument.

Formal Methods in Safety-Critical Systems

Safety-Critical Systems, Formal Methods and Standards, Jonathan Bowen and Victoria Stavridou, IEE/BCS Software Engineering Journal, 8(4):189-209, July 1993.
      [Local Copy (gzipped postscript)] [Remote Original (compressed postscript)]

Though a bit dated, this is an excellent survey of the potential applications of formal methods in critical systems, their use in the development of existing industrial systems, as well as their mention in various safety standards.

Experience with Formal Methods in Critical Systems, Susan Gerhart, Dan Craigen and Ted Ralston, IEEE Software, January 1994.

An in-depth discussion and evaluation of the use of formal methods in the development of four different industrial critical systems.

Formal Methods and Their Role in the Certification of Critical Systems, John Rushby, SRI International, Technical Report CSL-95-1, 1995.
      [Local Copy (gzipped postscript)] [Summary]

An insightful examination of the appropriate use of formal methods in industry, and how they may be used in the context of the RTCA air standard, DO-178B.

High Integrity Ada: The SPARK Examiner Approach, John Barnes, Addison Wesley Longman Ltd, 1997.

A well-written account of the SPARK Ada subset and the SPARK toolset, including the SPARK Examiner, which supports data flow analysis and code verification.

Probabilistic Safety Assessment

Validation of Ultrahigh Dependability for Software-Based Systems, Bev Littlewood and Lorenzo Strigini, Communications of the ACM, 36(11), November 1993.

A discussion of the issues involved in attaining dependability measures for software, where "dependability" encompasses a number of properties, including reliability, maintainability and safety.

Applying Bayesian Belief Networks to Systems Dependability Assessment, M. Neil, B. Littlewood and N. Fenton, in Proceedings of Safety Critical Systems Club Symposium, Leeds, 6-8 February 1996. Published by Springer-Verlag.

The attainment of quantitative dependability measures through probabilistic techniques.

Industrial Safety Programs

A System Engineering Process For Software-Intensive Real-Time Information Systems, Bruce Elliott and Jim Ronback, in Proceedings of the 14th International System Safety Conference, Albuquerque, New Mexico, August 1996.

An overview of the system safety engineering process for the Canadian Automated Air Traffic System (CAATS), which is being built by Hughes Aircraft of Canada Ltd.

Reliability and Safety

Critical System Properties: Survey and Taxonomy, John Rushby, SRI Technical report, CSL-93-01 (Revised), February 1994.
      [Local Copy (gzipped postscript)] [Summary]

A survey of the main features of the safety engineering, dependability and security approaches to critical systems.

The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software, Ricky W. Butler and George B. Finelli, IEEE Transactions on Software Engineering, 19(1), January 1993.

Experimental evidence of the practical impossibility of using testing to establish high reliability levels for software.

An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming, Nancy Leveson and John Knight, IEEE Transactions on Software Engineering, Vol. SE-12, No. 1, pp. 96-109, January 1986.
AND
A Reply to the Criticisms of the Knight and Leveson Experiment, Nancy Leveson and John Knight, ACM Software Engineering Notes, January 1990.

Both of these papers provide experimental evidence for the limitations of using software redundancy to achieve software fault tolerance.

Recommended websites  

General

Safety-Critical Systems
WWW Virtual Library at Oxford

Aviation Safety
Safety links with a focus on aviation safety

Research Groups

Software Safety at the University of Washington
Nancy Leveson's software safety group, including their work on requirements safety analysis

Safety-Critical Software Group at the University of Virginia
Includes the work of John Knight on software safety testing

Centre for Software Reliability
Based at City University, under the directorship of Bev Littlewood, with a focus on quantifying software dependability; also at the University of Newcastle upon Tyne, under the directorship of Tom Anderson

High Integrity Systems Engineering Group at the University of York
Under the directorship of John McDermid, conducting research into all aspects of high integrity computer-based systems, including the development of software safety analysis techniques

NASA Langley Formal Methods Team Home Page
The application of formal methods to critical systems

SRI International's Computer Science Laboratory (CSL)
The Formal Methods and Dependable Systems Group, with program director John Rushby and the PVS theorem prover, and the Dependable System Architecture Group, led by Victoria Stavridou

Safety Agencies and Programs

NASA Office of Safety and Mission Assurance (OSMA) Software Program
Part of the NASA Software IV&V Facility

JPL Software Assurance Group
Includes the application of formal methods and software safety analysis to NASA projects, as well as the work of Robyn Lutz on requirements analysis

Computer Safety and Reliability at the Lawrence Livermore National Laboratory (LLNL)
Part of the Fission Energy and Systems Safety Program (FESSP) and home of Dennis Lawrence

High Integrity Software System Assurance (HISSA)
Software quality group at the National Institute of Standards and Technology (NIST)

System Safety Society
Professionals Dedicated to the Safety of Systems, Products and Services

Industry

Reliable Software Technologies (RST), Sterling, US
Includes the work on software fault-injection testing by Jeffrey Voas and their tool SafetyNet

Praxis Critical Systems, Bath, UK
A UK company specializing in the development of safety-critical systems and the SPARK approach to software development

Adelard, London, UK
An independent UK consulting company involved in the development of some MOD standards, and methodologies for developing safety cases

CSE International Ltd (CSE), UK
Consultants in high integrity and safety-critical computer-based systems

York Software Engineering Ltd (YSE), UK
Subsidiary of CSE

DLSF Systems Inc., Ottawa, Canada
Specializes in software system safety

ORA Canada, Ottawa, Canada
Application of formal methods to the development of critical systems. Headed by Dan Craigen

Safeware Engineering Corporation, Everett, US
Applied research and development for real-time, safety-critical systems. Headed by Nancy Leveson

Formal Methods Companies
A list of companies many of which are involved in the development of critical systems

Projects

SafeFM
The practical application of formal methods to safety-critical systems

Standards

UK MoD Safety Standards
Including 00-55, 00-56 and 00-58

RTCA Documents
Listing of air standards including DO-178B

US DOD System Safety
Including MIL-STD-882C

NASA Guidebooks and Standards
Includes NASA Software Safety Standard and Guidebook for Safety Critical Software

Center for Devices and Radiological Health (CDRH) Web Documents
Software guidance documents from the CDRH, a branch of the FDA

International Electrotechnical Commission
International standards and conformity assessment body for all fields of electrotechnology. IEC standards can also be ordered from Global Engineering Documents

IEEE SESC Safety Study Group
Examination of existing safety standards and the development of IEC 61508 for the Software Engineering Standards Committee (SESC)

Incidents

Ariane 5 Report
Flight 501 Failure: Report by the Inquiry Board. See also Put it in the contract: The lessons of Ariane, an argument for Design By Contract (DBC), as well as a rebuttal

Therac 25 Report
Reprinted with permission, IEEE Computer, Vol. 26, No. 7, July 1993, pp. 18-41

Computer-Related Incidents with Commercial Aircraft
An extensive list of incidents, along with discussion of many related issues and links to aviation safety sites

Challenger Report
The Rogers Commission report on the Challenger Space Shuttle explosion

Ada and Safety

Ada 95 and Trustworthy Systems
Evaluation by ORA Canada

SPARK Examiner
A program analysis and verification environment for the SPARK Ada subset, developed by Praxis Critical Systems
