GENERATING TEST CASES FROM FORMAL SPECIFICATIONS

Kalman C. Toth
Michael R. Donat
Jeffrey J. Joyce *
University of British Columbia
Department of Computer Science
#201 - 2366 Main Mall
Vancouver, British Columbia, V6T 1Z4
Canada
* also affiliated with Hughes Aircraft of Canada, Systems Division

Table of Contents

Abstract
INTRODUCTION
FORMAL METHODS
THE FORMATS PROCESS
THE "S" NOTATION
REMOVING AMBIGUITY
TEST CASE GENERATION
TEST CASE ELEMENTS
AUTOMATING TEST CASE GENERATION
SUMMARY
ACKNOWLEDGMENTS
BIOGRAPHY
REFERENCES



Abstract.

This paper describes the process elements and benefits of applying "Formal Methods" to the specification and testing of software requirements. It is argued that the overall effort required to generate test cases can be significantly reduced by applying these methods. Ambiguities and inconsistencies are identified and removed from the specifications through the use of formal methods. This paper provides a sketch of a theoretical foundation for generating test cases from formalized software requirements specifications, thereby reducing test development effort and providing developers and testers with a consistent interpretation of requirements. Preliminary work also supports the thesis that test case generation can be automated.

INTRODUCTION

In the fall of 1993, the authors began collaborating on the application of Formal Methods at Hughes Aircraft of Canada, Systems Division and the University of British Columbia with the support of the British Columbia Advanced Systems Institute. An opportunity was identified for the potential application of these methods to the development of Air Traffic Control systems (such as the Canadian Automated Air Traffic System - CAATS) and other similar large scale operational systems. Typically, such systems are widely distributed, have real-time properties, support transaction and database processing, and have stringent reliability, availability and safety requirements. Preliminary investigations identified several potential areas for applying formal methods including software requirements specification and test case generation. The overall process was described in (Toth and Joyce 1995) at NCOSE '95.

Software engineers have long understood the value of defining superior specifications in order to reduce design, coding and testing efforts and costs later in the development and maintenance life cycles. Traditionally, they have relied on structured specification techniques, extensive reviews and inspections, and prototyping to clarify customer requirements to the extent possible. The cost of developing a software-intensive system can also be reduced by applying formal methods to specify software requirements. Costs are reduced by uncovering ambiguities, inconsistencies, incompleteness, and other related problems earlier. Attempts to apply formal specification techniques have suffered in the past as their typical cryptic forms have often proven to be too challenging to learn and apply in general practice.

Most software requirements specification efforts in use today employ natural language (e.g., English). The inherent ambiguity of natural language may easily result in differences in the interpretation of a requirement which directly affect cooperating stakeholders on a development project. Software developers may follow one possible interpretation of specifications while the test engineers may follow an equally reasonable, but fundamentally different interpretation. These differences in the interpretation of a requirement may be discovered at a stage in the development when costly re-work of either the developed software or the test procedures is required.

While tools exist for manipulating formal specifications and reasoning about their properties, the application of these tools for the generation of test cases is not widespread. There is clearly a significant potential for further life cycle cost reductions with the use of software tools to assist in the generation of test case elements from formalized specifications. The derivation of test case elements from formal specifications of requirements (either manually or with machine-assistance) can be viewed as a generalization of another process, namely, generating test cases for a compiler from the BNF for some programming language. BNF is a very constrained form of specification notation in comparison to the richness of predicate logic, which serves as the basis of the approach to formal specification described in this paper.

FORMAL METHODS

The term "Formal Methods" encompasses a wide variety of techniques. Fundamentally, each of these techniques involves a notation with a well-defined syntax and semantics. Most of these notations are textual, but there are examples that combine graphics with text. Most formal methods notations are simpler than programming languages. This is partly because of a desire (in most cases) to give a precise, unambiguous definition of the semantics of the notation in a form that can be used as the basis for a system of reasoning or calculation.

Some notations, such as Z (Spivey 1992), are designed primarily for writing formal specifications; type checkers can be used to validate such specifications. Other notations, based on higher order logic, lend themselves to so-called "systems of reasoning". These include highly automated model-checking (Clarke et al. 1986) and interactive theorem-proving (Gordon et al. 1993) techniques and tools.

As reported in (Craigen et al. 1993), there have been a number of industrial applications of Formal Methods. For example, formal specification and analysis techniques were applied to the two shutdown systems of the Darlington Nuclear Generating Station. Formal methods were used to verify about 9,000 lines of code in a safety-critical railway signaling system used in the Paris metro system. Formal specification techniques were also applied to the Collision Avoidance System (CAS) of TCAS, an on-board aircraft system designed to reduce the risk of midair and near midair collisions. In some instances, formal methods have been used as a means of certifying safety-critical and security-critical systems to conform with such standards as DO-178B (RTCA 1992) and the "Orange Book" (DoD 1985). Primary motivations for applying formal methods have been to ensure consistency, completeness and correctness of specifications, designs and implementations.

THE FORMATS PROCESS

Figure 1. "FORMATS" Process.

As presented at NCOSE 95, the authors have formulated a process called "Formal Methods Based, Machine-Assisted Test Case Element Generation for Systems Integration Testing", or stated much more simply, "FORMATS". As illustrated in Figure 1, the process is divided into two significant parts. First, using the FUSS tool, software requirements specifications are written in "S". Then, a Test Case Generator (TCG) tool based on interactive theorem-proving technology is applied to generate test case elements from the S-code. Test case elements are the building blocks of the test cases and test procedures developed by the system test organization. The development of this process and the supporting tools is evolving together with the research efforts partly discussed in this paper.

THE "S" NOTATION

Predicate logic allows assertions to be connected by using logical operators (and, or, not) to construct formulae that are used to describe system constraints. Quantifiers allow formulae to specify constraints that apply to all individuals ("universal constraints") or to at least one ("existential constraints"). S (Joyce et al. 1994) is a simplified notation for predicate logic. This simplicity allows S to be read easily by the various stakeholders (users, developers, testers etc.) who use the specifications to perform their various tasks. S uses the typical logical connectives /\, \/, ~, ==> for conjunction (and), disjunction (or), negation and implication, respectively. The keywords "and", "or" and "not" may also be used to express conjunction, disjunction and negation. The keywords "forall" and "exists" are used to indicate universal and existential quantification. For readability, we add an if-then-else construct which is defined in terms of implication, conjunction, and negation:

if A then B else C  ≡  (A ==> B) /\ ((not A) ==> C)
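
The equivalence above can be checked exhaustively. The following Python sketch (our illustration, not part of the S tool set) enumerates all eight truth assignments and confirms that the if-then-else construct agrees with its expansion in terms of implication, conjunction, and negation:

```python
from itertools import product

def if_then_else(a, b, c):
    return b if a else c

def expansion(a, b, c):
    # (A ==> B) /\ ((not A) ==> C), with x ==> y encoded as (not x) or y
    return ((not a) or b) and (a or c)

# Exhaustive check over all 8 truth assignments
assert all(if_then_else(a, b, c) == expansion(a, b, c)
           for a, b, c in product([False, True], repeat=3))
print("if-then-else equivalence holds for all 8 assignments")
```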

User-defined types may be introduced by either a type declaration, type definition or type abbreviation. For example,

:person;
:sex := Male | Female;
:age == num;

introduces a new "uninterpreted" type called "person", defines a new type called "sex" populated by exactly two values, "Male" and "Female", and finally, introduces "age" as an abbreviation (or re-naming) of a more general purpose type, namely, the built-in type called "num", i.e., the natural numbers. User-defined functions and predicates may be introduced as shown by following examples:

plus_three (m:num) := m + 3;

is_an_even_number (m:num):=
exists (n:num). m = (2 * n);
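
For readers more familiar with programming notations, the two S definitions above have direct analogues in, say, Python. The unbounded "exists" of predicate logic is replaced here by a bounded search, which suffices for evenness over the naturals (this is our illustration, not a feature of S):

```python
def plus_three(m: int) -> int:
    # plus_three (m:num) := m + 3
    return m + 3

def is_an_even_number(m: int) -> bool:
    # exists (n:num). m = (2 * n) -- for naturals, n can be bounded by m
    return any(m == 2 * n for n in range(m + 1))

assert plus_three(4) == 7
assert is_an_even_number(6) and not is_an_even_number(7)
```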

Some additional aspects of the S notation are described in the context of examples presented in the next few sections of this paper. The reader is referred to (Joyce et al., 1994) for a more detailed description of S.

REMOVING AMBIGUITY

To illustrate the potential of identifying ambiguity in specifications, we offer the example of fault tolerance specifications for a multi-processor node in an operational system. The system design allows multiple processors to be redundantly configured. One processor is active while the others are in standby. One of the fundamental requirements is specified as follows:

"If there is no ACTIVE processor, a STANDBY processor, if one exists, shall become the ACTIVE processor within STANDBY_TO_ACTIVE milliseconds".

Although this statement may, upon first reading, seem unambiguous, there are (at least) two fundamentally different ways to interpret this particular requirement. Translation of the English text expressing this requirement into formal notation is an effective way to uncover differences in the interpretation of the requirement. Ambiguity is forced to the surface because the specifier is obliged to make explicit choices between different formulations. In this particular example, the ambiguity is exposed because the specifier must choose between different possible quantifications, that is, choosing between "forall" and "exists".

But first, consider formalizing portions of the above requirement. There are many ways to do this. One way to begin is to notice that STANDBY_TO_ACTIVE is a duration, i.e., a quantity of time. We will also need variables that refer to times when the processors are in certain modes. Time and duration can be expressed using numbers, so we can define the types "time" and "duration" and declare the constant STANDBY_TO_ACTIVE with the following S notation:

:time == num;
:duration == num;
STANDBY_TO_ACTIVE : duration;

Processors are either ACTIVE or in a STANDBY mode. In a later example, FAILED is also a possibility, so we shall define a new type called "mode" populated by exactly three possible values, "ACTIVE", "STANDBY" and "FAILED":

:mode := ACTIVE | STANDBY | FAILED;

We now need a way to say that a processor is ACTIVE at a particular time. We would like to do this by writing the boolean expression "p ACTIVE t", meaning processor p is ACTIVE at time t. This can be achieved by modelling a processor as a function which given a mode (i.e., "ACTIVE", "STANDBY", or "FAILED") and a time value, returns TRUE or FALSE. (Hence, a processor is modelled as a "predicate" since, in S, a predicate is just a function that returns TRUE or FALSE.) To support this formalization, we introduce a new type called "processor":

:processor == mode -> time -> bool;

This allows us to formalize an expression such as "processor p is in ACTIVE mode at time t" as:

p ACTIVE t

S, like most specification languages, allows us to quantify an expression. The quantified expression:

forall (p:processor, t:time). p ACTIVE t

means that all the processors are ACTIVE all the time. (The dot "." separates the quantified variables from the expression being quantified.) The expression:

exists (p:processor). forall (t:time). p ACTIVE t

means that at least one processor is always ACTIVE, which is different from meaning that at any time there is at least one processor that is ACTIVE:

forall (t:time). exists (p:processor). p ACTIVE t

Another example is the phrase "there is no ACTIVE processor at time t". This can be formalized as:

not (exists (p:processor). p ACTIVE t)
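
Over a finite set of processors and times, these quantified expressions can be evaluated directly. The sketch below (illustrative names and a three-step horizon of our choosing; only the ACTIVE/STANDBY distinction is modelled) mirrors the text's modelling of a processor as a curried predicate from mode and time to a boolean:

```python
TIMES = range(3)

def make_processor(active_times):
    # Returns a curried predicate p such that p(mode)(t) is a boolean,
    # mirroring the type processor == mode -> time -> bool.
    def p(mode):
        def at(t):
            return (t in active_times) if mode == "ACTIVE" else (t not in active_times)
        return at
    return p

p1 = make_processor({0, 1, 2})   # ACTIVE at every time
p2 = make_processor({2})         # ACTIVE only at time 2
procs = [p1, p2]

# forall (p:processor, t:time). p ACTIVE t   -- false here (p2 is idle at 0, 1)
assert not all(p("ACTIVE")(t) for p in procs for t in TIMES)
# exists p. forall t. p ACTIVE t             -- true: p1 is always active
assert any(all(p("ACTIVE")(t) for t in TIMES) for p in procs)
# forall t. exists p. p ACTIVE t             -- true: some processor is active at each t
assert all(any(p("ACTIVE")(t) for p in procs) for t in TIMES)
```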

Using these elements of S notation, a possible interpretation of the requirement statement,

"If there is no ACTIVE processor, a STANDBY processor, if one exists, shall become the ACTIVE processor within STANDBY_TO_ACTIVE milliseconds".

is captured in the following definition of a predicate called "Standby_Becomes_Active":

Standby_Becomes_Active :=
	forall (t:time).
		(not (exists (p:processor). p ACTIVE t))
		==>
		forall (p:processor).
			p STANDBY t
			==>
			exists (d:duration).
				p ACTIVE (t+d) /\
				d <= STANDBY_TO_ACTIVE;

Even with very limited experience in the use of predicate logic, a specifier will immediately recognize that this interpretation is not reasonable for a system with more than one standby processor. This interpretation implies that every standby processor (i.e., possibly more than one) would become ACTIVE, and this would conflict with another requirement (not shown herein) that at most one processor will ever become ACTIVE.

The correct interpretation is expressed by the following S expression:

Standby_Becomes_Active :=
	forall (t:time). 
		(not (exists (p:processor). p ACTIVE  t))
		==>
		(exists (p:processor). p STANDBY t) 
		==> 
		exists_unique (p:processor,d:duration).
			p STANDBY t /\ 
			p ACTIVE (t+d) /\
			d <= STANDBY_TO_ACTIVE;

This second interpretation is clearly preferable since it specifies that precisely one of the STANDBY processors shall become ACTIVE.
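
The difference between the two interpretations can be exhibited on a small finite trace. In the sketch below (our illustration; processor names, horizon, and the bound STANDBY_TO_ACTIVE = 3 are assumptions, and "exists_unique" is approximated as uniqueness of the standby processor that becomes active), only one of two standby processors activates. The first formalization rejects this perfectly reasonable trace, because it demands that every standby processor become active; the second accepts it:

```python
TIMES = range(6)
DURATIONS = range(1, 4)                       # d <= STANDBY_TO_ACTIVE = 3
ACTIVE = {"p1": set(range(1, 9))}             # p1 becomes ACTIVE at t=1
STANDBY = {"p1": {0}, "p2": set(range(9))}    # p2 stays in STANDBY forever
PROCS = ["p1", "p2"]

def active(p, t):  return t in ACTIVE.get(p, set())
def standby(p, t): return t in STANDBY.get(p, set())
def no_active(t):  return not any(active(p, t) for p in PROCS)

def version1():
    # forall t. (no active) ==> forall p. p STANDBY t ==> exists d. p ACTIVE (t+d)
    return all((not no_active(t)) or
               all((not standby(p, t)) or
                   any(active(p, t + d) for d in DURATIONS)
                   for p in PROCS)
               for t in TIMES)

def version2():
    # forall t. (no active) ==> (exists standby) ==> exactly one standby activates
    def becomes_active(p, t):
        return standby(p, t) and any(active(p, t + d) for d in DURATIONS)
    return all((not no_active(t)) or
               (not any(standby(p, t) for p in PROCS)) or
               sum(becomes_active(p, t) for p in PROCS) == 1
               for t in TIMES)

assert not version1()   # the unreasonable interpretation rejects the trace
assert version2()       # the intended interpretation accepts it
```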

TEST CASE GENERATION

Test case generation for large projects is typically a highly manual process.  Teams of test engineers wade through large volumes of software specifications, interpret them to the best of their abilities, and from this generate appropriate suites of tests to apply to the developed systems.  The process is tedious, error-prone, and frustrated by the ambiguities of natural language specifications.

As already demonstrated in the preceding section, difficulties due to requirements ambiguity can be mitigated by the use of formal specification techniques. In this section, we provide a brief overview of how the cost of test case generation can be further reduced by machine-assisted derivation of test case elements from formal specifications.

There are well known techniques for generating test cases from source code (or pseudo-code) for the purpose of "white box" testing. However, our approach is intended to support requirements-based "black box" testing by an independent organization (i.e., an organization independent of the developers). The standard approach for an independent test organization to take is to perform "black box" (functional) testing to verify the correct implementation of requirements, i.e., that given system stimuli result in given system responses. Using formal specification techniques, it is possible to codify such requirements in a form that is both readable and unambiguous for both the developer and the tester. However, while the specification focuses on what the system is to achieve, the requirement may not be expressed in its most "atomic" or reduced form.

A "test case element" (TCE) is an atomic stimulus-response relationship. A TCE is atomic in the sense that the specification of the stimulus (and qualifying conditions) identifies exactly one class of external inputs. A tell-tale sign of a non-atomic relationship is the existence of a disjunction (i.e., the word "or") in the specification of stimulus or its qualifying conditions. For readability, requirements specifications are often written with disjunctions. Deriving "black box" tests from a specification can be done by extracting TCEs from the specification and sequencing these TCEs into test cases.

A TCE is defined to be an implication, A ==> B. The antecedent, A, represents a minimal set of stimuli and conditions that will cause a response, represented by the consequent, B. The TCE is atomic, i.e., the antecedent, A, is a conjunction of predicates (all "ands") and does not contain a disjunction (an "or").

Consider the example of the following pair of requirement statements (using the symbols "A", "B" , ... and "F" in place of actual stimuli, conditions and responses):

1) If stimulus A occurs when condition B holds, then response C shall be given.
2) If response C is given when D or E hold, then response F is also given.

These requirements can be represented formally in S as:

1) A /\ B ==> C
2) C /\ (D \/ E) ==> F

A /\ B ==> C qualifies as a TCE for response C, but C /\ (D \/ E) ==> F is not a TCE for two reasons: the antecedent does not contain a stimulus, and D \/ E is a disjunction. Based on the above definition of a TCE, a test engineer would transform C /\ (D \/ E) ==> F into the following pair of TCEs:

A /\ B /\ D ==> F
A /\ B /\ E ==> F

This transformation of C /\ (D \/ E) ==> F into the above pair of TCEs involves substitution of C by A /\ B using the first of the two requirements statements shown above. Although test engineers do not necessarily use formal rules of logical deduction, the substitution of C by A /\ B in this example is an instance of a rule of logical deduction called "pre-condition strengthening." This transformation also involves application of rules of logical deduction to break the intermediate result,

(A /\ B) /\ (D \/ E) ==> F

into the two TCEs shown above. Transformations such as these are routine and intuitive to a test engineer. One objective of our research is to capture such transformations in an algorithmic form.
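
As a first step toward such an algorithmic form, the two manual steps above can be sketched in a few lines of Python (the frozenset-of-literals encoding is our illustration, not the representation used by the FORMATS tools): pre-condition strengthening substitutes C by A /\ B, and the disjunctive antecedent is then split into one TCE per disjunct.

```python
# A conjunction is a frozenset of literals; a disjunctive antecedent is a
# list of such conjunctions (i.e., DNF).
req1 = (frozenset({"A", "B"}), "C")                      # A /\ B ==> C
req2 = ([frozenset({"C", "D"}), frozenset({"C", "E"})], "F")
# req2 is C /\ (D \/ E) ==> F, already distributed: (C /\ D) \/ (C /\ E) ==> F

def strengthen(disjuncts, defn):
    # Pre-condition strengthening: replace the intermediate response
    # (here "C") by the stimuli/conditions that produce it.
    ante, resp = defn
    return [(conj - {resp}) | ante if resp in conj else conj
            for conj in disjuncts]

def split(disjuncts, response):
    # (X \/ Y) ==> Z  becomes  (X ==> Z) and (Y ==> Z)
    return [(sorted(conj), response) for conj in disjuncts]

tces = split(strengthen(req2[0], req1), req2[1])
print(tces)   # [(['A', 'B', 'D'], 'F'), (['A', 'B', 'E'], 'F')]
```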

In the case of requirements-based system testing, a test engineer must use both analysis ("taking apart") and synthesis ("putting together") to generate test case elements. Consider, for example, the following set of symbolically represented requirements:

1) If stimulus A occurs when either condition B or C is true, then response D shall be produced.
2) Response E shall be produced whenever response D is produced and condition G is true.
3) Produce response D if stimulus H occurs.
4) Produce response I when stimulus A occurs unless response D is produced.

We begin by translation of these four requirements into "S" notation:

1) A /\ (B \/ C) ==> D
2) D /\ G ==> E
3) H ==> D
4) A /\ (not D) ==> I

Using the following logical equivalences,

x /\ (y \/ z)  ≡  (x /\ y) \/ (x /\ z)
(x \/ y) ==> z  ≡  (x ==> z) /\ (y ==> z)

Requirement 1) can be transformed into a pair of implications,

TCE 1) A /\ B ==> D
TCE 2) A /\ C ==> D

which both specify atomic stimulus-response relationships. Requirement 2) requires synthesis to derive an atomic stimulus-response relationship. When represented as a logical implication, the antecedent (i.e., the "if" part of the implication) must contain only external stimuli and qualifying conditions. In the case of Requirement 2), synthesis must be used to replace the reference to Response D by an appropriate stimulus based on a stimulus-response relationship specified by another requirement. In this case, we could replace "D" by either "A /\ (B \/ C)" using Requirement 1) or alternatively by "H" using Requirement 3). Choosing the latter, we obtain by synthesis the following test case element:

TCE 3) H /\ G ==> E

Requirement 3) is already in "reduced form":

TCE 4) H ==> D

Finally, in the case of Requirement 4), we must employ a "closed world" assumption that Requirements 1) to 4) collectively specify all of the stimuli and qualifying conditions for generating response D. With this assumption, "not D" of Requirement 4) can be replaced by "not ((A /\ (B \/ C)) \/ H)" using Requirements 1) and 3) to yield,

A /\ (not ((A /\ (B \/ C)) \/ H)) ==> I

as an intermediate result. Then, the reduction into test case elements can be completed by analysis using the above mentioned logical equivalences in combination with other logical equivalences such as,

not (x \/ y)  ≡  (not x) /\ (not y)

to yield the following (draft) test case elements:

TCE 5) A /\ (not A) /\ (not H) ==> I
TCE 6) A /\ (not B) /\ (not C) /\ (not H) ==> I

The antecedent of TCE 5) is a logical contradiction, i.e., "A and (not A)", and so it is filtered out.

Converting these results of logical transformation back into a tabular format, the set of test case elements that we obtain for Requirements 1) to 4) is given in Table 1.

Stimuli   Conditions            Expected Responses   Source
A         B                     D                    1
A         C                     D                    1
H         G                     E                    2, 3
H         -                     D                    3
A         not B, not C, not H   I                    1, 3, 4

Table 1. TCEs for Requirements 1) to 4).

TEST CASE ELEMENTS

To further illustrate the generation of test case elements, consider the following requirement.  A system operator is provided the capability to manually fail a processor, that is, intentionally put it off-line.  If the processor is ACTIVE, then confirmation from the operator is required.  If no other processors are available in STANDBY (all others are either failed or out of service), the System Operator must authorize the operation by supplying a password.  This requirement is specified in S as follows:

ReqManualFail :=
	forall (p:processor, t:time, d1:duration).
		Event (ManualFail p) t /\
		(if p ACTIVE t then
			(if (not (exists (q:processor).q STANDBY t)) then
				Event Password (t + d1)
			else
				Event Confirm (t + d1))
		else
			(d1 = 0))
		==>
			exists (d2:duration).p FAILED (t + d1 + d2);

Test case elements for this requirement are given in Table 2.  Note that these TCEs contain quantifiers.  A discussion of the complications due to quantifiers is beyond the scope of this paper.

Quantifiers: forall (p:processor, t:time)
Stimuli:     Event (ManualFail p) t
Conditions:  not (p ACTIVE t)
Response:    exists (d2:duration). p FAILED (t + d2)

Quantifiers: forall (p,q:processor, t:time, d1:duration)
Stimuli:     Event (ManualFail p) t /\ Event Confirm (t + d1)
Conditions:  p ACTIVE t /\ q STANDBY t
Response:    exists (d2:duration). p FAILED (t + d1 + d2)

Quantifiers: forall (p:processor, t:time, d1:duration)
Stimuli:     Event (ManualFail p) t /\ Event Password (t + d1)
Conditions:  p ACTIVE t /\ (not (exists (q:processor). q STANDBY t))
Response:    exists (d2:duration). p FAILED (t + d1 + d2)

Table 2. Test Case Elements for the ReqManualFail requirement.

AUTOMATING TEST CASE GENERATION

Several test generation techniques make use of by-products of the development process as input to their test case generation phase.  Murphy, Townsend, and Wong (Murphy et al. 1994) use Booch (Booch 1993) scenarios expressed as object diagrams as the inspiration for a test script.  The script is populated by examining the implementation using techniques that extend the PGMGEN work of Hoffman (Hoffman 1989).  Poston's technique (Poston 1994) is based on artifacts from Rumbaugh's OMT (Rumbaugh 1991).  This technique also populates the test cases by examining the implementation.

Richardson's TAOS system (Richardson 1994) uses a formal temporal logic specification to provide the test script. Test cases are generated by populating the test scripts with random values under user guidance. The technique described herein does not consider a sequence of inputs. Each test case element represents a single stimulus and condition along with the expected response. TCEs are generated from the formal specification and are populated and sequenced manually.

Generating TCEs from requirements is primarily a matter of manipulating a logical expression into an implication that contains the response as the consequent. This is done by lifting the implication for a response to the surface of the requirement using the following rewrite rules:

A \/ (B /\ C)  ≡  (A \/ B) /\ (A \/ C)
A \/ (B ==> C)  ≡  ((not A) /\ B) ==> C

The antecedent of the resulting implication is then placed in disjunctive normal form. TCEs are then generated using the rewrite rule:

A \/ B ==> C  ≡  (A ==> C) /\ (B ==> C)

These are the same rewrites employed to generate the TCEs for ReqManualFail. The simplicity of these rewrite applications strongly suggests that TCEs can be generated automatically.
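
The generation step itself can be prototyped compactly. The rough sketch below (our tuple encoding, not the FORMATS tools' representation; negations are assumed to already sit on literals) puts a requirement's antecedent into disjunctive normal form by distributing /\ over \/, splits the disjunction into one implication per disjunct, and filters contradictory antecedents:

```python
# Formulas: a literal is a string; ("not", x), ("and", x, y), ("or", x, y).

def dnf(f):
    """Return a disjunction of conjunctions, as a list of frozensets of literals."""
    if isinstance(f, str):
        return [frozenset({f})]
    op = f[0]
    if op == "not":                    # assumption: negations apply to literals only
        return [frozenset({("not", f[1])})]
    left, right = dnf(f[1]), dnf(f[2])
    if op == "or":
        return left + right
    if op == "and":                    # distribute /\ over \/
        return [a | b for a in left for b in right]

def tces(antecedent, response):
    out = []
    for conj in dnf(antecedent):
        # drop antecedents containing both x and (not x): a logical contradiction
        if any(("not", lit) in conj for lit in conj if isinstance(lit, str)):
            continue
        out.append((sorted(conj, key=str), response))
    return out

# Requirement 1): A /\ (B \/ C) ==> D  yields  TCE 1) and TCE 2)
print(tces(("and", "A", ("or", "B", "C")), "D"))
```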

We currently have a prototype capable of generating "unquantified" atomic test case elements such as those presented in Table 1. This prototype is being extended to include quantification.

Dick and Faivre (Dick et al. 1993) generate test cases from formal specifications by reducing the specification to disjunctive normal form (DNF) by logical manipulations. In contrast, the technique presented here expresses test cases as implications. This provides an advantage when dealing with certain forms of requirements with stimuli in the post-state. By assuming a "closed world" we can produce test cases that do not include response terms in the antecedent.

(Ghezzi et al. 1991) describes the generation of test cases for a compiler from the BNF specification of the language it accepts. This approach is similar to the one in this paper in certain respects since both generate test cases from formal notations. However, the approaches are actually quite different. (Ghezzi et al. 1991) describes the syntactic form of valid input while the use of predicate logic in this paper represents a logical relationship between inputs and outputs.

Further research will consider automating the instantiation and sequencing of TCEs. However, as noted in (Dick et al. 1993), the manual review of TCEs can assist in validating the requirements specification well before test cases are applied during development. Another important aspect of this research concerns the types of test coverage to be considered when generating TCEs. For large specifications, a complete set of TCEs would be prohibitively large; coverage schemes must be employed to keep the number of TCEs to manageable levels.

SUMMARY

This paper has provided further details on the theoretic foundations, techniques, tools and process for applying formal methods to software requirements specification and test case generation.  The work to date has confirmed that practical notations and techniques can be applied to improve the quality of requirements specifications.  Strong evidence has also been found to support the thesis that test case generation can be implemented effectively.  A number of challenges for research remain to be addressed in the area of managing test coverage.

ACKNOWLEDGMENTS

The authors gratefully acknowledge funding from the BC Advanced Systems Institute towards support of this research.

BIOGRAPHY

Kal Toth is an Adjunct Professor in the Department of Computer Science at the University of British Columbia and a specialist in the systems and software engineering field.  He was the Director of Quality at Hughes Aircraft of Canada Systems Division supporting the development of Air Traffic Control Systems and has consulted in the areas of software engineering, information security, and distributed information systems throughout his career.  Dr. Toth holds a Ph.D. from Carleton University in Systems Engineering.
Michael R. Donat is currently a Ph.D. student in the Department of Computer Science at the University of British Columbia.  His research interests include the formal relationships between specifications, programs, and test cases.  His current work focuses on the automation of test case generation processes from formal specifications.
Jeffrey J. Joyce is a Research Scientist at Hughes Aircraft of Canada, Systems Division.  He is also an Adjunct Professor at the University of British Columbia and a Fellow of the B.C. Advanced Systems Institute. Dr. Joyce obtained his doctorate at the University of Cambridge (U.K.) where he studied the use of Formal Methods to specify and verify digital hardware.  His current research focuses on the use of Formal Methods in the industrial development of large, complex systems.

REFERENCES

G. Booch, Object-oriented Analysis and Design with Applications, Benjamin-Cummings, 1993.

E.M. Clarke and E.A. Emerson and A.P. Sistla, "Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic", ACM Transactions on Programming Languages and Systems, Vol. 8, No. 2, April 1986, pp. 244-263.

D. Craigen, S. Gerhart and T. Ralston, "An International Survey of Industrial Applications of Formal Methods" (2 Volumes), Technical Report #NRL/FR/5546-93-9581, Naval Research Laboratory, Washington, D.C., 1993.

Jeremy Dick, Alain Faivre, "Automating the Generation and Sequencing of Test Cases from Model-Based Specifications". In J.C.P. Woodcock, P.G. Larsen, editors, Formal Methods Europe '93, Lecture Notes in Computer Science 670. 1993.

C. Ghezzi, M. Jazayeri, D. Mandrioli, Fundamentals of Software Engineering, Prentice Hall, 1991.

M. Gordon and T. Melham, Introduction to HOL, Cambridge University Press, 1993.

D.M. Hoffman, "A Case Study in Module Testing", Proceedings of the Conference on Software Maintenance, IEEE Computer Society, 1989, pp. 100-105.

J. Joyce, N. Day and M. Donat, "S: A Machine Readable Specification Notation based on Higher Order Logic", 1994 International Meeting on Higher Order Logic Theorem Proving and its Applications, Malta, September 1994.

G.C. Murphy, P. Townsend, P.S. Wong, "Experiences with Cluster and Class Testing", Communications of the ACM, Vol. 37 No. 9, September 1994.

R.M. Poston, "Automated Testing from Object Models", Communications of the ACM, Vol. 37 No. 9, September 1994.

D.J. Richardson, "TAOS: Testing with Analysis and Oracle Support", ISSTA '94, International Symposium on Software Testing and Analysis, ACM Press, New York, 1994, pp. 138-153.

J. Rumbaugh, Object-oriented Modeling and Design, Prentice Hall, 1991.

J.M. Spivey, The Z Notation: A Reference Manual, 2nd edition, Prentice-Hall, 1992.

K. Toth and J. Joyce, "Industrialization of Formal Methods through Process Definition", 5th Annual Symposium of the National Council on Systems Engineering (NCOSE), July 1995.

RTCA/Eurocae, Software Considerations in Airborne Systems and Equipment Certification, DO-178B, December 1992.

Department of Defense, Department of Defense Trusted Computer System Evaluation Criteria, DoD 5200.28-STD, December 1985.