“Requirements Validation” – Nancy Day
This research focuses on techniques for
systematically analyzing specifications, such as simulation,
prototyping, checking for completeness and consistency, and model checking.
Model checking allows us to query the specification to see if it has certain
desired properties.
To test these ideas, Day has built a framework
in which different notations can be used to describe different parts of
a system and then be integrated into an overall model of the system's behavior
for analysis. She has found that using an explicit formal definition of
the meaning of the notations makes it possible to do this integration.
This formal definition can then be used to drive the analysis.
This framework has been used to run completeness
and consistency checks on the Separation
Minima for the North Atlantic Region given in a tabular specification
notation. A second example in progress is the
analysis
of an aeronautical telecommunications network specified in a state
transition notation.
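The flavor of such table checks can be sketched as follows. This is a hypothetical two-condition table, not the actual Separation Minima specification: completeness asks whether every combination of condition values is covered by some row, and consistency asks whether any combination is covered by rows prescribing different results.

```python
from itertools import product

# Illustrative tabular specification: rows map condition values to a result.
# (Condition and result names are invented for this sketch.)
CONDITIONS = ["same_track", "same_level"]  # each condition is True/False
TABLE = [
    ({"same_track": True,  "same_level": True},  "apply_longitudinal_minimum"),
    ({"same_track": True,  "same_level": False}, "apply_vertical_minimum"),
    ({"same_track": False, "same_level": True},  "apply_lateral_minimum"),
]

def matches(row_conds, case):
    """Does this table row cover the given combination of condition values?"""
    return all(case[c] == v for c, v in row_conds.items())

def check(table, conditions):
    """Return the uncovered cases (incompleteness) and the cases covered
    by rows with conflicting results (inconsistency)."""
    missing, conflicts = [], []
    for values in product([True, False], repeat=len(conditions)):
        case = dict(zip(conditions, values))
        results = {res for conds, res in table if matches(conds, case)}
        if not results:
            missing.append(case)
        elif len(results) > 1:
            conflicts.append((case, results))
    return missing, conflicts

missing, conflicts = check(TABLE, CONDITIONS)
print("incomplete cases:", missing)     # the (False, False) case is uncovered
print("inconsistent cases:", conflicts)
```

Here the check reports that the table says nothing about aircraft on different tracks at different levels, which is exactly the kind of gap a completeness check surfaces before review.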
“Requirements Specification” – Kendra Cooper
The written specification of requirements
is of keen interest because it is used as a primary form of communication
between stakeholders in the system development process as well as often
serving as the basis of a contractual agreement between the customer and
the developer. For this communication
to be effective, the requirements document must be readable, precise,
and expressed at an appropriate level of abstraction.
This research addresses the challenge of
achieving the precision and rigour of conventional formal specification
notations without sacrificing the readability of less formal approaches
to requirements specification. This research focuses specifically on
the formal specification of requirements for data-oriented systems. Its
objective is to define a core specification language (SPECL)
that can be used to write highly readable, machine-checkable references
to a logical model of data, expressed at the same level of abstraction used
by domain experts in less formal approaches.
The formalized portions of this document
may also be suitable for use as input to other processes such as the automated
test case generation process under development in the FormalWare project
by Michael Donat (UBC Computer Science).
“Requirements-Based Testing” – Michael Donat
This research focuses on the automation
of test generation for system-level, requirements-based testing.
This research applies formal logic directly to an industrial problem.
This test generation process plays an essential role in the development
of complex systems by demonstrating that every requirement has been implemented.
This process is currently a predominantly manual task. Although some
tools exist to support this task, the support is generally limited to editing
and searching documentation. This research has resulted in the automation
of a substantial portion of the test generation process. The
formal logic underpinning this research provides 1) assurance that the
generated tests are logical consequences of the input specification, 2)
specific definitions for various coverage criteria, 3) mechanized traceability
from requirements to tests, and 4) minimum impact on test suites due to
requirements specification changes. In addition to providing a level
of automation, this research establishes a basis for defining formal industry
standards for system-level test coverage.
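As a rough illustration of the idea (not Donat's actual algorithm or notation), a requirement of the form "if P then R" can be expanded into one test frame per disjunct of P's disjunctive normal form, so that each generated test is by construction a logical consequence of the specification. The condition names below are invented for the sketch.

```python
from itertools import product

# A tiny boolean-expression form: ("and", e1, e2), ("or", e1, e2), or an
# atomic condition given as a string.
def dnf(expr):
    """Expand an expression into disjunctive normal form: a list of
    frames, each frame being a list of atomic conditions to satisfy."""
    if isinstance(expr, str):
        return [[expr]]
    op, left, right = expr
    if op == "or":
        return dnf(left) + dnf(right)
    if op == "and":
        return [l + r for l, r in product(dnf(left), dnf(right))]
    raise ValueError(f"unknown operator: {op}")

# A hypothetical requirement: "if (armed and low altitude) or manual
# override, then deploy the landing gear."
precondition = ("or",
                ("and", "mode=armed", "altitude<500ft"),
                "manual_override")
response = "deploy_landing_gear"

# One test frame per disjunct: the conditions to set up, plus the
# expected response to observe.
frames = [(conds, response) for conds in dnf(precondition)]
for conds, resp in frames:
    print("GIVEN", " AND ".join(conds), "EXPECT", resp)
```

Because each frame is traceable to a specific disjunct of a specific requirement, this also gives a concrete definition for a coverage criterion ("one test per disjunct") and localizes the impact of a requirement change to the frames derived from it.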
“Code Level Safety Verification” – Ken Wong
How do you know that your software is
safe? Increasingly, software is being used to control systems whose
operation can potentially lead to property damage, injury, or even the loss
of life. Determining whether the software can contribute to an accident is complicated
by the fact that the hazard-related software may not be conveniently isolated
to a specific software component or subsystem. We refer to this as the
"long thin slice problem". As part of our research, a static code
analysis method was developed to capture the long thin slice and reason
about its impact on safety. Though not integral to the method, expressing
the slice in a simple machine-readable formal notation was found to be
very effective. The choice of notation will allow for the future use of
tools such as a type checker or theorem prover to aid in the safety analysis.
Interaction with industry has done much to shape the direction of the research
by providing an industrial and system safety context for the static code
analysis method, as well as an appreciation of the long thin slice problem.
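The intuition behind the long thin slice can be sketched with a backward slice over a toy dependence graph. The statement names and dependences below are invented; the method described here operates on real code rather than a hand-built graph.

```python
# Each statement maps to the statements it directly depends on
# (data or control dependence). Names are illustrative only.
DEPENDS = {
    "check_pressure": [],
    "read_sensor":    [],
    "filter_signal":  ["read_sensor"],
    "compute_dose":   ["filter_signal", "check_pressure"],
    "log_event":      ["read_sensor"],
    "actuate_valve":  ["compute_dose"],  # the hazard-related output
}

def backward_slice(graph, criterion):
    """Collect every statement that can transitively influence the
    slicing criterion -- the 'long thin slice' through the code."""
    seen, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

hazard_slice = backward_slice(DEPENDS, "actuate_valve")
print(sorted(hazard_slice))
# log_event falls outside the slice; everything else does not, even
# though those statements may be scattered across many components.
```

The point of the sketch is that the statements reaching the hazard-related output form a thin thread through otherwise unrelated components, which is why expressing the slice explicitly, in a machine-readable notation, pays off for later type checking or theorem proving.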