formalWARE seminar
last updated November 20, 1997 
 

We are pleased to announce a two-part seminar, open to anyone who is interested, on Probability and Statistics and on the Probabilistic Analysis of Software Testing.



Date and Place:  December 3rd, 1997, CICSR/CS Boardroom, UBC.

Seminar 1: Probability and Statistics -- 2:30 to 3:30 PM
Refreshments:  3:30 to 4:00 PM
Seminar 2: Probabilistic Analysis of Software Testing -- 4:00 to 5:00 PM

Please RSVP by Friday, November 28th:  Contact Christine Jensen, cjensen@cs.ubc.ca or 822-0698

Speaker:  Lee White, Visiting Professor at the University of Victoria and formalWARE member. 

Lee White is the Jennings Chair Professor of Computer Engineering and Science at Case Western Reserve University in Cleveland, Ohio, but is currently a Visiting Professor in the Department of Computer Science at the University of Victoria for the 1997-98 academic year.  He received his BSEE from the University of Cincinnati, and his MSc and PhD in Electrical and Computer Engineering from the University of Michigan.  He has served as chair of the computing departments at Ohio State University, the University of Alberta, and CWRU.  His research interests are in software engineering in general and software testing in particular, together with algorithm development and analysis; he has published over 50 papers in these areas.  He has served as a consultant to a number of industrial companies, including IBM, GE, and Rockwell International.

These seminars were motivated by work done by Prof. White in response to the problem statement that appears at the bottom of this page.
 
 



An Intuitive Overview of Probability and Statistics, or What You Always Wanted to Know About Statistics, But Formulas Got in the Way 

Lee J. White 

Department of Computer Science 
University of Victoria 
 
Abstract: 

This talk will provide an intuitive tutorial on the principles and concepts that form the basis of probability and statistics, especially as needed for software testing.  We will characterize the difference between a population and samples of that population in terms of appropriate statistics (or measures).  Several example probability distributions will be related to the testing problem.  Examples will be given to show why the Normal and Poisson distributions can be used to approximate probability distributions that arise in testing and other practical situations.  The role of adding random variables in this process will be explained.  In conclusion, it will be emphasized that probability and statistics are just tools and, as such, do not dictate the correct answer to any practical problem.  Many practical problems admit more than one model, and probability and statistics will give different answers depending upon the model selected.
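
As a small illustration of the kind of approximation the abstract mentions (this sketch is not part of the talk; the values of n and p below are assumptions chosen only for illustration), the Poisson distribution with mean n*p closely tracks the exact Binomial probabilities when n is large and p is small, which is exactly the regime of rare failures in testing:

    # Compare exact Binomial probabilities with the Poisson approximation
    # for n test cases and a small per-case failure probability p.
    import math

    def binomial_pmf(k, n, p):
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam**k / math.factorial(k)

    n, p = 1000, 0.002          # illustrative values only
    lam = n * p                 # Poisson mean = expected number of failures
    for k in range(5):
        print(k, round(binomial_pmf(k, n, p), 4), round(poisson_pmf(k, lam), 4))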

Please note that for those whose probability and statistics insights are a bit rusty, this talk will provide sufficient background to understand the next talk "Probabilistic Analysis of Software Testing for a Rare Condition". 

Handout and Step-by-Step Analysis 
 



Probabilistic Analysis of Software Testing for a Rare Condition 
  
Lee J. White 

Department of Computer Science 
University of Victoria 
 
Abstract: 

An industrial problem is discussed and analyzed:  given a rare condition "C", we are interested in statistically modeling the rate of "false positives", i.e., the rate at which the system incorrectly reports that condition C has occurred.  (Of course, we are also interested in the "misclassification" rate, i.e., the rate at which the system fails to detect condition C when it has in fact occurred.)

The talk will address the following issues: 
 

  • How many tests are required to estimate a probability parameter of 0.01 to within a specified interval with high confidence?  Of 0.001?  Of 0.0001?  (A rough sample-size sketch follows this list.)
  • Two models will be presented for the solution of this problem, illustrating the fact that more than one solution approach may make sense.
  • In the first model, it turns out that the analysis of the false positive rate is essentially the same as that for the misclassification rate.
  • How to quantify the level of confidence that we have in the test results?
  • What assumptions are needed to make the tests representative of normal operations of the system?
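
For the first question in the list above, one common back-of-the-envelope calculation (only a sketch of one possible model, not necessarily the one the talk will use) applies the normal approximation n >= z^2 * p * (1 - p) / E^2 for estimating a proportion p to within a margin E with roughly 95% confidence (z = 1.96).  The margins chosen below are assumptions for illustration:

    # Rough sample-size estimate for pinning down a small probability p
    # to within +/- E with ~95% confidence (normal approximation).
    import math

    def sample_size(p, E, z=1.96):
        return math.ceil(z * z * p * (1 - p) / (E * E))

    for p in (0.01, 0.001, 0.0001):
        E = p / 2               # e.g. estimate p to within half its value
        print(p, sample_size(p, E))

The output (about 1,522, 15,352, and 153,649 tests respectively) shows why the required number of tests grows rapidly as the parameter of interest becomes rarer.
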
Please note that if your probability and statistics insights are a bit rusty, but you are interested in this talk, then you might want to consider attending the first talk:  "An Intuitive Overview of Probability and Statistics". 
 


Problem Statement:  Software Testing for a Rare Condition "C" 

Suppose that we have a system which monitors a stream of inputs with the objective of detecting a particular condition "C".  You can imagine that this system is implemented mainly as software and that the specification and implementation of condition "C" are very complex.  For instance, the system could be a computer-based system in a chemical factory that must detect the rare occurrence of a dangerous combination of physical conditions (temperatures, pressures, viscosity). 

We first need to distinguish between various types of testing.  Here we will define lab tests as those tests done in the laboratory before the system is put into operational use.  These lab tests will ensure that the system exhibits its specified behavior before being released for operation, and certain probability parameters can be estimated from the frequencies observed during these tests.  After the system is put into operational use, we will need to conduct tests as well; these will be referred to as field tests.

To gain confidence that the implementation is correct with respect to its specification, we want to perform some lab testing.  In particular, we are interested in false positives, i.e., instances where the system incorrectly reports that the inputs satisfy condition "C". A lab test will consist of a sequence of inputs applied to the system--each combination of inputs is called a "test case element".  We have three fundamental questions: 

1)  How do we quantify (in a mathematical way) the level of confidence that we could have in our lab test results and hence subsequent field test results? 

2)  How many lab test cases are required to achieve this level of confidence? 

3)  What assumptions would be needed to make the lab test inputs representative of normal operation? 

With respect to 1), we want to determine the number of lab test case elements that would be required to conclude (on a mathematical basis) that "for 1000 inputs, the system will generate at most one false positive", with some mathematical confidence in this conclusion.  Consider an analogy between this problem and the way that opinion polls are used to draw conclusions about the outcome of an election; consider, for instance, the way that confidence in an opinion poll is quantified, e.g., "19 times out of 20 this result will be accurate".  In other words, the actual conclusion might be "If the system will generate more than one false positive per 1000 inputs during its normal operation, then 19 times out of 20 we will encounter at least one false positive by applying a set of X lab test case elements to the system."  Is this an appropriate way to quantify our confidence?
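
One simple way to make this concrete (a sketch only, assuming each test case element behaves as an independent trial with the same false-positive probability p; the values of p and X are illustrative assumptions) is to compute the chance of seeing at least one false positive in X trials:

    # Probability of at least one false positive in X independent test
    # case elements, if the true per-input false-positive probability is p.
    p = 0.001                  # "more than one false positive per 1000 inputs"
    X = 3000                   # a candidate number of lab test case elements
    print(round(1 - (1 - p) ** X, 3))   # about 0.95, i.e. "19 times out of 20"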

With respect to 2), how do we compute X in the above statement?  Ideally, we would like to have a formula with a variable R for the "acceptable rate for false positives", e.g., R = 0.1%  (i.e., 1 in 1000 false positives). 
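
Under the same simple independence assumption (again, only a sketch of one possible answer, not necessarily the model the seminar will adopt), requiring 1 - (1 - R)^X >= C and solving for X gives X >= ln(1 - C) / ln(1 - R), which for small R is roughly 3/R at 95% confidence:

    # Smallest X such that, if the true false-positive rate is at least R,
    # at least one false positive is seen with confidence C.
    import math

    def required_tests(R, C=0.95):
        return math.ceil(math.log(1 - C) / math.log(1 - R))

    print(required_tests(0.001))   # 2995, i.e. roughly 3 / R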

With respect to 3), for the analogy with opinion polls to hold up, we will have to assume that the lab test cases are independent and, at the same time, representative of the kinds of inputs that would be applied to the system during normal operation.  What issues need to be taken into account to ensure this is the case, and what does this imply about the statement of this problem?