Authors:
Copyright(c) 1995 UniSQL, Inc., All rights reserved
This paper provides a brief description of our product and some of the ways we are ensuring its quality through testing, including an introduction to a branch coverage analysis tool we have found valuable.
UniSQL Server is a commercial client-server object-relational database system that runs on a variety of platforms. The UniSQL Smalltalk Interface consists of approximately 22,000 lines of VisualWorks code and 2000 lines of C code. Our testing has focused on the 160 public Smalltalk API methods.
Although there are a number of error handling variations, the general form of a test method is:

    self describe: 'a description string that is logged'.
    self urequire: {preconditions}.
    {code to exercise a particular feature}
    self uensure: {conditions that demonstrate whether the test worked}
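As a concrete illustration, a test of a database connection might follow this form as sketched below; the class and selector names (UniSQLTest, UniSQLDatabase, connect:, isConnected) are hypothetical stand-ins, not the actual UniSQL API:

    UniSQLTest>>testConnect
        "Exercise connecting to and disconnecting from a test database."
        | db |
        self describe: 'connect to and disconnect from a test database'.
        self urequire: {['testdb' notEmpty]}.
        db := UniSQLDatabase new.
        db connect: 'testdb'.
        self uensure: {[db isConnected]}.
        db disconnect

The preconditions and postconditions are blocks so that they can be evaluated, and their results logged, by the test framework.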
In conjunction with nightly system builds that construct and test the heart of the UniSQL client and server libraries, we use Make and shell scripts to build and test Smalltalk development and delivery images on a number of platforms.
In the non-ENVY version, we first built a bootstrap image manually. This is a vanilla VisualWorks image with just enough infrastructure added so that we can use it to build and test images without manual intervention. This infrastructure establishes a minimal UniSQLBuilder class in the bootstrap image that is smart enough to build images, but that can be extended dynamically. We can then extend the functionality by filing in new versions of supporting methods as part of the nightly build process. The framework can also be extended on a more dynamic basis for individuals without impacting system code.
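The filing-in step can be sketched as follows; the selector and the direct use of fileIn are our illustration of the mechanism, not the actual build code:

    UniSQLBuilder class>>extendFromFile: aFilenameString
        "File in a newer version of the builder's supporting methods,
         replacing the minimal bootstrap definitions."
        aFilenameString asFilename readStream fileIn

Because methods are replaced rather than the class itself, the bootstrap image never needs to be rebuilt when the build logic changes.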
When an image is "hooked up," the UniSQLBuilder class is established as a dependent of ObjectMemory. On startup, a hooked up image files in a full-fledged version of UniSQLBuilder. This fully-functional version then takes appropriate actions based on the values of environment variables, for example the CREATE and TEST flags.
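The startup hook might look something like the following sketch; the selectors fileInFullBuilder, buildImages, and runTests, and the environmentAt: accessor, are hypothetical names intended only to show the shape of the mechanism:

    UniSQLBuilder class>>update: anAspect
        "Invoked via the dependency on ObjectMemory when the image
         starts up after a snapshot."
        anAspect == #returnFromSnapshot ifTrue:
            [self fileInFullBuilder.
             (self environmentAt: 'CREATE') notNil ifTrue: [self buildImages].
             (self environmentAt: 'TEST') notNil ifTrue: [self runTests]]

Driving the image through environment variables lets the same hooked-up image serve both the nightly build scripts and interactive use.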
All code is maintained in Smalltalk change sets and stored in RCS files. Each night, a custom object engine is built for each platform, followed by construction of a development image. From there, customer delivery files are constructed, and a customer-configured image is built and tested. Results of tests that fail are logged.
Although the tests were originally developed in parallel with the interface, some parts of the interface have grown faster than their corresponding tests. We have found that the branch coverage analyzer written by Andy Choi of HP (andyc@hprnd.rose.hp.com) is a valuable tool for validating tests and pointing out areas that need more testing. The tool modifies methods at the bytecode level to indicate which branches have not been executed. It does not attempt to analyze whether all paths through the system have been covered. After you specify the methods you want to trace, you run your application or test suite. You then open a browser on all traced methods that have unexecuted branches. The first message in each unexecuted branch is highlighted in the browser. In the example below, the unexecuted message (the first #asDays message) is underlined.
    "Establish methods to be traced."
    CodeCoverageTool traceClass: Date selector: #subtractDate:.

    "Execute the code."
    Date today subtractDate: (Date newDay: 100 year: 1995).

    "View traced methods with unexecuted branches."
    CodeCoverageBrowser browseUncovered.

    Date>>subtractDate: aDate
        "Answer the number of days between the receiver and aDate."
        ^year = aDate year
            ifTrue: [day - aDate day]
            ifFalse: [self asDays - aDate asDays]
The first message (year) is not highlighted in the browser, which indicates that the method has been called at least once. It also shows that the second branch has not been executed since the method was modified for branch coverage analysis. The tool has been especially useful in revealing oversights in error-checking branches. The best part is that it makes testing more fun.