Author: Gordon Sheppard, American Management Systems, Inc. (gordon_sheppard@mail.amsinc.com)
American Management Systems is in the business of developing complex business applications. The author has designed and developed several OO systems for large organizations. This position paper discusses some of the testing issues encountered in developing Smalltalk applications. Smalltalk applications demand many of the same types of testing as any other application, but OO programs such as those written in Smalltalk also require additional testing, and additional kinds of tests, to account for the differences between OO and traditional development.
Testing consists of the following activities: planning the tests, preparing scripts and data, executing the tests, and reviewing and investigating test results. The types of testing that AMS advocates are unit, integration, proof of concept, system, regression, stress, and exception testing. The level and degree of testing may vary based on factors such as system complexity, mission criticality, number of users, and volume of data and transactions.
The types of testing AMS has used in building OO client/server systems include proof of concept, unit, integration, system, regression, stress, and exception testing. The iterative nature of object-oriented development provides more checkpoints at which such testing can occur than traditional development does. For example, proof of concept testing is used in the early stages of a project to verify assumptions in the technical architecture. Unit and integration testing are used during system construction to verify the functioning of individual components and the integration of those components. As more components are completed, the project must regression test them to verify that new functionality has been added without breaking old functionality. System testing exercises the entire functionality of the system to be delivered at the completion of a logical phase. Stress or performance testing verifies that the system can meet the volume requirements placed on it, and exception testing verifies that the system can recover gracefully from catastrophic failures such as a database server failure.
Testing programs written in Smalltalk (or any OO language) must place additional emphasis on regression testing because changes in superclasses can affect subclass operation. For complete coverage, any change in a superclass necessitates retesting its entire subclass tree. Retesting all subclass functionality whenever a change occurs requires too much effort to execute manually, so automated tools are required to reduce the effort of Smalltalk regression testing.
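One way to automate this retesting, sketched below under the assumption of an SUnit-style framework (a TestCase superclass providing assert: and inherited test methods), is to let the test case hierarchy mirror the domain hierarchy. The Account and SavingsAccount classes and the newAccount method are hypothetical; the sketch only suggests the shape of such a test, not a definitive implementation.

    "AccountTest is assumed to be a subclass of the framework's TestCase.
     SavingsAccountTest is a subclass of AccountTest and therefore inherits
     testDeposit, so a change to Account is automatically retested against
     SavingsAccount as well."

    AccountTest>>newAccount
        "Answer the kind of account this test case exercises."
        ^Account new

    AccountTest>>testDeposit
        | account |
        account := self newAccount.
        account deposit: 100.
        self assert: account balance = 100

    SavingsAccountTest>>newAccount
        ^SavingsAccount new

Rerunning the test case classes for a branch of the hierarchy then regression tests the corresponding subclass tree without re-deriving expected results by hand.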
The iterative nature of the development process means that unit and integration testing occur more frequently for Smalltalk programs. Presumably, more frequent testing helps create better programs, but it also heightens the need for some form of automated testing. A good regression testing framework gives developers the means to create and manage their test scripts as version-controlled Smalltalk source, along with an easy way to recreate unit test data.
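As a sketch of such a framework in use (again assuming an SUnit-style TestCase with setUp and tearDown hooks and a class-side suite message; OrderEntryTest, Customer, and Order are hypothetical classes, with customer and order as instance variables of the test case):

    OrderEntryTest>>setUp
        "Recreate the unit test data from scratch before each test,
         so no test depends on state left behind by another."
        customer := Customer named: 'Test Customer'.
        order := Order for: customer

    OrderEntryTest>>testAddLineItem
        order addItem: 'Widget' quantity: 2.
        self assert: order lineItems size = 1

    OrderEntryTest>>tearDown
        customer := nil.
        order := nil

    "Evaluating this expression in a workspace reruns the whole suite:"
    OrderEntryTest suite run

Because the test methods are ordinary Smalltalk source, they can be versioned in the same configuration management tool as the application code.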
The experimental attitude adopted on most OO development projects encourages the use of proof of concept testing, which is applied in the early stages of a project to determine the feasibility of a given technical approach. By testing the riskiest areas of a design first, projects can avoid costly redesign at later stages. This approach is useful for most client/server projects, not just OO or Smalltalk projects.
System testing, stress testing, and exception testing proceed in much the same way for a Smalltalk project as they would for any other client/server project. For example, unplugging a database server during exception testing follows the same procedure whether the application is written in Smalltalk or C.
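To make the Smalltalk side of such a test concrete, the recovery path it exercises might look like the sketch below. DatabaseConnectionError, the session protocol, and notifyUser: are placeholders for whatever the database access layer and user interface actually provide, not a particular vendor's API.

    "Sketch of the recovery logic exercised when the database server
     is unplugged during exception testing."
    [session execute: aQuery]
        on: DatabaseConnectionError
        do: [:ex |
            self notifyUser: 'Database unavailable; reconnecting'.
            session reconnect.
            ex retry]

The exception test then verifies that the user is notified and that work resumes once the server is available again, which is the same verification one would perform for a C client.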
At present, we do not automate user interface testing for Smalltalk systems. Typically, the interface test is conducted as part of the system test. The expected results are created and verified manually. The author would be interested in hearing about others' experiences with automated interface testing.