--
ThomasFritz - 21 Nov 2005
Review from Thomas Fritz
Problem
The paper presents the AHEAD model, an approach for the refinement of programs as well as their non-code representations based on algebraic hierarchical equations.
According to the authors, programs are refined incrementally by adding new features to modules (i.e. containment hierarchies of artifacts) of an existing program, thus creating a new program. Unlike GenVoca (the authors' previous design methodology for creating application families), in AHEAD a program itself can also be seen as a module, so that it can be refined further and composed with other programs.
Base artifacts in the model are constants that are refined by applying functions to them. An arbitrary sequence of functions can be applied to a constant, resulting in a refinement chain. Given a set of constants, i.e. the base program, a refinement is a function that adds new constants to the program and extends/refines existing ones. Each refinement can thus be seen as a layer that increments the previous one (and that depends only on the previous one). The refined version is then obtained by refining the collectives of the lower layer according to the refinement equations. The operators for composing the collectives are the same on each level (Principle of Abstraction Uniformity). Basing refinement on small steps and using only a small number of simple refinement operators makes the approach scalable and keeps the code generators that apply the refinements simple.
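The algebra described above can be sketched as function composition. This is my own minimal illustration (not the paper's Jak/Java tooling): a program is a collective of named artifacts, a refinement is a function that extends existing artifacts and introduces new ones, and a program equation such as P = colored • weighted • base is evaluated by composing layers bottom-up:

```python
# Base program: a collective mapping artifact names to their members.
# (Artifact names and members here are hypothetical examples.)
base = {"Node": ["int id"], "Graph": ["Node[] nodes"]}

def weighted(program):
    """Refinement layer: extends an existing constant, introduces a new one."""
    p = {name: members[:] for name, members in program.items()}
    p["Node"].append("int weight")        # extend an existing artifact
    p["Edge"] = ["Node src", "Node dst"]  # introduce a new artifact
    return p

def colored(program):
    """A second refinement layer, built only on the layer below it."""
    p = {name: members[:] for name, members in program.items()}
    p["Node"].append("Color color")
    return p

def compose(*layers):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    def composed(program):
        for layer in reversed(layers):
            program = layer(program)
        return program
    return composed

# Evaluate the equation P = colored • weighted • base
program = compose(colored, weighted)(base)
```

The point of the sketch is that each layer only sees the collective produced by the layers below it, which is exactly the incremental structure the paper assumes.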
Furthermore, refinement is not limited to source code artifacts: refinement of non-code artifacts such as makefiles, UML documents, etc. can also be modeled with AHEAD. All artifacts are treated as classes and then refined analogously to source code artifacts (Principle of Uniformity), where the refinements for each artifact type have to be defined separately. These refinements can be used to keep the artifacts consistent with each other.
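The Principle of Uniformity can be illustrated the same way. In this hypothetical sketch (my own example, not taken from the paper), a makefile is treated as a "class" whose members are targets, and a refinement extends and introduces targets exactly as a code refinement extends and introduces methods:

```python
# A non-code artifact as a collective: target name -> list of commands.
# (Target names and commands are illustrative assumptions.)
base_makefile = {"build": ["javac Graph.java"]}

def with_docs(makefile):
    """Refinement of a makefile, analogous to a code refinement."""
    m = {target: cmds[:] for target, cmds in makefile.items()}
    m["build"].append("javadoc Graph.java")  # extend an existing target
    m["clean"] = ["rm -f *.class"]           # introduce a new target
    return m

refined_makefile = with_docs(base_makefile)
```

Because the refinement operator has the same shape for code and non-code artifacts, one equation can in principle refine both in lockstep, which is how the paper argues consistency between representations is maintained.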
Contributions
* presents an algebra to model feature refinement of programs; extends the idea of modeling refinements of an individual program (GenVoca) to an arbitrary number of programs and representations by treating each program as a module that can be refined and composed with other modules
* enables modeling of refinements of non-code artifacts
* presents tool support for AHEAD: composer, a tool that, based on a refinement equation, invokes artifact-specific composition tools; also presents the use of jampack and mixin as code-specific composition tools
* provides information on the application of AHEAD modeling to itself and FSATS (fire support simulators)
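The composer dispatch mentioned in the contributions above can be sketched roughly as follows. The tool names jampack and mixin come from the paper, but the dictionary-based units, the extension-to-tool table, and the string outputs are my own illustrative assumptions, not the actual tool interface:

```python
# Map artifact type (file extension) to a type-specific composition tool.
# Here each "tool" just records what it would compose, for illustration.
TOOLS = {
    "jak": lambda parts: "jampack(" + ", ".join(parts) + ")",
    "mk":  lambda parts: "makecompose(" + ", ".join(parts) + ")",
}

def composer(units):
    """units: modules in composition order, each mapping 'name.ext' -> part."""
    grouped = {}
    for unit in units:  # collect each artifact's parts, bottom layer first
        for artifact, part in unit.items():
            grouped.setdefault(artifact, []).append(part)
    # dispatch each artifact's parts to the tool for its type
    ext = lambda a: a.rsplit(".", 1)[1]
    return {a: TOOLS[ext(a)](parts) for a, parts in grouped.items()}

out = composer([{"Graph.jak": "base"}, {"Graph.jak": "weighted"}])
```

The design point is that composer itself stays generic: it only groups artifacts and delegates, so supporting a new artifact type means registering one more composition tool.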
Weaknesses
* The incremental/layered structure of all refinements does not seem convincing. Can all refinements really be applied incrementally, and why? The authors do not sufficiently support why this should be possible. Moreover, given that they needed 21 layers for 4500 lines of Java code, the approach does not seem very scalable.
* In my opinion, the claim that applying AHEAD to non-code artifacts is both possible and relevant is not sufficiently supported in the paper. Does it really help to use such an algebra for all non-code artifacts, and how?
* The application part does not provide useful information. It would be interesting to see which feature refinements (layers) appear in the equations for AHEAD itself.
* The authors never define the term ‘feature’ or ‘feature refinement’ in a satisfying way. They merely state that “feature refinements are modular, albeit unconventional, building blocks of programs”. What exactly is a feature, especially with respect to refinement?
* So far, the whole approach is based only on static, structural extension of modules, i.e. adding functions and data members. How can dynamic behaviour be changed, or would each dynamic extension imply extending the class and overriding the corresponding methods?
Questions
* see also weaknesses
* Is it really possible to say that all refinements are incremental? (The authors themselves question this in one of their Future Work points by stating that they do not know how refactorings fit into the algebra.)
* Is this approach really scalable to composing programs? Would you then end up with very long refinement chains of constants, or is there a mechanism to abstract from them? At layer n+1, can you only refine the refinement chains of the previous layers, or can you, for example, abstract several of those chains into new constants and refine just the composite constant?
* How would you specify behavioural refinements in the algebra? (Is the algebra not too dependent on the underlying code composers, for example?)
* What is the base model that the feature refinements are applied to, and how is it defined or identified?
* Is it possible to keep the composition operators simple, or would they become more elaborate at higher levels?
Belief
I think the idea of modeling feature refinements as functions applied to previous refinement chains/constants, with those functions encapsulating all the changes that need to be made, is interesting, but the incremental approach does not seem too convincing. Furthermore, the tool itself does not seem to bring anything new compared to, for example, Hyper/J, especially since the claim about refining non-code artifacts is, in my opinion, not sufficiently supported.