-- ThomasFritz - 21 Nov 2005

Review from Thomas Fritz

Problem

The paper presents the AHEAD model, an approach for the refinement of programs as well as their non-code representations, based on algebraic hierarchical equations. According to the authors, programs are refined incrementally by adding new features to modules (i.e., containment hierarchies of artifacts) of an existing program, thus creating a new program. In contrast to GenVoca, the authors' previous design methodology for creating application families, a program itself can also be treated as a module, so that it can be refined further and synthesized with other programs.

Base artifacts in the model are constants that are refined by applying functions to them. An arbitrary set of functions can be applied to a constant, resulting in a refinement chain. Given a set of constants, i.e. the base program, a refinement is seen as a function that adds new constants to the program and extends/refines existing ones. Each refinement can thus be seen as a layer that increments the previous one (and that depends only on the previous one). The refined version is then obtained by refining the collectives of the lower layer according to the refinement equations, where the operators for composing the collectives are the same on each level (Principle of Abstraction Uniformity). Basing the refinement on small steps and having only a small number of simple refinement operators makes the approach scalable and keeps the code generators that apply the refinements simple.
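
As a minimal, hedged sketch of this algebra (the function names f_1 ... f_n and the two-artifact collectives below are invented for illustration and are not taken from the paper):

    % A program as a refinement chain: functions applied to a constant (the base).
    \[
      \mathit{prog} \;=\; f_n \bullet \cdots \bullet f_2 \bullet f_1 \bullet \mathit{base}
    \]
    % Each layer is a collective of artifacts (here a code artifact c and a
    % documentation artifact d); composing layers composes corresponding
    % artifacts with the same operator at every level.
    \[
      [\, c_f,\; d_f \,] \bullet [\, c_b,\; d_b \,]
      \;=\; [\, c_f \bullet c_b,\; d_f \bullet d_b \,]
    \]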

Furthermore, refinement is not limited to source code artifacts; the refinement of non-code artifacts such as makefiles, UML documents, etc. can also be modeled with AHEAD. All artifacts are treated as classes and then refined analogously to source code artifacts (Principle of Uniformity), where the refinements for each artifact type have to be defined separately. These refinements can be used to keep the artifacts consistent with each other.

Contributions

  • presents an algebra to model feature-refinement of programs; extends the idea of modeling refinements of an individual program (GenVoca) to an arbitrary number of programs and representations by looking at each program as a module that can be refined and composed with other modules

  • enables modeling of refinements of non-code artifacts

  • presents tool support for AHEAD: composer, a tool that, based on a refinement equation, invokes artifact-specific composition tools; also presents the use of jampack and mixin as code-specific composition tools (see the sketch after this list)

  • provides information on the application of AHEAD modeling to itself and to FSATS (a fire support simulator)
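
To make the jampack/mixin distinction mentioned above more concrete, here is a small, hedged Java sketch of the idea behind mixin-style composition; the class names and the logging feature are invented for this example and do not reflect the tools' actual input syntax or generated code:

    // Base artifact (a constant in the algebra): a minimal buffer class.
    class BufferBase {
        protected int value;
        void set(int x) { value = x; }
    }

    // A "logging" feature refinement. A mixin-style composer can realize the
    // refinement as a subclass that extends the class produced so far,
    // overriding a method and delegating to the lower layer.
    class BufferWithLogging extends BufferBase {
        @Override
        void set(int x) {
            System.out.println("set(" + x + ")"); // behaviour added by the feature
            super.set(x);                         // reuse the previous layer
        }
    }

A jampack-style composer would instead flatten the base class and the refinement into a single Buffer class whose set() method contains both the added logging code and the original body.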

Weaknesses

  • The incremental/layered structure of all refinements does not seem convincing. Can all refinements really be applied incrementally, and why? The authors do not sufficiently justify why this should be possible. And given the 21 layers they needed for 4500 Java LOC, the approach does not seem very scalable.

  • In my opinion, the claim for the possibility and relevance of applying AHEAD to non-code artifacts is not sufficiently supported in the paper. Does it really help to use such an algebra for all non-code artifacts, and if so, how?

  • The application part provides little useful information. It would be interesting to see which feature refinements (layers) appear in the equations for AHEAD itself.

  • The authors never really define the term ‘feature’ or ‘feature refinement’ in a satisfying way. They just state that “feature refinements are modular, albeit unconventional, building blocks of programs”. What exactly is a feature, especially with respect to refinement?

  • The whole approach is so far based only on the structural (static) extension of modules, i.e. adding functions and data members. But how can you change dynamic behaviour, or would each dynamic extension imply an extension of the class and overriding of the corresponding methods?

Questions

  • see also weaknesses

  • Is it really possible to say that all refinements are incremental? (The authors also question this in one of their Future Work points by stating that they do not know how refactorings fit into the algebra.)

  • Is this approach really scalable to composing programs? Would you then be looking at very long refinement chains of constants, or is there a mechanism to abstract from them? On level n+1, can you only refine the refinement chains of the previous levels, or can you, for example, abstract several of those refinement chains into new constants and refine just the composite constant?

  • How would you specify behavioural refinements in their algebra? (Is the algebra not too dependent on the underlying available code composers, for example?)

  • What is the base model that the feature refinements are applied to, and how do you define or identify it?

  • Is it possible to get by with only simple composition operators, or would they become more elaborate at higher levels?

Belief

I think the idea of modeling feature refinements by applying functions to previous refinement chains/constants, with those functions encapsulating all changes that need to be made, is interesting, but the incremental approach does not seem very convincing. Furthermore, the tool itself does not seem to offer anything new compared to, for example, Hyper/J, especially as the claim about the refinement of non-code artifacts is, in my opinion, not sufficiently supported.

Review from Brian de Alwis

Problem

This paper describes the AHEAD model for synthesizing software systems through the piecewise composition of program features, and gives an overview of the implementation of a set of tools supporting development with AHEAD.

The AHEAD model describes programs using a containment hierarchy, and refinements as functions that transform these hierarchies. Its use of a composition algebra enables the automation of step-wise refinement. Refinements are viewed as functions that produce a feature-augmented program as a result; a program is then the result of composing a set of features with a base program. The system uses a carefully defined form of composition: a composition operator must be defined for each type of element (e.g., Java files, XML files, etc.). Composition is only applied to base-level artefacts -- other artefacts are then regenerated from them.
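
To illustrate the per-type composition operators, here is a hedged Java sketch of a composer that dispatches to an artifact-specific tool based on file extension; the interface, registry, and extensions are assumptions made for this example and not the actual composer's API:

    import java.io.File;
    import java.util.Map;

    // One composition operator per artifact type (e.g. Jak/Java files, XML files).
    interface ArtifactComposer {
        void compose(File base, File refinement, File output);
    }

    class ComposerSketch {
        // Hypothetical registry mapping file extensions to type-specific tools.
        private final Map<String, ArtifactComposer> toolsByExtension;

        ComposerSketch(Map<String, ArtifactComposer> toolsByExtension) {
            this.toolsByExtension = toolsByExtension;
        }

        void compose(File base, File refinement, File output) {
            String name = base.getName();
            String ext = name.substring(name.lastIndexOf('.') + 1);
            ArtifactComposer tool = toolsByExtension.get(ext); // e.g. "jak" -> code composer
            if (tool == null) {
                throw new IllegalArgumentException("No composition tool registered for ." + ext);
            }
            tool.compose(base, refinement, output); // delegate to the type-specific operator
        }
    }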

Contributions

  • Broadens MDSOC ideas to include non-source code files.
  • Defines a model of program generation through hierarchical refinement, and provides a composition algebra with defined semantics.
  • Describes an automated system implementing this refinement process.

Weaknesses

  • Despite the initial focus on product lines (``focus has been on the production of source code for individual programs. This is too limited.'' p. 187), their examples do not appear to be product lines.
  • Fails to address why layers were used for the shared layers of the AHEAD tools (p. 194) instead of implementing them as a library.
  • It would be more encouraging to see assessments of systems either maintained or developed by others.

Questions

  • Development using AHEAD seems to produce a huge number of layers (21 for the STRICOM system, 69 for the AHEAD tools).
    • Does this extensive (excessive?) layering pose comprehension problems for developers?
    • Can all programs be shoe-horned into a hierarchical structure?

  • What is the impact on development? Does product-line development suffer from fragile-superclass issues?

  • Refinement seems to assume that the composition of two things of equivalent type results in a third thing of equivalent type. Are there situations where this merging actually produces something of a different type?

  • Subject-oriented programming and Hyper/J maintained the need for different types of ``glue'' / composition rules. Is AHEAD able to get by with composition alone because of its strictly hierarchical composition?

Belief

  • I believe this is an interesting work. I'd appreciate more discussion comparing and contrasting the different models of program refinement.

Review from Ed McCormick

Problem

  • Step-wise refinement is a method for developing software incrementally by adding features to a simple program. Prior to this work, only ad-hoc methods were available for applying this technique to multiple programs and to non-code representations written in different DSALs (such as design rules or makefiles).

Contributions

  • The AHEAD model: a principled technique for scaling refinement-based generators. AHEAD expresses an arbitrary number of programs and representations as nested sets of equations (a small sketch is given after this list).

  • A detailed description of an AHEAD tool built for code artifacts (a Jakarta project with code, makefiles, and rules).

  • Quantitative results of an experiment using AHEAD tools to build non-trivial systems (FSATS and AHEAD tools).
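
As a hedged sketch of the ``nested sets of equations'' idea (the artifact names a, b and the nested module G are invented for illustration): a module is a set of artifacts that may itself contain modules, and composition recurses into corresponding nested sets:

    \[
      F = \{\, a_F,\; b_F,\; G_F \,\}, \qquad
      H = \{\, a_H,\; b_H,\; G_H \,\}
    \]
    \[
      F \bullet H \;=\; \{\, a_F \bullet a_H,\; b_F \bullet b_H,\; G_F \bullet G_H \,\}
    \]

where the nested modules G_F and G_H are again composed element-wise in the same way.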

Weaknesses

  • Qualitative results on the experiment using AHEAD to construct large systems would have been useful - especially on the debugging/maintenance phases.

  • The AHEAD tool for code artifacts seemed to get bogged down in "what is possible". More on "what works" and "how well" would have made this story more convincing.

  • The principle of uniformity says to treat all non-code artifacts as classes and refine them analogously (basically, to define inheritance relationships between non-code artifacts when needed). By the end of the paper, it is still hard to believe that this is possible.

Questions

  • What do refinement functions look like for DSALs such as Javadoc comments or Word documents?

  • What are the payoffs in using this system? What are the drawbacks?

  • What does the composer tool actually do?