Responding to Jason...
Post by jason
Post by Ron Jeffries
Post by H. S. Lahman
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criterion. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.
All true. Yet I have never encountered a team that was working this way. Is the
technique currently limited to a narrow range of applications? Or still not
widely used? Or ... ?
Executable UML seems limited to a small number of realtime development
teams who have come from a Formal Methods background. Having said
that, the sheer amount of money being pumped into MDA research and
tools development, plus the recent rush by Microsoft, IBM and others
to hire MDA luminaries for their tools & technologies groups suggests
true MDA may not be that far away. It will still take quite some time
to filter into the mainstream (OOP took over 30 years, remember, and
where is AOP?)
Translation grew up in R-T/E and is most commonly practiced there
because that unforgiving environment made good A&D a matter of survival.
However, it has nothing to do with Formal Methods per se. It simply
requires that one define the OOA model in an unambiguous fashion (i.e.,
one can't assume any A&D problems will be identified and fixed later).
IOW, it just /requires/ one to devote the same thinking effort necessary
for good A&D in any methodology rather than making it optional.
[One could have applied translation to SA/SD if the SA/SD models had
played together properly. OO uses essentially all the same diagrams as
SA/SD. OO just modified the SA/SD notations so that the various
diagrams were constructed under the same methodological umbrella to
remove inconsistency and ambiguity between views. However, OO's
emphasis on abstraction makes it relatively easy to separate the
concerns of the customer problem space (functional requirements) from
those of the computing space (nonfunctional requirements). Thus the
first widespread, general-purpose use of translation was in the OO arena.]
Though translation grew up in R-T/E, it is not at all limited to that
playground. Nor is it unique for R-T/E to pioneer software Good
Practices (such as modularity, behavior polymorphism, layered
architectures, block structuring, and a bunch of other ideas we now take
for granted).
The reason the Heavy Hitters are now jumping on the bandwagon is because
(a) the technology has been proven through nearly two decades of use on
real projects and (b) it is an obvious next step in the advance of
automation in software development. The computing space is enormously
complex but it is also highly deterministic because of its ties to a
rigorous computational model. As a result it is ideally suited to
automation.
The inevitability of that evolution has already been clearly
demonstrated. Nobody programs with plug boards anymore and Assembly has
become an arcane specialty for 3GL compiler developers and some
unfortunate souls doing embedded development. RAD IDEs have automated
traditional IT UI-to-DB pipelines by employing translation to hide the
computing space details behind the IDE facade. Almost as soon as GUI
and web paradigms were defined the world was filled with builders that
automated the details behind WYSIWYG IDEs. Display and data storage
were automated first because they were the most narrowly defined
paradigms. Soon to follow were the layered model infrastructures like
COM+ and EJB that hid all the mundane details of transforming one
representation to another in Computer Land.
The technology for general-purpose translation has existed since the
early '70s. Given a well defined, consistent semantic meta model, like
an MDA profile, it is actually surprisingly easy to automate full code
generation from an OOA model. The main technical problem has been with
optimization. Because of the complexity of the computing space there
are a vast number of alternatives and the rules for choosing among them
to satisfy nonfunctional requirements can be very complicated. So
developing a translation engine for a particular computing environment
requires solving a much tougher optimization problem than a 3GL compiler
does.
[For example, at OOP time a developer may need to implement a 1:*
relationship between classes. There are several ways to do that but most
developers make the choice almost instinctively because they recognize
patterns in the collaboration characteristics (e.g. maximum number of
participating instances, average number of participating instances,
whether the participating instances are fixed, etc.) that dictate a
particular, optimal implementation for the relationship in hand.]
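To make that concrete, here is a minimal Python sketch (all class names invented for illustration) of three implementations of the same 1:* relationship, each suited to different collaboration characteristics:

```python
# Hypothetical sketch: one 1:* "Order has LineItems" relationship,
# realized three ways depending on the collaboration characteristics
# a translation engine (or a developer, instinctively) would recognize.

class OrderFixed:
    """Participants fixed at creation and bounded: a tuple suffices."""
    def __init__(self, items):
        self.items = tuple(items)  # immutable, compact

class OrderDynamic:
    """Participants added and removed at runtime: a growable list."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class OrderKeyed:
    """Frequent lookup by identifier: a dict keyed on item id."""
    def __init__(self):
        self.items = {}

    def add(self, item_id, item):
        self.items[item_id] = item
```

A translation engine makes the same selection mechanically, from the multiplicity and access patterns declared in the model, rather than by a developer's instinct.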
It generally took about a decade for a new procedural 3GL's compiler to
have competitive optimization with established 3GL compilers. It took
the '70s, '80s, and much of the '90s to provide reasonable optimization
in translation engines. So it is only recently that translation has
emerged from the Early Adopter stage as a viable, mainstream technique.
Now that it has, the Heavy Hitters are positioning themselves for the
next paradigm shift.
[Interesting aside. In the '90s there was a series of well-publicized
debates at various OO conventions between Steve Mellor and the Three
Amigos (in fact, Steve provided that appellation in the debates) about
whether translation was viable. Despite his contrarian position at the
time, Ivar Jacobson wrote the foreword to Mellor's '02 book. More
significantly, in his keynote address at UML World in '01 he predicted
that writing 3GL code would soon be as rare as writing Assembly is
today. (Someone with a more Machiavellian bias than I might be just a
bit suspicious about those debates.)]
Post by jason
Precise UML modeling, on the other hand, along the lines of Catalysis
or UML Components is enjoying an ever-widening audience. Mostly this
is done on paper or on whiteboards and is not about code generation or
xUML to any degree. Rather, it's about reducing the number of
requirements defects (errors, inconsistencies, omissions) finding
their way into the code. In that sense, adoption of a
quasi-Model-driven approach is very much on the rise.
The technology at issue here is different from Catalysis. For
translation there is complete separation of the customer and computing
space. In addition, Catalysis, though pretty formal, is still an
elaboration technique (i.e., the developer still produces the OOD).
The round-trip and reverse engineering tools would be closer to the
mark, but the deliverable at each stage is still produced by the
developer (albeit with substantial help by automating transfer of
semantic content across stages). Real executable models are only found
in the translation methodologies where the application developer
provides no added value to OOD/P.
Post by jason
Indeed, some of the techniques endorsed by Catalysis can be very
effectively applied in agile methods like XP and SCRUM - provided one
doesn't get bogged down in tools. The idea is not to deliver a model -
rather to ensure everybody who needs to understand the problem has
exactly the same understanding.
XP and SCRUM are development processes, not OOA/D/P methodologies. One
does the OOA/D the same way for translation as for the OOP-based agile
processes (which is another reason why Formal Methods are not
particularly relevant to executable UML). When one eliminates the need
for the application developer to touch OOD/P through translation, one
just changes where the conceptual work is done. In Robert Martin's
words, the OOA model then becomes the "code".
To put it another way, one can apply the practices of XP or SCRUM (e.g.,
TDD, test-first, YAGNI, etc.) to the OOA model within a translation MDA
profile in pretty much the same way as they are currently applied to 3GL
code.
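As a toy illustration of that point, here is a test-first exercise of a tiny executable "model" in Python -- a lamp lifecycle as a trivial state machine. All names are invented; in a real translation profile the model would be expressed in an action language and the 3GL realization generated, not hand-written.

```python
# Illustrative only: a toy executable "model" driven test-first,
# the way XP practices can be applied to an OOA model rather than
# to hand-written 3GL code.

class Lamp:
    def __init__(self):
        self.state = "Off"

    def switch(self):
        # The model's state transition; a translation engine would
        # generate the implementation-level code for this behavior.
        self.state = "On" if self.state == "Off" else "Off"

# Test-first: expected behavior stated as assertions the model must pass.
lamp = Lamp()
assert lamp.state == "Off"
lamp.switch()
assert lamp.state == "On"
lamp.switch()
assert lamp.state == "Off"
```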
Post by jason
One area where MDA will undoubtedly be huge is in web services.
Keeping schemas and code (in several different implementation
languages) in step manually is an absolute nightmare. Tying it all to
a single UML model and then automatically generating the XML schema
and the domain classes, serialization code and persistence code seems
the only reasonable option.
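A minimal sketch of that idea, assuming an invented model format and generator functions: both the XML schema fragment and the serialization code are derived from one model description, so the two cannot drift apart.

```python
# Hedged sketch: a single (invented) model description from which both
# an XSD fragment and serialization code are generated, keeping schema
# and code in step automatically.

MODEL = {"Customer": [("name", "string"), ("credit_limit", "decimal")]}

def to_xsd(model):
    """Emit an XML Schema complexType per model class."""
    lines = []
    for cls, fields in model.items():
        lines.append(f'<xs:complexType name="{cls}">')
        for fname, ftype in fields:
            lines.append(f'  <xs:element name="{fname}" type="xs:{ftype}"/>')
        lines.append('</xs:complexType>')
    return "\n".join(lines)

def to_xml(cls, values, model=MODEL):
    """Serialize an instance according to the same model description."""
    body = "".join(f"<{f}>{values[f]}</{f}>" for f, _ in model[cls])
    return f"<{cls}>{body}</{cls}>"
```

A real MDA tool would of course also generate the domain classes and persistence code from the same source, but the principle is the same: one model, many consistent derived artifacts.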
MDA will be important in any arena where the implementation paradigms
are well defined and choices are deterministic (i.e., the entire
translation can be automated).
[BTW, MDA is a much broader initiative than UML or even software
development. It is much closer to the sort of notion represented by
IEEE standards like 716 (ATLAS and related electronic test specification
standards). Among other things it is intended to support collaboration
between software and non-software (e.g., hardware) contexts, which
implies that the profile include a semantic meta model of the
non-software context as well.]
There is nothing wrong with me that could
not be cured by a capful of Drano.
H. S. Lahman
Pathfinder Solutions -- Put MDA to Work