Discussion:
XP in school
4Space
2003-08-20 15:04:51 UTC
[snip]
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
[snip]

That certainly is interesting. I've always found that amongst non-XP'ers the
perception is that only experienced programmers would succeed, while less
experienced programmers would become hackers.

Cheers,

4Space
Phlip
2003-08-20 15:11:09 UTC
Post by 4Space
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
[snip]
That certainly is interesting. I've always found that amongst non-XP'ers the
perception is that only experienced programmers would succeed, while less
experienced programmers would become hackers.
Ralph says they do. They write healthy tests and crappy code. They don't
refactor between adding each test, because they don't see the benefit of
each tiny change. But the tests say they succeed.

This is where XP spreads the effect of a single guru out wider. Ralph's
homogeneous classes can't do that. A single guru can demonstrate the
refactors, while the tests help the guru avoid getting stuck in the endless
time-wasters that traditional projects drag them down with.

--
Phlip
JXStern
2003-08-21 02:27:40 UTC
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.

Whether this is a good thing or not, well, you do the math.

J.
Ron Jeffries
2003-08-21 10:22:43 UTC
Post by JXStern
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.
Whether this is a good thing or not, well, you do the math.
Please explain how this "known group phenomenon" explains how Ralph's XP teams
do better than the RUP teams.
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.
JXStern
2003-08-22 06:36:39 UTC
On Thu, 21 Aug 2003 06:22:43 -0400, Ron Jeffries wrote:
Post by Ron Jeffries
Post by JXStern
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.
Whether this is a good thing or not, well, you do the math.
Please explain how this "known group phenomenon" explains how Ralph's XP teams
do better than the RUP teams.
It's quite simple, really, Ron. Nine women can make a baby in one
month, this being the same principle behind pair programming.

Beyond that, if you want me to recapitulate the entire argument to
this point and supply you with a complete education in social
psychology as well, make it worth my while.

Geez, I was only observing that the claimed results are entirely
reasonable, and you take it as a challenge?

J.
Ron Jeffries
2003-08-22 10:59:49 UTC
Post by JXStern
It's quite simple, really, Ron. Nine women can make a baby in one
month, this being the same principle behind pair programming.
No, actually the principle behind pair programming is that two people working
together can do certain kinds of work better than the same two people working
alone, in the same amount of time.

It's more like moving tables than having babies.
Post by JXStern
Beyond that, if you want me to recapitulate the entire argument to
this point and supply you with a complete education in social
psychology as well, make it worth my while.
Rudeness objection.
Post by JXStern
Geez, I was only observing that the claimed results are entirely
reasonable, and you take it as a challenge?
No, I asked a question. Ralph described /two/ kinds of groups, groups doing RUP
and groups doing XP, and said that the results were better with XP. You said:
Post by JXStern
Post by Ron Jeffries
Post by JXStern
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.
Since the comparison was group to group, it seemed to me that the "known group
phenomenon" needed spelling out, so I asked:
Post by JXStern
Post by Ron Jeffries
Please explain how this "known group phenomenon" explains how Ralph's XP teams
do better than the RUP teams.
Perhaps you perceive some difference between the groups? If so, what is that
difference?
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.
Donald Roby
2003-08-23 00:43:54 UTC
Post by Ron Jeffries
Post by JXStern
It's quite simple, really, Ron. Nine women can make a baby in one
month, this being the same principle behind pair programming.
No, actually the principle behind pair programming is that two people
working together can do certain kinds of work better than the same two
people working alone, in the same amount of time.
It's more like moving tables than having babies.
Good metaphor. I'll steal it if you don't mind.
Post by Ron Jeffries
Post by JXStern
Beyond that, if you want me to recapitulate the entire argument to this
point and supply you with a complete education in social psychology as
well, make it worth my while.
Rudeness objection.
Post by JXStern
Geez, I was only observing that the claimed results are entirely
reasonable, and you take it as a challenge?
No, I asked a question. Ralph described /two/ kinds of groups, groups
doing RUP and groups doing XP, and said that the results were better with XP:
Post by JXStern
Post by Ron Jeffries
Post by JXStern
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.
Since the comparison was group to group, it seemed to me that the "known
group phenomenon" needed spelling out:
Post by JXStern
Post by Ron Jeffries
Please explain how this "known group phenomenon" explains how Ralph's
XP teams do better than the RUP teams.
Perhaps you perceive some difference between the groups? If so, what is
that difference?
Seems clear to me that Stern's perception of difference was that
pair-programming made the difference.

Note also that Johnson did NOT say that the results were better with XP
universally. He was reasonably specific that teams with little
programming experience fared better with XP than with RUP.

I'm not sure classroom experience such as this has much bearing on
workplace realities, but Johnson also said "Success or failure depends
more on the ability of the students, whether the project is reasonable,
and whether the team works well together than it does on whether the
project was RUP or XP." This seems to me likely to carry over. Native
technical and collaborative ability of the staff are more important than
the process used. And besides being true, it's agile. It's one of the
lines in the manifesto.
JXStern
2003-08-23 03:09:12 UTC
On Fri, 22 Aug 2003 06:59:49 -0400, Ron Jeffries wrote:
Post by Ron Jeffries
Post by JXStern
It's quite simple, really, Ron. Nine women can make a baby in one
month, this being the same principle behind pair programming.
No, actually the principle behind pair programming is that two people working
together can do certain kinds of work better than the same two people working
alone, in the same amount of time.
No better a metaphor that way.
Post by Ron Jeffries
It's more like moving tables than having babies.
Cute, if you accept the implication that individual programmers just
can't do a good job, by the very nature of the task. Which
implication I, for one, reject.
Post by Ron Jeffries
Post by JXStern
Beyond that, if you want me to recapitulate the entire argument to
this point and supply you with a complete education in social
psychology as well, make it worth my while.
Rudeness objection.
Piffle, your question was no better than any kid who logs on to get
his homework done for him.
Post by Ron Jeffries
Post by JXStern
Geez, I was only observing that the claimed results are entirely
reasonable, and you take it as a challenge?
No, I asked a question. Ralph described /two/ kinds of groups, groups doing RUP
and groups doing XP, and said that the results were better with XP. You said:
Post by JXStern
Post by Ron Jeffries
Post by JXStern
Sounds like a known group process phenomenon -- groups tend to perform
close to or slightly above the mean. When I worked at Xerox they put
us through a little playtime exercise along these lines.
Since the comparison was group to group, it seemed to me that the "known group
phenomenon" needed spelling out:
Post by JXStern
Post by Ron Jeffries
Please explain how this "known group phenomenon" explains how Ralph's XP teams
do better than the RUP teams.
Perhaps you perceive some difference between the groups? If so, what is that
difference?
I continue to resist the temptation to spell out the obvious.

Maybe *you* can tell *me* what the difference is "between the groups."

J.

ps - hint: in no way does what I am agreeing to here detract from the
claims of XP; it very nearly gives some degree of validation. What
gets complex is explaining why this turns out NOT to be the case. But
I have explained that so many times over the past few years, it would
seem pointless to attempt again, especially when you refuse to take
yes for an answer on such occasions (like this one) as it is offered.
Ron Jeffries
2003-08-23 03:12:27 UTC
On Fri, 22 Aug 2003 13:15:30 GMT, "Greg Chien" wrote:
Post by Ron Jeffries
Ralph described /two/ kinds of groups, groups doing RUP
and groups doing XP, and said that the results were better with XP.
Looks to me that what Ralph described was not what you perceived (or
believed ;-)
<Quote from Ralph>
I teach both XP and RUP, and projects are either XP projects or RUP
projects. Both kinds of projects can succeed, and both can fail. Success
or failure depends more on the ability of the students, whether the
project is reasonable, and whether the team works well together than it
does on whether the project was RUP or XP.
...
There is one type of team, however, for which XP works much, much better
than RUP. So much better, in fact, that I stopped allowing those teams to
use RUP, even though they often wanted to. These are the teams with
little programming experience.
</Quote from Ralph>
I didn't express myself clearly. I meant "Inexperienced teams using RUP" vs
"Inexperienced teams using XP" as the two kinds of teams. Ralph reports better
results with XP.

Yes?
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.
Greg Chien
2003-08-23 03:45:05 UTC
Post by Ron Jeffries
Post by Ron Jeffries
Ralph described /two/ kinds of groups, groups doing RUP
and groups doing XP, and said that the results were better with XP.
I didn't express myself clearly. I meant "Inexperienced teams using RUP" vs
"Inexperienced teams using XP" as the two kinds of teams. Ralph reports better
results with XP.
Yes?
Spin as you would like ;-)

I don't think pitching an "inexperienced team using XP" can convince any
management to give a GO for any project, except in academic/training
environments (where the programmers pay $$$ to learn something). I think
Donald Roby has given sensible comments in this thread.
--
Best Regards,
Greg Chien
e-mail: remove n.o.S.p.a.m.
http://protodesign-inc.com
Donald Roby
2003-08-23 09:59:34 UTC
Post by Ron Jeffries
Ralph described /two/ kinds of groups, groups doing RUP and groups
doing XP, and said that the results were better with XP.
I didn't express myself clearly. I meant "Inexperienced teams using RUP" vs
"Inexperienced teams using XP" as the two kinds of teams. Ralph reports
better results with XP.
Yes?
Spin as you would like ;-)
I don't think pitching an "inexperienced team using XP" can convince any
management to give a GO for any project, except in academic/training
environments (where the programmers pay $$$ to learn something). I
think Donald Roby has given sensible comments in this thread.
Thanks. No doubt I'll shoot myself in the foot eventually.

I think your point about the value of this in selling XP to management is
correct and rather unfortunate. Of course, management never likes to admit
that it has an inexperienced staff, even though it's common to hire
less experienced people because the salaries don't have to be quite so
high. A manager who hires inexperience should consider XP, if in fact
Johnson's assessment is valid.

And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming. I find it highly believable that
XP would play better with an inexperienced team than RUP, because
inexperienced developers are likely to bog down in use case analysis
before getting anywhere near code.
Phlip
2003-08-23 14:15:05 UTC
Post by Donald Roby
I think your point about the value of this in selling XP to management is
correct and rather unfortunate. Of course, management never likes to admit
that it has an inexperienced staff, even though it's common to hire
less experienced people because the salaries don't have to be quite so
high. A manager who hires inexperience should consider XP, if in fact
Johnson's assessment is valid.
Ideally, management should not micro-manage engineers over engineering
details, such as the location of everyones' desks.

http://groups.yahoo.com/group/extremeprogramming/files/TenCheapActions.pdf
Post by Donald Roby
And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming. I find it highly believable that
XP would play better with an inexperienced team than RUP, because
inexperienced developers are likely to bog down in use case analysis
before getting anywhere near code.
Stern's take is not rational.

If he were our micromanager, which of the following items would he object
to?

1. Authorize Programmers To Stop When They're Tired
2. Test Code Immediately And Automatically
3. Build The Software Twice Every Day
4. Code In A Conference Room
5. Combine QA and Development
6. Add a workspace for the Business Value Decision Maker
7. Turn Off E-Mail Notifications
8. Plan (and Work) by Features
9. Write Down Everything on Index Cards
10. Eat a [Bagel] Together Every Morning

"...If you wrap all these ideas up into a package ... you'll have an agile
software development process, and you'll be well on the road to adopting a
particular process called XP."

I'm curious to hear which of the 10 cheap actions are zealotry.

--
Phlip
JXStern
2003-08-23 18:08:39 UTC
Post by Phlip
Post by Donald Roby
And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming. I find it highly believable that
XP would play better with an inexperienced team than RUP because
inexperienced developers are likely to bog down in use case analysis
before getting any where near code.
Stern's take is not rational.
XP'ers' frequent claim that any disagreement with them is not rational
is itself not rational.
Post by Phlip
If he were our micromanager, which of the following items would he object
to?
1. Authorize Programmers To Stop When They're Tired
2. Test Code Immediately And Automatically
3. Build The Software Twice Every Day
4. Code In A Conference Room
5. Combine QA and Development
6. Add a workspace for the Business Value Decision Maker
7. Turn Off E-Mail Notifications
8. Plan (and Work) by Features
9. Write Down Everything on Index Cards
10. Eat a [Bagel] Together Every Morning
"...If you wrap all these ideas up into a package ... you'll have an agile
software development process, and you'll be well on the road to adopting a
particular process called XP."
These are ten random blurtings about the work environment, and in no
way constitute a coherent set of issues regarding methodology, project
management, or organizational behavior. The only thing they have in
common is the kind of randomized self-aggrandizement that seems
endemic to XP zealotry.
Post by Phlip
I'm curious to hear which of the 10 cheap actions are zealotry.
I fully endorse #1, but it is not an element of methodology.

#2 is extreme, I do object (sic) to it in this form.

#3 is ridiculous. As an individual developer, I probably build the
software many times a day, depending on how long it takes. But there
is no need or benefit from doing system builds that often if you have
ten team members -- the granularity of useful work is many times too
large. There is a good principle in play here, but it does not help
to abuse it like this.

#4? I don't care who codes where.

#5 is an expression of ignorance -- yes TQM says you build in quality
at the lowest level, but that does NOT eliminate the need for external
QA -- it just facilitates the passage of software quickly through that
process, to the overall benefit of the project.

#6 is gibberish. There is much to say about the customer
representative to the development process, and about the customer
relationship to the development process, but I've never been in a
situation where this person, the BVDM (?), was in any way short of
workspace. Quite the opposite!

#7 is my preference, but I wouldn't mandate it either way, and again,
it is no point of methodology.

#8 sounds too simple. It's a good principle to map all details back
to some observable feature, which is what I guess it is supposed to
mean, but sometimes there are technical "refactorings" that have to
take place that are 99% invisible to the user.

#10 is extreme and cheap. I'd give each developer their own bagel, if
they want one, which they may not, if they are on an Atkins/Zone diet.

--

Seems to me that these kinds of concerns under the XP banner express
exactly that trend to micromanagement that I most loathe.

J.
JXStern
2003-08-23 19:20:46 UTC
Post by Phlip
9. Write Down Everything on Index Cards
Sorry, I missed #9.

You find me a 6' x 9' index card I can put my E/R diagram on, and
things will be fine.

J.
Bob Hathaway
2003-08-24 06:34:09 UTC
Post by JXStern
Post by Phlip
Post by Donald Roby
And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming. I find it highly believable that
XP would play better with an inexperienced team than RUP because
inexperienced developers are likely to bog down in use case analysis
before getting any where near code.
Stern's take is not rational.
XP'ers' frequent claim that any disagreement with them is not rational
is itself not rational.
Post by Phlip
If he were our micromanager, which of the following items would he object
to?
1. Authorize Programmers To Stop When They're Tired
2. Test Code Immediately And Automatically
3. Build The Software Twice Every Day
4. Code In A Conference Room
5. Combine QA and Development
6. Add a workspace for the Business Value Decision Maker
7. Turn Off E-Mail Notifications
8. Plan (and Work) by Features
9. Write Down Everything on Index Cards
10. Eat a [Bagel] Together Every Morning
[...]
#3 is ridiculous. As an individual developer, I probably build the
software many times a day, depending on how long it takes. But there
is no need or benefit from doing system builds that often if you have
ten team members -- the granularity of useful work is many times too
large. There is a good principle in play here, but it does not help
to abuse it like this.
I'd have to agree with this. I've worked on projects with a daily build which
propagated to everyone, often making people waste entire mornings realizing
that the reason their program didn't work was that someone else had checked in
buggy code, not their own recent check-in. This is the worst possible approach
in my experience, usually followed by people who simply don't understand the
sophisticated build processes used on large projects.

On my last project we did weekly builds, and all went very smoothly compared to
the first project. Among the 15 or so groups totalling about 100 people, everyone
had plenty of time to get their code reasonably correct before the build. If a team
whose code changed affected another team before the build, that other team could
recompile the required (and checked-in) code themselves and coordinate, which
was common but on an as-needed or desired basis. While any approach will encounter
problems, such as the weekly manager's rush to get a code patch into the build
1 hour beforehand, the stability of a weekly build (with occasional required project
recompiles) is far preferable.

Perhaps I don't understand what daily builds mean, but if it means incorporating
others' code check-ins (or, even worse, forcing all code to be checked in) into
everyone else's dev builds, maybe an XP'er can try to explain why.

Maybe the 'daily builds' people are talking about only checking interfaces (although
that won't work if interfaces are in occasional states of flux) and only compiling running
code against stable builds. Perhaps the teams are so small the disastrous
'other people's bugs' problem doesn't show up every morning (or twice a day?).
Years ago, when I heard Microsoft did daily builds, I attributed it to ignorance
of build process and reasonable SCM tools.
Post by JXStern
#5 is an expression of ignorance -- yes TQM says you build in quality
at the lowest level, but that does NOT eliminate the need for external
QA -- it just facilitates the passage of software quickly through that
process, to the overall benefit of the project.
While I think you know this:
QA is supposed to facilitate reviews (requirements, use cases, test cases, design, and code)
and provide support and training to teams; for example, any chosen methodology should
be supported by QA with training and tools. I've worked at places where some of this activity
was referred to by the senior manager as 'that sounds like something that should be done by QA',
meaning a group on another floor having nothing to do with development. External QA
is important if done properly, but most orgs make it so external that they effectively provide
no support for dev! Clearly good QA occurs within groups, with TSP as a strong example,
while also supporting them.

Regards,
Bob
Ron Jeffries
2003-08-24 12:05:22 UTC
Post by Bob Hathaway
Perhaps I don't understand what daily builds mean, but if it means incorporating
others' code check-ins (or, even worse, forcing all code to be checked in) into
everyone else's dev builds, maybe an XP'er can try to explain why.
Maybe the 'daily builds' people are talking about only checking interfaces (although
that won't work if interfaces are in occasional states of flux) and only compiling running
code against stable builds. Perhaps the teams are so small the disastrous
'other people's bugs' problem doesn't show up every morning (or twice a day?).
Years ago, when I heard Microsoft did daily builds, I attributed it to ignorance
of build process and reasonable SCM tools.
See the Testing and Continuous Integration practices of XP. XP teams check in
their code a couple of times per day per pair.

Unlike the situations you describe, they do this in concert with a suite of very
comprehensive unit tests which is run, and must run perfectly, before anyone
integrates. The result is not chaos but smooth progress.

It would seem, perhaps, that this would make it impossible ever to integrate. I
suppose if one were sufficiently bad at testing or bad at making tests run, it
might. Whether that would be good or bad, I'll leave to the reader.

In practice, however, what happens is that pairs make progress in small steps
which "ratchet" the project forward, (almost) each of which works.

When I was first told of the practice I thought it couldn't possibly work. But
it does: teams all over the world are now doing it and find it both practical
and effective.
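The gate Ron describes can be sketched in a few lines of shell. This is only an illustration: `run_unit_tests` is a hypothetical stand-in for whatever runner the team actually uses, not a real XP tool.

```shell
#!/bin/sh
# Sketch of the XP integration gate: a pair integrates only when the
# entire unit-test suite passes first. run_unit_tests is a placeholder.
run_unit_tests() {
  true    # stand-in: invoke the real suite here
}

if run_unit_tests; then
  verdict="ok to integrate"
else
  verdict="fix before integrating"
fi
echo "$verdict"
```

In a real team the "integrate" branch would go on to merge the latest mainline and commit.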
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.
Phlip
2003-08-24 13:34:12 UTC
Evil bosses have ordered everyone on this newsgroup, at one time or another,
Post by Phlip
1. Authorize Programmers To Stop When They're Tired
2. Test Code Immediately And Automatically
3. Build The Software Twice Every Day
4. Code In A Conference Room
5. Combine QA and Development
6. Add a workspace for the Business Value Decision Maker
7. Turn Off E-Mail Notifications
8. Plan (and Work) by Features
9. Write Down Everything on Index Cards
10. Eat a [Bagel] Together Every Morning
Declaring those things starts a team (and their boss) on a slippery slope
towards a situation that enables even more best practices. The slippery
slope on 3 leads to continuous integration.
Post by Phlip
I'd have to agree with this. I've worked on projects with a daily build which
propagated to everyone, often making people waste entire mornings realizing
that the reason their program didn't work was that someone else had checked in
buggy code, not their own recent check-in. This is the worst possible approach
in my experience, usually followed by people who simply don't understand the
sophisticated build processes used on large projects.
"Daily build" without continuous integration, a partial-rebuild mechanism,
and a midnite timer, sucks. Checking in code without running at least the
build and test scripts local to the changed modules sucks. Ordering everyone
to manually perform a daily build sucks. Not having a chronic build & test
server, running in an infinite loop, sucks.

A system that remains continuously in a fully integrated & built state
doesn't suck.
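The "chronic build & test server" reduces to a loop around a cycle like the sketch below; `build_all` and `run_all_tests` are hypothetical stand-ins, and a real server would wrap the cycle in `while true; do ...; sleep 600; done`.

```shell
#!/bin/sh
# One cycle of a hypothetical always-on build & test server.
build_all()     { true; }   # stand-in: the real clean build script
run_all_tests() { true; }   # stand-in: the full automated suite

if build_all && run_all_tests; then
  status="green"
else
  status="RED"             # a real server would now "oil the squeak"
fi
echo "build $status"
```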

Not having a common build environment sucks. Some programmers are too lazy
to install every library in a project. Not having a small amount of
administrative support for putting all these libraries into a deployable kit
sucks. Not being able to do a complete system build and test, at whim, on
any computer, sucks.

But, full agreement that the need to tell people "do daily builds!" also
sucks.
Post by Phlip
On my last project we did weekly builds, and all went very smoothly compared to
the first project. Among the 15 or so groups totalling about 100 people, everyone
had plenty of time to get their code reasonably correct before the build.
If I integrated each and every time my code got a tiny bit better, say 1 to
20 edits followed by passing tests, would the build script involved slow me
down? That would suck. But imagine if it didn't. The odds of a collision
with others get lower the more often we integrate.
Post by Phlip
If a team
whose code changed affected another team before the build, that other team could
recompile the required (and checked-in) code themselves and coordinate, which
was common but on an as-needed or desired basis.
We are looking at a "resolution & scale" issue. If the project is huge, then
one gets the 50,000 foot view by running a total clean-rebuild-test cycle.
But at the narrow resolution of my little change, I need a partial rebuild
on the local module, passing tests on this module (which indirectly test a
few other modules), before I commit.

But "coordination" should be automatic. The more often we integrate, the
less we need to apply manual processes, or e-mails, or oversight.

Ideally, every time I integrate, the source on my workstation all gets just
a little bit better, and I keep going.
Post by Phlip
While any approach will encounter
problems, such as the weekly manager's rush to get a code patch into the build
1 hour beforehand, the stability of a weekly build (with occasional required project
recompiles) is far preferable.
"Patch"?
Post by Phlip
Perhaps I don't understand what daily builds mean, but if it means incorporating
others' code check-ins (or, even worse, forcing all code to be checked in) into
everyone else's dev builds, maybe an XP'er can try to explain why.
To get instant feedback.

Everyone here knows bosses who delay integration - sometimes permanently.
That leads to "integration hell", which interferes with tracking and time
estimates.

The difficulty, time, and error spread of an integration grow roughly with the
square of the time you delay integrating. So chronic integrations every 10 to
90 minutes add up to less time (and frustration) than sporadic integrations
will. Keep the build machine oiled, run it under load, and oil whatever
squeaks.

Oh, by the way, XP teams (and lots of other teams, too) permit developers,
subject to review, to change any code anywhere. So if you change code I
wrote, I want the change on my desk in 10 to 90 minutes. Not next month, or
never. That would suck.
Post by Phlip
Maybe the 'daily builds' people are talking about only checking interfaces (although
that won't work if interfaces are in occasional states of flux) and only compiling running
code against stable builds. Perhaps the teams are so small the disastrous
'other people's bugs' problem doesn't show up every morning (or twice a day?).
Years ago, when I heard Microsoft did daily builds, I attributed it to ignorance
of build process and reasonable SCM tools.
Uh, if we understand the 'make' clutter of scripts, and how they check
dependencies for a partial recompile, then integration depends on partial
rebuilds. Assume that works, don't force a >total< rebuild just to check in
a script, and leverage the tests to cover a little more than the current
module.
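The dependency check that 'make' uses for partial recompiles can be shown with a pure-shell timestamp comparison; this is an illustrative stand-in for make's own logic, and the file names are hypothetical.

```shell
#!/bin/sh
# Demo of the rule behind partial rebuilds: a target is rebuilt only
# when a prerequisite is newer than it.
workdir=$(mktemp -d)
echo 'source v1' > "$workdir/module.c"
touch -r "$workdir/module.c" "$workdir/module.o"  # object in sync with source
sleep 1
echo 'source v2' > "$workdir/module.c"            # source edited afterwards

if [ "$workdir/module.c" -nt "$workdir/module.o" ]; then
  action="rebuild module.o"       # make would recompile only this target
else
  action="module.o is up to date"
fi
echo "$action"
rm -rf "$workdir"
```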

People who fear "other people's bugs" are unfamiliar with full test coverage
against continuous integration.
Post by Phlip
QA is supposed to facilitate reviews (requirements, use cases, test cases, design, and code)
and provide support and training to teams; for example, any chosen methodology should
be supported by QA with training and tools. I've worked at places where some of this activity
was referred to by the senior manager as 'that sounds like something that should be done by QA',
meaning a group on another floor having nothing to do with development. External QA
is important if done properly, but most orgs make it so external that they effectively provide
no support for dev! Clearly good QA occurs within groups, with TSP as a strong example,
while also supporting them.
Let's call it "QC". Quality "Assurance" implies their job is to prove the
project has such-and-so quality levels. Quality "Control" implies executives
can control quality by turning the knob on that team up.

QC's job is to convert a project's requirements into its high-level
specifications, expressed as tests, and to rate the health of the entire
project. Including how well oiled its build scripts are.

All these objections are about moving from monthly integration to weekly
integration. We need to avoid integration hell. To do that, keep turning the
knob. The more often you integrate, and the more you automate the
integration mechanism, the easier all integrations get.

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces
Uncle Bob (Robert C. Martin)
2003-08-24 21:02:59 UTC
Permalink
Post by Bob Hathaway
I'd have to agree with this. I've worked on projects with a daily build which
propagated to everyone, often making people waste most entire mornings realizing
the reason their program didn't work is because someone else checked in
buggy code and it wasn't their recent checkin, this is the worst possible approach
in my experience, usually followed by people who simply don't understand
sophisticated build processes as followed on large projects.
I work on a system that we build several times per day. There is one
rule in place. You cannot check in new code unless:

0. You build your code test first.
1. All unit tests pass.
2. No previously passing acceptance tests fail.

We occasionally have the problems you describe, but almost every time
it's because someone broke one of the rules.



Robert C. Martin | "Uncle Bob"
Object Mentor Inc.| unclebob @ objectmentor . com
PO Box 5757 | Tel: (800) 338-6716
565 Lakeview Pkwy | Fax: (847) 573-1658 | www.objectmentor.com
Suite 135 | | www.XProgramming.com
Vernon Hills, IL, | Training and Mentoring | www.junit.org
60061 | OO, XP, Java, C++, Python | http://fitnesse.org
Bob Hathaway
2003-08-25 21:15:11 UTC
Permalink
Post by Uncle Bob (Robert C. Martin)
Post by Bob Hathaway
I'd have to agree with this. I've worked on projects with a daily build which
propagated to everyone, often making people waste most entire mornings realizing
the reason their program didn't work is because someone else checked in
buggy code and it wasn't their recent checkin, this is the worst possible approach
in my experience, usually followed by people who simply don't understand
sophisticated build processes as followed on large projects.
I work on a system that we build several times per day. There is one
0. You build your code test first.
1. All unit tests pass.
2. No previously passing acceptance tests fail.
We occasionally have the problems you describe, but almost every time
it's because someone broke one of the rules.
Thanks for the response. But I'm still wondering how, in typical XP CM,
builds are transferred to devs, as opposed to source checkins. I presume a checkin is
labelled to target some build, that checkins can be made simply to save code, and that there
may be branching when different versions are worked on at the same time,
different builds are in various phases on different QC and UAT machines,
and previous versions are still being worked on for prior release patches.

So the question is, when do developers work on a new build (object code), which contains
other team members' and other teams' checkins targeting that build:
- Developers download a new build as they wish to work with it, perhaps twice a week (!)
- Developers must work with the new integration build whenever it is performed, twice a day.
- Implies they must download a codebase.
- Developers work off a shared central codebase, and checkins are immediately compiled and
impact developers' work.

There are really two aspects to CM: source control/versioning/labeling, and builds. I think
I understand some of the rigor in the XP checkin process (bravo) but I'm still unclear on the build process,
specifically when that build must be used by developers. The SCM school runs regression tests
on integration builds, which would include 'acceptance' tests, but standards wouldn't let the dev label
an item for a build unless it passed regression tests. That's a critical difference between quality SCM
(QA labels items as acceptable) and everyone else, who lets developers make the decision, tested or
not. This has been a pet point of mine in SCM for some time, and from your comment that XP
devs don't always follow the rules, it bites XP as well. It surprises me that these published standards
have been in place for decades and are typically ignored in practice; I would presume it's because
most SCM people aren't properly trained in SCM.

Regards,
Bob
Bob Hathaway
2003-08-26 02:48:51 UTC
Permalink
Phlip,

Thanks for the answer, I think I now get the continuous integration idea.
[...]
One example of 2 or more products to ship is "get Version 5 ready for
rollout, but we still must maintain Version 4". They are still >one< code
base, with only conditional compilation and such scripting to distinguish
the second version.
Java doesn't have conditional compilation, and from what I remember in C++ it
can make quite a mess. Is it really better than branching (forking)
and keeping two separate copies? The eventual merge of features is a pain
when branching, but is doable. With very large projects these versions can grow
quite far apart, both within files and in new files, and then I would imagine
conditional compilation could become a nightmare.
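For what it's worth, the closest Java gets to conditional compilation is a compile-time
constant: the language spec explicitly permits the "dead" branch of an if on a constant
boolean precisely to support this idiom, and compilers typically omit the untaken branch
from the bytecode. A rough sketch, with all names (BuildFlags, VERSION_5, the version
strings) invented for illustration:

```java
// Sketch only: class and method names are invented, not from the thread.
// A static final boolean acts like an #ifdef flag: flip it and rebuild
// to get the other product from the same code base.
public class BuildFlags {
    static final boolean VERSION_5 = false; // set true to build Version 5

    static String export(String record) {
        if (VERSION_5) {
            return "v5:" + record; // new-format code path
        } else {
            return "v4:" + record; // maintained old-format path
        }
    }
}
```

As with #ifdef, this stops scaling once the flagged regions multiply and start to
nest, which is exactly the nightmare scenario described above.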
In XP, Devs typically integrate by merging their code with the current CVS
head (or equivalent), running as many tests as they can without wasting time
(starting with the tests on the modules they changed), then commiting their
changes.
We've usually settled for Clearcase dynamic views, and even the nifty new CC projects,
just wondering if CVS was an example or if it is the XP standard.

Thanks,
Bob
Bob Hathaway
2003-08-26 04:50:08 UTC
Permalink
[...]
Asking if XP recommends CVS reveals you might not yet understand the
difference between such a guru's report and XP. Think of XP as a sane
Thanks, I think it's more that it's late...

Regards,
Bob
Ilja Preuss
2003-08-26 07:10:00 UTC
Permalink
Post by Bob Hathaway
One example of 2 or more products to ship is "get Version 5 ready for
rollout, but we still must maintain Version 4". They are still >one< code
base, with only conditional compilation and such scripting to distinguish
the second version.
Java doesn't have conditional compilation, and from what I remember in C++ it
can make quite a mess. Is it really better thank branching (forking)
and keeping two separate copies?
I recently had to make a major change in a component of our product (in Java).
I found a way to do it in tiny little steps, but the intermediate states
were quite inconsistent from a user's perspective, so it wasn't something I
would have wanted to see deployed. As I wasn't very sure about the
estimate, and releases of the latest features sometimes need to happen at
short notice, I had to find a way to support the old way until the new one
was finished.

I decided not to use branching because, well, I don't like it. Instead, I
introduced a system property which, when set, activated my new code (by
something along the lines of the Strategy pattern). Only after I finished my
changes did I make them the default and remove the old code from the system.
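A rough sketch of that arrangement; everything here (the property name pricing.v2, the
Pricer strategy and both implementations) is invented for illustration, not taken from
Ilja's actual code:

```java
// Sketch: a system property selects between the old and new code paths
// through a Strategy interface. All names here are hypothetical.
public class Toggle {
    interface Pricer {                          // the Strategy interface
        double price(double base);
    }
    static class OldPricer implements Pricer {  // current production behaviour
        public double price(double base) { return base * 2; }
    }
    static class NewPricer implements Pricer {  // half-finished new behaviour
        public double price(double base) { return base + 5; }
    }

    // Old code stays the default; running with -Dpricing.v2=true activates
    // the new path. Once the new path is finished, make it the default
    // and delete OldPricer.
    static Pricer choose() {
        return Boolean.getBoolean("pricing.v2") ? new NewPricer()
                                                : new OldPricer();
    }
}
```

The old path remains the default, so a release at short notice ships the old
behaviour untouched, while testers can exercise the new path at any time with
-Dpricing.v2=true.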

It worked like a charm for me. The next time I have to do something like
this, I will try this strategy again.

Regards, Ilja
Phlip
2003-08-26 14:14:36 UTC
Permalink
Post by Ilja Preuss
I recently had to make a major change in a component of our product (in Java).
I found a way to do it in tiny little steps, but the intermediate states
were quite inconsistent from a user's perspective, so it wasn't something I
would have wanted to see deployed. As I wasn't very sure about the
estimate, and releases of the latest features sometimes need to happen at
short notice, I had to find a way to support the old way until the new one
was finished.
To the newbies: "Frequent Releases" does not always mean "frequently to real
end-users". One can productively release to a marketing department that is
aware that the GUI is half this and half that.
Post by Ilja Preuss
I decided not to use branching because, well, I don't like it. Instead, I
introduced a system property which, when set, activated my new code (by
something along the lines of the Strategy pattern). Only after I finished my
changes did I make them the default and remove the old code from the system.
It worked liked a charm for me. The next time I have to do something like
this, I will try this strategy again.
That strategy is the Deprecation Refactor: leave the old system online, in
parallel, while writing the new one; then, when the new one passes all tests,
remove the old one.

--
Phlip
Uncle Bob (Robert C. Martin)
2003-08-29 17:05:07 UTC
Permalink
Post by Bob Hathaway
We've usually settled for Clearcase dynamic views, and even the nifty new CC projects,
just wondering if CVS was an example or if it is the XP standard.
XP doesn't recommend any particular CM system. Of the XP teams that I
work with I see a pretty uniform distribution of CM systems. However,
I don't think any of them actually need anything more than CVS.




Robert C. Martin | "Uncle Bob"
Object Mentor Inc.| unclebob @ objectmentor . com
PO Box 5757 | Tel: (800) 338-6716
565 Lakeview Pkwy | Fax: (847) 573-1658 | www.objectmentor.com
Suite 135 | | www.XProgramming.com
Vernon Hills, IL, | Training and Mentoring | www.junit.org
60061 | OO, XP, Java, C++, Python | http://fitnesse.org
Uncle Bob (Robert C. Martin)
2003-08-29 17:01:47 UTC
Permalink
Post by Bob Hathaway
Post by Uncle Bob (Robert C. Martin)
Post by Bob Hathaway
I'd have to agree with this. I've worked on projects with a daily build which
propagated to everyone, often making people waste most entire mornings realizing
the reason their program didn't work is because someone else checked in
buggy code and it wasn't their recent checkin, this is the worst possible approach
in my experience, usually followed by people who simply don't understand
sophisticated build processes as followed on large projects.
I work on a system that we build several times per day. There is one
0. You build your code test first.
1. All unit tests pass.
2. No previously passing acceptance tests fail.
We occasionally have the problems you describe, but almost every time
it's because someone broke one of the rules.
Thanks for the response. But I'm still wondering how, in typical XP CM,
builds are transferred to devs, as opposed to source checkins. I presume a checkin is
labelled to target some build, that checkins can be made simply to save code, and that there
may be branching when different versions are worked on at the same time,
different builds are in various phases on different QC and UAT machines,
and previous versions are still being worked on for prior release patches.
An XP project tries very hard never to let any of that happen. In an
XP project every checkin *is* a complete build, with all unit tests
passing, and all appropriate acceptance tests passing. The iteration
proceeds from checkin to checkin, creating builds of ever greater
functionality.

So if we can help it, there will be nobody working on build 1.3 while
other people are working on build 1.4. There will be no branches.
Each check-in creates a build that could theoretically be placed in
production.
Post by Bob Hathaway
So the question is, when do developers work on a new build (object code), which contains
- Developers download a new build as they wish to work with, perhaps twice a week (!)
- Developers must work with the new integration build whenever it is performed, twice a day.
- Implies they must download a codebase.
- Developers work off a shared central codebase, and checkins are immediately compiled and
impact developers' work.
The latter is closest to the way we work. Every checkin is an
integration. There is no final integration.



Robert C. Martin | "Uncle Bob"
Object Mentor Inc.| unclebob @ objectmentor . com
PO Box 5757 | Tel: (800) 338-6716
565 Lakeview Pkwy | Fax: (847) 573-1658 | www.objectmentor.com
Suite 135 | | www.XProgramming.com
Vernon Hills, IL, | Training and Mentoring | www.junit.org
60061 | OO, XP, Java, C++, Python | http://fitnesse.org
JXStern
2003-08-23 17:52:29 UTC
Permalink
On Sat, 23 Aug 2003 09:59:34 GMT, "Donald Roby"
Post by Donald Roby
And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming.
Now, I didn't say the entire effect was, I just suggested that a
simple statistical/group effect was in play and pushed in that same
direction.
Post by Donald Roby
I find it highly believable that
XP would play better with an inexperienced team than RUP because
inexperienced developers are likely to bog down in use case analysis
before getting anywhere near code.
Sure, fine. There are the usual questions of scale in the toy
projects students deal with in academia, and other issues.

My objection is that a methodology that takes individuals from level
zero to level one, may not be useful in the jumps up to levels two
through ten.

It is a hugely common experience in software and the world, that the
better a system is for the novice, the more limiting it is for
professionals. Maybe XP is the training wheels for teaching software
engineering. I doubt it, but maybe.

Joshua Stern
Shayne Wissler
2003-08-23 20:13:50 UTC
Permalink
Post by JXStern
It is a hugely common experience in software and the world, that the
better a system is for the novice, the more limiting it is for
professionals. Maybe XP is the training wheels for teaching software
engineering. I doubt it, but maybe.
I think some aspect of it could be construed that way.

Someone with a lot of experience can productively design for extended
periods without doing any coding at all, whereas the unskilled would just
flounder--they have to code in tiny steps because they can only think in
tiny steps.

But this is unsurprising. It is universally true that those with more skill
and experience--in any area of human life--can more effectively plan ahead
than the novice. But I wouldn't go so far as to say that XP is good for the
novice--it's only good relative to the fantastically stupid idea of trying
to get beginner programmers to use RUP.


Shayne Wissler
Kevin Cline
2003-09-02 16:21:27 UTC
Permalink
Post by Donald Roby
And contrary to Stern's apparent take on this, I don't in fact think the
entire effect is from pair programming. I find it highly believable that
XP would play better with an inexperienced team than RUP because
inexperienced developers are likely to bog down in use case analysis
before getting anywhere near code.
A good point. Analysis paralysis is definitely a product of
inexperience. Knowing when to stop is what separates the good
analysts from the hacks, especially in an iterative and incremental
process. And programmers do not always make good analysts (and vice
versa). The detail-orientation and focus on specifics that benefit a
good programmer can result in churn when doing analysis.
The ability to abstract away from detail is important whether
programming or designing. The way to stop analysis churn is to
analyze some and implement some. Until you implement, you have no
idea if your analysis is any good.
Cy Coe
2003-09-02 21:26:57 UTC
Permalink
Post by Kevin Cline
A good point. Analysis paralysis is definitely a product of
inexperience. Knowing when to stop is what separates the good
analysts from the hacks, especially in an iterative and incremental
process. And programmers do not always make good analysts (and vice
versa). The detail-orientation and focus on specifics that benefit a
good programmer can result in churn when doing analysis.
The ability to abstract away from detail is important whether
programming or designing. The way to stop analysis churn is to
analyze some and implement some. Until you implement, you have no
idea if your analysis is any good.
I agree with this, to a point. Implementing a small piece of
functionality would provide feedback on those specific features, but
may say little about how well the broader solution holds together.
And such feedback is not an excuse to avoid exploring the solution in
toto through domain and requirements analysis. I think a certain
amount of upfront analysis, divorced from coding and other detailed
implementation work, is an absolute must in any non-trivial project.
Maybe you only spend a couple of weeks or so, but I think there needs
to be a buffer between first-cut conceptual work and implementation.

This is not to say that I want analysis to be *completely* upfront.
Like any other element in an iterative and incremental process, the
analysis and understanding of the problem domain evolve with each
iteration. But I'm quite happy to have the first iteration in a
RUP-like process release nothing in the way of working code.


Cy
Ron Jeffries
2003-09-03 23:05:44 UTC
Permalink
In my experience this exercise would help us learn much more than up
front analysis that was divorced from implementation.
Then I'm afraid we're in disagreement. I consider the opposite to be
true.
Have you tried it both ways? Bob has, and I have. If you have, I'd be very
interested to hear more about the experience.
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Ron Jeffries
2003-09-04 11:36:23 UTC
Permalink
Post by Ron Jeffries
In my experience this exercise would help us learn much more than up
front analysis that was divorced from implementation.
Then I'm afraid we're in disagreement. I consider the opposite to be
true.
Have you tried it both ways?
I've been on projects that have done similar things to what Bob
described, but in addition to, as opposed to instead of, the type of
analysis I described. And in those instances, I found it as much an
annoyance as a help. But no, I haven't been on a large project that
did end-to-end prototyping or simulation in the manner Bob described
*in lieu* of implementation-agnostic analysis.
Actually I believe that Bob is not talking about either prototyping or
simulation. I believe he is talking about incremental development. Of course he
could have changed his views since I saw him a couple of weeks ago, so maybe
he'll chime in here.
Post by Ron Jeffries
Bob has, and I have. If you have, I'd be very
interested to hear more about the experience.
You're using a well-worn Usenet debating tactic here.
It's a well-worn tactic because of its essential truth: we don't know how to
choose between two things very well if we have only tried one of them.

PC vs Mac? Mac sucks! Ever tried one? No, they suck!
I don't really
think you're interested in my experiences, particularly if they don't
back up your viewpoint.
I'd advise you to get some experience, post it, see how I respond to it, rather
than theorize about how I might respond if you actually did it. If I display no
interest (and I don't read everything, so feel free to ping me via email), then
call me a liar. Until then, it might be inappropriate.

I freely grant that I'm more interested in having you try the ideas before
deciding, but if you were to try them and find little or no value I would be
very interested in how it turned out that way.
Reports based on your (or Bob's) experience, however expansive that
experience may be, still amount to nothing more than anecdotal
evidence, filtered through the perceptions of individuals with very
strong opinions and personal biases on the matter. And yes, it would
be no different with me.
Yes, except that you would /believe/ yourself. So you're the only person who can
do the experiment that matters to you. That's why I inquired.
It's much easier to measure whether or not you're doing a *thing
right* than whether or not you're doing the *right thing*. And it's
the latter that I focus on in my work.
Well, yes. And unless we try lots of potentially right things, how do we learn
to choose what's right?

Yes, I know that we all know without trying it that jumping off a tall building
is a bad idea. But incremental development and refactoring and such aren't
jumping off a tall building. They are well-known, long-standing techniques with
honored history. People are trying them at a higher frequency, in a more
integrated fashion, and reporting interesting results.

It could be worth a real look -- lots of folks think so.
I sometimes get the feeling you really do see this whole debate as a
white hats, black hats sort of thing.
Where did that come from? I just asked whether you had actually tried the
technique and offered to listen to your experience. I see no hats here.
Well, if that's the case, then
fit me for a black hat right now, because I think Extreme Programming
represents a backslide in the discipline of software engineering. And
no, I'm *not* going to try it before saying so!
That's fine. I think Ferraris suck, and I don't mind saying so before trying
one. ;->

Regards,
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Ron Jeffries
2003-09-05 02:46:02 UTC
Permalink
Post by Ron Jeffries
Actually I believe that Bob is not talking about either prototyping or
simulation. I believe he is talking about incremental development. Of course he
could have changed his views since I saw him a couple of weeks ago, so maybe
he'll chime in here.
The specific scenario he described was that instead of spending two
weeks on implementation-agnostic analysis, instead use that time to
build a crude, stripped-down end-to-end version of the system, one
which may only handle the basic inputs and outputs, or that might
process only one record. I consider this a prototype or simulation,
rather than a first-cut of the actual system.
I think Bob considers it a first cut. I know that I do.
Post by Ron Jeffries
Post by Ron Jeffries
Bob has, and I have. If you have, I'd be very
interested to hear more about the experience.
You're using a well-worn Usenet debating tactic here.
It's a well-worn tactic because of its essential truth: we don't know how to
choose between two things very well if we have only tried one of them.
PC vs Mac? Mac sucks! Ever tried one? No, they suck!
Have you tried MDA?
No, I have not. I have done fairly heavy up front analysis and design, and have
drawn a lot of UMLish models in my time, but I have not done MDA. Don't know
much about it.
Post by Ron Jeffries
I freely grant that I'm more interested in having you try the ideas before
deciding, but if you were to try them and find little or no value I would be
very interested in how it turned out that way.
Based on how I've seen you respond to others, you would try to dismiss
or invalidate any negative experience I reported, either by claiming I
applied the practises poorly or broke some rule, thus invalidating the
experiment.
I trust that my actual behavior is not quite that simple. However, if you were
to try a practice that works for me, and it didn't work for you, I certainly
would try to figure out what the differences were, and one likely point of
difference would be that we were actually doing different things.
Funny how you only seem to apply this sort of scrutiny to people who
claim to have used XP and found it unsatisfactory. You appear to take
those with glowing reports at their word that they used the
methodology fully and correctly. I suppose you consider it the
responsibility of those on the opposing side to "cross-examine" the
witnesses to determine their XP-compliance.
That's a good point. I think my actual behavior follows that pattern even more
broadly. When someone tells me something that worked, there's usually still some
stuff that didn't work so well. My natural instinct is to dig into the things
that could have gone better, so as to figure out how to do things better. It is
possible, of course, that when someone says "I tried pair programming and it
worked great", I assume he did it the same way I do, yet in fact he didn't. Even
so, though, the things that work for people are less interesting to me than the
trouble spots.
Post by Ron Jeffries
That's fine. I think Ferraris suck, and I don't mind saying so before trying
one. ;->
I can't say that Ferraris suck, but I know I wouldn't want to drive
one. Why? Because I don't like driving cars with standard
transmissions. That pretty much leaves out any kind of sports cars.
Well, actually, many current sports cars have what is called "sequential manual
gearboxes", which are manual transmissions with automatic clutches.
Similarly, I know I wouldn't like XP. Why? Because I don't like
coding, and XP would force me to code.
That's interesting. Certainly XP does expect the developers to code. Do you do
software development? How do you do it without coding?

Regards,
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Cy Coe
2003-09-05 08:39:35 UTC
Permalink
Post by Ron Jeffries
Have you tried MDA?
No, I have not. I have done fairly heavy up front analysis and design, and have
drawn a lot of UMLish models in my time, but I have not done MDA. Don't know
much about it.
I was asking for a simple reason. Although you have never tried it
(neither have I, by the way), and admit to not knowing much about it,
I imagine you probably don't feel that strong an affinity for the
basic concept. You appear to prefer working in conventional source
code to working with diagrammatic models, so you probably don't need
to try an MDA approach to know it's not a direction you'd want to be
going in.

Here's another question. Have you ever worked on a RUP project? For
the purpose of this question, RUP's pre-unification ancestors don't
count.
Post by Ron Jeffries
Similarly, I know I wouldn't like XP. Why? Because I don't like
coding, and XP would force me to code.
That's interesting. Certainly XP does expect the developers to code. Do you do
software development? How do you do it without coding?
My involvement in software development straddles somewhat the
Customer/Developer line XP establishes, but is far more heavily
weighted to what you would consider the territory of the Customer
team. So no, I don't try to tell coders what objects to use. In my
current incarnation, I'm more of a business/requirements guy.

I'm aware that XP teams employ business analysts, but there still
seems to be a cultural bias against having any kind of analyst
involved. Bob Martin considers business analysts a given in many
projects, but your article on business analysts in XP registers with
me as more of a "grudging acceptance" of the concept than an
endorsement. As portrayed in your article, the business analyst
sounds like a very low-status position, one on which basing a career
would be difficult. And previous discussions with you have
established that you like Customer teams to consist of SME's rather
than analysts.

So perhaps I should amend my statement to say that I know I
wouldn't like XP because I don't like coding and have little desire to
be hired as a line-area end-user of corporate software just so I can
participate in IT projects.


Cy
Ron Jeffries
2003-09-05 12:59:49 UTC
Permalink
Post by Cy Coe
Post by Ron Jeffries
Have you tried MDA?
No, I have not. I have done fairly heavy up front analysis and design, and have
drawn a lot of UMLish models in my time, but I have not done MDA. Don't know
much about it.
I was asking for a simple reason. Although you have never tried it
(neither have I, by the way), and admit to not knowing much about it,
I imagine you probably don't feel that strong an affinity for the
basic concept. You appear to prefer working in conventional source
code to working with diagrammatic models, so you probably don't need
to try an MDA approach to know it's not a direction you'd want to be
going in.
True. And if all you're saying is similar, namely that you don't think you would
like XP, I'm OK with that.
Post by Cy Coe
Here's another question. Have you ever worked on a RUP project? For
the purpose of this question, RUP's pre-unification ancestors don't
count.
No, never. However, I wouldn't mind it, especially if I could be involved in
setting up the development case. RUP can be done in quite an agile fashion. That
it often is not is not a failing of RUP but perhaps of those who set it up.
Post by Cy Coe
Post by Ron Jeffries
Similarly, I know I wouldn't like XP. Why? Because I don't like
coding, and XP would force me to code.
That's interesting. Certainly XP does expect the developers to code. Do you do
software development? How do you do it without coding?
My involvement in software development straddles somewhat the
Customer/Developer line XP establishes, but is far more heavily
weighted to what you would consider the territory of the Customer
team. So no, I don't try to tell coders what objects to use. In my
current incarnation, I'm more of a business/requirements guy.
OK ... then you could readily be an XP customer, it would seem ... I'll read on.
Post by Cy Coe
I'm aware that XP teams employ business analysts, but there still
seems to be a cultural bias against having any kind of analyst
involved. Bob Martin considers business analysts a given in many
projects, but your article on business analysts in XP registers with
me as more of a "grudging acceptance" of the concept than an
endorsement. As portrayed in your article, the business analyst
sounds like a very low-status position, one on which basing a career
would be difficult. And previous discussions with you have
established that you like Customer teams to consist of SME's rather
than analysts.
SME, SME, I can't connect that to a phrase. In any case, yes, I would rather
have real users, real stakeholders, than analysts. This is consistent with XP's
values of simplicity, communication, and feedback.

Simplicity: The project has to please users and stakeholders. Therefore, where
possible, talk directly with them.

Communication: If analysts are used, placing them /in between/ real stakeholders
and programmers threatens the telephone game. Direct communication is to be
preferred, even with analysts there to facilitate getting the right ideas on the
table.

Feedback: Programmers will get faster and more accurate feedback about
requirements from stakeholders who are present. Stakeholders will get faster and
more accurate feedback about how things are going by being present.

I don't know whether "grudging acceptance" was my frame of mind when I wrote
that article or not. I do stand by its conclusion, namely

Customers don't always know how to express what they need, or how to
make the whole system hang together. To provide this necessary ingredient, teams
sometimes place business analysts or product managers between customer and
programmers. Business analysts should work as aides to the customer, not as an
interface between customer and programmer. This keeps rapid accurate feedback in
place while providing coherence and consistency to the product.
Post by Cy Coe
So perhaps I should amend my statement to say that I know I
wouldn't like XP because I don't like coding and have little desire to
be hired as a line-area end-user of corporate software just so I can
participate in IT projects.
Well, OK, there's no law against not liking stuff, even Ferraris. However, if
your reason is that you'd have to become an end user to participate, you may be
laboring under a misunderstanding.

Regards,
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Cy Coe
2003-09-05 17:05:22 UTC
Permalink
Post by Ron Jeffries
Post by Cy Coe
I was asking for a simple reason. Although you have never tried it
(neither have I, by the way), and admit to not knowing much about it,
I imagine you probably don't feel that strong an affinity for the
basic concept. You appear to prefer working in conventional source
code to working with diagrammatic models, so you probably don't need
to try an MDA approach to know it's not a direction you'd want to be
going in.
True. And if all you're saying is similar, namely that you don't think you would
like XP, I'm OK with that.
I have philosophical concerns with XP, but I've never claimed that it
doesn't produce good software. Let's just leave it at that for now.
Post by Ron Jeffries
Post by Cy Coe
Here's another question. Have you ever worked on a RUP project? For
the purpose of this question, RUP's pre-unification ancestors don't
count.
No, never. However, I wouldn't mind it, especially if I could be involved in
setting up the development case. RUP can be done in quite an agile fashion. That
it often is not is a failing not in RUP but perhaps in those who set it up.
That's a concern that I have. You folks sometimes speak of the Agile
methodologies as if they're the only ones that are iterative and
incremental. The fact that many people implement RUP in a
waterfall-like manner makes it an easy target for some of you. But
just as you object to misapplications of XP being used to critique the
methodology, neither should RUP be judged by those who apply it
incorrectly.

And seeing that RUP is out there, has gotten a lot of attention and
has influenced the type of thinking that you're contending with in
your XP work, perhaps it might be a good idea to get some of that kind
of first-hand experience with it that you feel beneficial to those
concerned about XP. Just a friendly suggestion.
Post by Ron Jeffries
Post by Cy Coe
I'm aware that XP teams employ business analysts, but there still
seems to be a cultural bias against having any kind of analyst
involved. Bob Martin considers business analysts a given in many
projects, but your article on business analysts in XP registers with
me as more of a "grudging acceptance" of the concept than an
endorsement. As portrayed in your article, the business analyst
sounds like a very low-status position, one on which basing a career
would be difficult. And previous discussions with you have
established that you like Customer teams to consist of SME's rather
than analysts.
SME, SME, I can't connect that to a phrase.
Subject-matter expert. Someone whose contribution is based on their
detailed business knowledge.


Cy
Universe
2003-09-09 01:02:30 UTC
Post by Ron Jeffries
Post by Cy Coe
Post by Ron Jeffries
Have you tried MDA?
No, I have not. I have done fairly heavy up front analysis and design, and have
drawn a lot of UMLish models in my time, but I have not done MDA. Don't know
much about it.
I was asking for a simple reason. Although you have never tried it
(neither have I, by the way), and admit to not knowing much about it,
I imagine you probably don't feel that strong an affinity for the
basic concept. You appear to prefer working in conventional source
code to working with diagrammatic models, so you probably don't need
to try an MDA approach to know it's not a direction you'd want to be
going in.
So feelings determine the generally correct methodology for most
software development, enhancement, and maintenance projects?

It's one thing to prefer working on particular kinds of projects that
operate as you like, but it's a whole 'nother thing to determine what
is objectively appropriate for any one project, or the whole mass of
projects taken as a whole.
Post by Ron Jeffries
True. And if all you're saying is similar, namely that you don't think you would
like XP, I'm OK wtih that.
Sure, but the world is different from what most xp/allies desire.
xp/allies are big on "well, this is what I learned in my experience,"
not realizing or accepting that there is such a thing as *objective*
truth.

And in the objective truth regarding the overall nature of software
engineering, xp/allie has been shown to be inappropriate for at least a
majority of projects.

This objective truth was achieved by applying the scientific method to
the directly and indirectly evaluated experience of the larger share
of projects, and developers. Experience of interest has spanned
decades since the inception of electronic computing in the '40s.

Hammered out in the works of most software engineering notables -
Dahl, Hoare, Wirth, Dijkstra, Constantine, Yourdon, Coad, *JAMES*
Martin, Wirfs-Brock, Cardelli, Wegner, Stroustrup, Riehle, Berard,
Henderson-Sellers, Liskov, Budd, earlier Booch, earlier Rumbaugh,
earlier Jacobson, Sarson, Kay, Nygaard, Vlissides, Lakos, etc.

It takes about 2 years to read much of what most of these write, while
also linking it to other direct and indirect observation - case studies,
articles, other books, your own experience (all of these, as with the
summaries of any one of the aforementioned, you should take within
limits one sees as apt).

The thing is to "mush" all this science together, over 2-3 years, and
come up with your conclusion. At least a plurality of contemporary
software engineers come up favoring RUP and opposed to xp/allie in the
general case.

Elliott
Phlip
2003-09-09 02:46:30 UTC
Post by Universe
Elliott
Just wondered ... what is the distinction between earlier Booch, Rumbaugh,
and Jacobson?
Shane
He means before any of them heard of XP.

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces
Universe
2003-09-09 05:49:36 UTC
Post by Universe
Hammered out in the works of most software engineering notables -
Dahl, Hoare, Wirth, Dijkstra, Constantine, Yourdon, Coad, *JAMES*
Martin, Wirfs-Brock, Cardelli, Wegner, Stroustrup, Riehle, Berard,
Henderson-Sellers, Liskov, Budd, earlier Booch, earlier Rumbaugh,
earlier Jacobson, Sarson, Kay, Nygaard, Vlissides, Lakos, etc.
It takes about 2 years to read much of what most of these write, while
also linking it to other direct and indirect observation - case studies,
articles, other books, your own experience (all of these, as with the
summaries of any one of the aforementioned, you should take within
limits one sees as apt).
The thing is to "mush" all this science together, over 2-3 years, and
come up with your conclusion. At least a plurality of contemporary
software engineers come up favoring RUP and opposed to xp/allie in the
general case.
Elliott
"Just wondered ... what is the distinction between earlier Booch, Rumbaugh,
and Jacobson as opposed to later works by the same authors?"
Shane
Their earlier works, compared to the latest releases of UML, RUP, and
individual articles and interviews were more:

A.) "simulationist"
They approached system design as more oriented around the
essential LOI (logical object interaction) given in key application
domain entities and their collaborative relationships.


B.) "collaborative"
Now they deprecate models that depict objects and classes
operating dynamically in collaborative relationships; I abjure
that deprecation.


C.) comprehensive and expansive in terms of part/wholes
They made more, and good, use of the part/whole expressed as
aggregation and composition; now they deprecate and are eliminating
aggregation. Kilov et al at a Usenix made a strong case for expanding
part/whole types and widening their coverage; I agree.

D.) wedded to the scientific method
Booch, in a recent interview about models, patterns and
architecture, denied that science had anything to do with it!! He
also claimed it was pointless at this time to catalog architectural
patterns, when there are already a number of very good books that do
in fact catalog and explain architectural patterns in terms of
specific project needs and circumstances. My "OO Layered
Architecture" should be seen as a high-level cataloguing and
explanation of layering and OO system architectural design issues.


In general they had exuded more of 1) "joie de vivre", 2) intellectual
vigor, 3) a "big picture is key, while details count" outlook, and 4) a
"world of wonder" sense and perspective.



Perhaps more later. Feel free to ask.

Elliott
"The whole problem with the world is that fools and
fanatics are always so certain of themselves, but
wiser people so full of doubts." (B. Russell)
-----
Actually, *part* of the problem is the certainty of
those so full of it about doubt.
Universe
2003-09-09 23:19:30 UTC
Post by Universe
"Just wondered ... what is the distinction between earlier Booch,
Rumbaugh, and Jacobson as opposed to later works by the same authors?"
Their earlier works, compared to the latest releases of UML, RUP, and
A.) "simulationist"
...
B.) abstraction collaborative interaction oriented
Now they deprecate models that depict objects and classes
operating dynamically in collaborative relationships; I abjure
Formerly they all placed more emphasis on dynamic models and how
classes and objects interacted. Now in UML they state that single-class
static models should be sufficient in most cases. They also state that
object models of all kinds are mostly not necessary. I take it they mean
not useful. Gosh, that's news to me! News to how I see systems, where the
instantaneous population, kind and effect of objects may vary greatly and
where depicting various states of object play can deliver tremendously
powerful insights.**
Correction. While failing to emphasize object models, including talk
about eliminating "link"** notation from object diagrams (links are
not used in UML collaborative class diagrams), UML authors explicitly
mention having no use for class diagrams beyond static class diagrams
that depict a *single* class. That means eliminating what is probably
the diagram most associated with OO modelling--the dynamic
*class* collaboration modelling diagram.

** ["Link" denotes a general, "non-specific" connection relationship
between 2 object instances. Often links are depicted as bundles of
links that connects a set of objects at one end with another set of
objects at the opposite end. Typically, the set at any end are
multiple instantiations of a single class, or multiple instantiations
of the various related class types in an inheritance hierarchy.]

The dynamic *class* collaboration modelling diagram usually depicts a
number of class boxes with relationship lines forming connections
amongst all classes in the diagram. Or at least between all key
classes. Some inheritance hierarchy class types may be drawn
free-standing, for the purpose of documenting that class on its own,
where showing just the base hierarchy class connecting to other
non-hierarchy classes is in all likelihood sufficient for the particular
context within which the dynamic class diagram has been created and employed.
Post by Universe
C. comprehensive and expansive in terms of part/wholes
...
D.) deprecation and denial that the scientific method should and can be applied
...
In general they had exuded more of 1) "joie de vivre", 2) intellectual
vigor, 3) a "big picture is key, while details count" outlook, and 4) a
"world of wonder" sense and perspective.
Elliott

*Part* of the problem is the certainty of those so full of it--so full of invalid doubt.
Phlip
2003-09-09 16:51:57 UTC
Post by Universe
...
That "blurb" along with last week's witless "Mrs. Coates" "shtick"
by Phlip
Elliott
Dude, I did not write those "Mrs. Coates" posts last week, and I ain't
gonna congratulate whoever did.

I would hope to think I'm more subtle...

But provoking you is too danged easy to find any sport in it.

And I hope you realize nearly anyone here had motivation.

Show a little mental health.
--
Phlip
http://clublet.com/c/c/why?ZenoBuddhism
-- Please state the nature of the programming emergency --
Universe
2003-09-09 01:45:01 UTC
...Bob Martin considers business analysts a given in many
projects, but your article on business analysts in XP registers with
me as more of a "grudging acceptance" of the concept than an
endorsement.
Believe me now, but you can check the Google archives later
tonight/tomorrow on - 'analyst' 'martin' - RCM *definitely* begrudges
analyst involvement.

And RMartin especially begrudges the *essential* deliverable of
Analysts - the high level, logical system design solution.

It's a pity the *sophistry* that BM applies, and it's difficult for
those not so familiar with him to see through it. That in part is
the great danger posed by RCM.

RMartin is often *conscious*ly just obscurative enuff, just enuff of a
smoke screen that he thinks is OK enuff to get away with not
presenting his actually more "ragged" views, especially given the
particular way an issue is raised or a question is presented. To get
away with obscurative, and/or only partial, presentation of his real
position, RCM resorts to:
i) heavy non-committal straddling
ii) pro-"ambiguation" of involved matters
iii) silence on this and that relevant point
iv) 1/2 truth vs the whole truth

All 4 of which, many of those new to OO are unaware RCM is practicing,
because they are not fully enuff aware of the specific issue BM is
addressing.

Typically RCM remains mum about what is left unsaid here in comp.object,
where straightforward presentation of his actual super-conservative,
penultimate-pragmatist, anti-theoretical notions, ideas and
solutions often quickly runs into hot water and he's exposed to the
newer OO crowd willing to buy his material and pay his speaker fees.

RCM's real deal comes out in his material and at the forums where,
due to the seminar producers, there are like-minded panelists and most
of those attending are just starting on the OO road.

Elliott
Richard MacDonald
2003-09-07 03:27:44 UTC
Post by Cy Coe
I think a certain
amount of upfront analysis, divorced from coding and other detailed
implementation work, is an absolute must in any non-trivial project.
Maybe you only spend a couple of weeks or so, but I think there
needs to be a buffer between first-cut conceptual work and
implementation.
If I were coaching a team who strongly held to this belief, I would
let them do it. We aren't talking about a huge investment; and the
time would not be wasted. However, I think there is a better way to
spend the time.
Imagine taking that same two weeks, and implementing a miniature
version of the system, all the way from inputs to outputs. This
miniature version wouldn't have to do much; but it *would* have to go
all the way through the system. If we were implementing a payroll
system, for example, it would perhaps have to print a single paycheck
from a single employee record. If we were implementing a catalog
website it might have to send a single true order from a database
with a single item, using a shopping cart with no options.
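The miniature end-to-end system described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not code from any real payroll system; the names (Employee, calculate_pay, render_paycheck) are invented for the sketch.

```python
# Minimal end-to-end "payroll" skeleton: one employee record in, one
# paycheck out, touching every layer (input, business rule, output) once.
# All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    hourly_rate: float
    hours_worked: float

def calculate_pay(employee: Employee) -> float:
    """Business-rule layer: gross pay for one period."""
    return employee.hourly_rate * employee.hours_worked

def render_paycheck(employee: Employee, amount: float) -> str:
    """Output layer: format a single paycheck."""
    return f"Pay to the order of {employee.name}: ${amount:.2f}"

def run_payroll(records: list) -> list:
    """The whole pipeline: records in, rendered checks out."""
    return [render_paycheck(e, calculate_pay(e)) for e in records]
```

The point is not the arithmetic but that every layer exists and is exercised from day one; features then grow inside a system that already runs end to end.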
I applaud the fact that your simulation model (that's how *I* would
label it anyway) would represent an end-to-end exploration of the
solution. I'd still rather explore the problem domain conceptually
and abstractly before plunging into full-blown solution mode. But I
recognize that we're speaking to each other across the
rationalist/empiricist divide.
Sure it's not the "plan to throw the 1st one away...you will anyway"
divide :-?
Phlip
2003-09-03 00:43:12 UTC
A good point. Analysis paralysis is definitely a product of
inexperience.
I have sufficient experience to stall any analysis session for any amount of
time you name.

Naw, let's try Prototype there. Naw, let's try Abstract Factory there. Naw,
let's try Proxy there. Naw, let's try Iterator there. Naw, let's try...

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces
H. S. Lahman
2003-09-03 15:55:38 UTC
Responding to Coe...

Off topic, but...
A good point. Analysis paralysis is definitely a product of
inexperience. Knowing when to stop is what separates the good
analysts from the hacks, especially in an iterative and incremental
process. And programmers do not always make good analysts (and vice
versa). The detail-orientation and focus on specifics that benefit a
good programmer can result in churn when doing analysis. They're two
different jobs, really.
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
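A minimal sketch of that exit rule, with invented names: the functional cases are written once, and only the harness adapter differs between executing the analysis model and exercising the final code. Both harness bodies below are trivial stand-ins, not a real model simulator or implementation.

```python
# One shared functional test suite; two harnesses. The "model" harness
# stands in for executing an OOA model simulator, the "code" harness for
# the production implementation. Names and cases are hypothetical.
FUNCTIONAL_CASES = [
    # (inputs, expected output) pairs drawn from the requirements
    ({"rate": 20.0, "hours": 40.0}, 800.0),
    ({"rate": 15.0, "hours": 10.0}, 150.0),
]

def model_harness(case):
    # Stand-in for driving the executable analysis model.
    return case["rate"] * case["hours"]

def code_harness(case):
    # Stand-in for calling the delivered code path.
    return case["rate"] * case["hours"]

def validate(harness) -> bool:
    """Run the shared suite through whichever harness is supplied."""
    return all(harness(inputs) == expected
               for inputs, expected in FUNCTIONAL_CASES)
```

Analysis is "done" only when validate(model_harness) passes; the identical suite later validates the implementation through code_harness.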


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
***@pathfindersol.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindersol.com
(888)-OOA-PATH
Ron Jeffries
2003-09-03 20:53:44 UTC
Post by H. S. Lahman
Responding to Coe...
Off topic, but...
A good point. Analysis paralysis is definitely a product of
inexperience. Knowing when to stop is what separates the good
analysts from the hacks, especially in an iterative and incremental
process. And programmers do not always make good analysts (and vice
versa). The detail-orientation and focus on specifics that benefit a
good programmer can result in churn when doing analysis. They're two
different jobs, really.
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
Yes, though apparently you are almost the only person in the universe (tm) who
knows how to do it this way. That's probably quite unfortunate.

I would suggest that XP's micro-cycles of a little analysis, a little design, a
little test, a little code, have much the same effect of limiting analysis
paralysis, and design paralysis to boot. The insistence on concrete feedback (GO
/ NO GO) on a short-term basis tends to keep us on track.

Also off topic, of course ...
XP is the only holistic software process in the universe. ;->
H. S. Lahman
2003-09-04 16:29:49 UTC
Responding to Jeffries...
Post by Ron Jeffries
Post by H. S. Lahman
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
Yes, though apparently you are almost the only person in the universe (tm) who
knows how to do it this way. That's probably quite unfortunate.
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criteria. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
***@pathfindersol.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindersol.com
(888)-OOA-PATH
Ron Jeffries
2003-09-05 02:47:18 UTC
Post by H. S. Lahman
Responding to Jeffries...
Post by Ron Jeffries
Post by H. S. Lahman
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
Yes, though apparently you are almost the only person in the universe (tm) who
knows how to do it this way. That's probably quite unfortunate.
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criteria. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.
All true. Yet I have never encountered a team that was working this way. Is the
technique currently limited to a narrow range of applications? Or still not
widely used? Or ... ?
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
jason
2003-09-05 12:28:51 UTC
Post by Ron Jeffries
Post by H. S. Lahman
Responding to Jeffries...
Post by Ron Jeffries
Post by H. S. Lahman
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
Yes, though apparently you are almost the only person in the universe (tm) who
knows how to do it this way. That's probably quite unfortunate.
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criteria. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.
All true. Yet I have never encountered a team that was working this way. Is the
technique currently limited to a narrow range of applications? Or still not
widely used? Or ... ?
Executable UML seems limited to a small number of realtime development
teams who have come from a Formal Methods background. Having said
that, the sheer amount of money being pumped into MDA research and
tools development, plus the recent rush by Microsoft, IBM and others
to hire MDA luminaries for their tools & technologies groups suggests
true MDA may not be that far away. It will still take quite some time
to filter into the mainstream (OOP took over 30 years, remember, and
where is AOP?)

Precise UML modeling, on the other hand, along the lines of Catalysis
or UML Components is enjoying an ever-widening audience. Mostly this
is done on paper or on whiteboards and is not about code generation or
xUML to any degree. Rather, it's about reducing the number of
requirements defects (errors, inconsistencies, omissions) finding
their way into the code. In that sense, adoption of a
quasi-Model-driven approach is very much on the rise.

Indeed, some of the techniques endorsed by Catalysis can be very
effectively applied in agile methods like XP and SCRUM - provided one
doesn't get bogged down in tools. The idea is not to deliver a model -
rather to ensure everybody who needs to understand the problem has
exactly the same understanding.

One area where MDA will undoubtedly be huge is in web services.
keeping schemas and code (in several different implementation
languages) in step manually is an absolute nightmare. Tying it all to
a single UML model and then automatically generating the XML schema
and the domain classes, serialization code and persistence code seems
the only reasonable option.

Jason Gorman
http://www.objectmonkey.com
Phlip
2003-09-05 13:49:20 UTC
Post by jason
One area where MDA will undoubtedly be huge is in web services.
keeping schemas and code (in several different implementation
languages) in step manually is an absolute nightmare. Tying it all to
a single UML model and then automatically generating the XML schema
and the domain classes, serialization code and persistence code seems
the only reasonable option.
Jason Gorman
Writing the same thing twice in two languages violates Once And Only Once,
so most Web applications, stitched together from, say, SQL, HTML, XML, Java,
JavaScript, etc., instantly violate this. If the folks writing them are not
trained to recognize OAOO violations, they won't try to defeat them early.

So the agreement here is that the facility to defeat those violations must
emerge very early and grow with the project. Some kind of code generation
(either light or off-the-shelf) would be ideal to reduce the number of
places you must edit to change a feature.
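As a toy illustration of that kind of light generation (field names and the type mapping are invented for the sketch), each field is declared exactly once and the redundant artifacts are derived from the single declaration:

```python
# Declare each field once; generate the SQL DDL and the HTML form from
# that one declaration, so a feature change is edited in one place.
# The schema below is purely illustrative.
FIELDS = [("name", "text"), ("email", "text"), ("age", "integer")]

def generate_sql(table: str) -> str:
    """Derive a CREATE TABLE statement from the field list."""
    cols = ", ".join(f"{col} {sqltype.upper()}" for col, sqltype in FIELDS)
    return f"CREATE TABLE {table} ({cols});"

def generate_html_form() -> str:
    """Derive a matching HTML form from the same field list."""
    inputs = "\n".join(f'  <input name="{col}" type="text">'
                       for col, _ in FIELDS)
    return f"<form>\n{inputs}\n</form>"
```

Adding a field means editing FIELDS once; both generated artifacts stay in step automatically.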

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces
H. S. Lahman
2003-09-05 19:21:57 UTC
Responding to Jason...
Post by jason
Post by Ron Jeffries
Post by H. S. Lahman
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criteria. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.
All true. Yet I have never encountered a team that was working this way. Is the
technique currently limited to a narrow range of applications? Or still not
widely used? Or ... ?
Executable UML seems limited to a small number of realtime development
teams who have come from a Formal Methods background. Having said
that, the sheer amount of money being pumped into MDA research and
tools development, plus the recent rush by Microsoft, IBM and others
to hire MDA luminaries for their tools & technologies groups suggests
true MDA may not be that far away. It will still take quite some time
to filter into the mainstream (OOP took over 30 years, remember, and
where is AOP?)
Translation grew up in R-T/E and is most commonly practiced there
because that unforgiving environment made good A&D a matter of survival.
However, it has nothing to do with Formal Methods per se. It simply
requires that one define the OOA model in an unambiguous fashion (i.e.,
one can't assume any A&D problems will be identified and fixed later).
IOW, it just /requires/ one to devote the same thinking effort necessary
for good A&D in any methodology rather than making it optional.

[One could have applied translation to SA/SD if the SA/SD models had
played together properly. OO uses essentially all the same diagrams as
SA/SD. OO just modified the SA/SD notations so that the various
diagrams were constructed under the same methodological umbrella to
remove inconsistency and ambiguity between views. However, OO's
emphasis on abstraction makes it relatively easy to separate the
concerns of the customer problem space (functional requirements) from
those of the computing space (nonfunctional requirements). Thus the
first widespread, general-purpose use of translation was in the OO arena.]

Though translation grew up in R-T/E, it is not at all limited to that
playground. Nor is it unique for R-T/E to pioneer software Good
Practices (such as modularity, behavior polymorphism, layered
architectures, block structuring and a bunch of other ideas we take for
granted today).

The reason the Heavy Hitters are now jumping on the bandwagon is because
(a) the technology has been proven through nearly two decades of use on
real projects and (b) it is an obvious next step in the advance of
automation in software development. The computing space is enormously
complex but it is also highly deterministic because of its ties to a
rigorous computational model. As a result it is ideally suited to
automation.

The inevitability of that evolution has already been clearly
demonstrated. Nobody programs with plug boards anymore and Assembly has
become an arcane specialty for 3GL compiler developers and some
unfortunate souls doing embedded development. RAD IDEs have automated
traditional IT UI-to-DB pipelines by employing translation to hide the
computing space details behind the IDE facade. Almost as soon as GUI
and web paradigms were defined the world was filled with builders that
automated the details behind WYSIWYG IDEs. Display and data storage
were automated first because they were the most narrowly defined
paradigms. Soon to follow were the layered model infrastructures like
COM+ and EJB that hid all the mundane details of transforming one
representation to another in Computer Land.

The technology for general-purpose translation has existed since the
early '70s. Given a well defined, consistent semantic meta model, like
an MDA profile, it is actually surprisingly easy to automate full code
generation from an OOA model. The main technical problem has been with
optimization. Because of the complexity of the computing space there
are a vast number of alternatives and the rules for choosing among them
to satisfy nonfunctional requirements can be very complicated. So
developing a translation engine for a particular computing environment
requires solving a much tougher optimization problem than a 3GL compiler
faces.

[For example, at OOP time a developer may need to implement a 1:*
relationship between classes. There several ways to do that but most
developers make the choice almost instinctively because they recognize
patterns in the collaboration characteristics (e.g. maximum number of
participating instances, average number of participating instances,
whether the participating instances are fixed, etc.) that dictate a
particular, optimal implementation for the relationship in hand.]
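A rough sketch of that implementation choice, with invented class names: the same logical 1:* relationship (one order, many line items) realized two ways, where the "right" one follows from the collaboration characteristics named above.

```python
# Two illustrative realizations of a 1:* relationship. Which is optimal
# depends on maximum/average participant counts and whether the
# participants are fixed. Names are hypothetical.
class LineItem:
    def __init__(self, sku: str, qty: int):
        self.sku = sku
        self.qty = qty

class OrderDynamic:
    """Unbounded, varying participants: a growable list is natural."""
    def __init__(self):
        self.items = []

    def add(self, item: LineItem) -> None:
        self.items.append(item)

class OrderFixed:
    """Small fixed maximum known up front: preallocated slots, the
    moral equivalent of a fixed-size array in a 3GL."""
    def __init__(self, max_items: int):
        self.items = [None] * max_items

    def add(self, slot: int, item: LineItem) -> None:
        self.items[slot] = item
```

An experienced developer picks between these almost instinctively; a translation engine has to encode that choice as explicit optimization rules.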

It generally took about a decade for a new procedural 3GL's compiler to
have competitive optimization with established 3GL compilers. It took
the '70s, '80s, and much of the '90s to provide reasonable optimization
in translation engines. So it is only recently that translation has
emerged from the Early Adopter stage as a viable, mainstream technique.
Now that it has, the Heavy Hitters are positioning themselves for the
next paradigm shift.

[Interesting aside. In the '90s there was a series of well-publicized
debates at various OO conventions between Steve Mellor and the Three
Amigos (in fact, Steve provided that appellation in the debates) about
whether translation was viable. Despite his contrarian position at the
time, Ivar Jacobson wrote the foreword in Mellor's '02 book. More
significantly, in his keynote address at UML World in '01 he predicted
that writing 3GL code would soon be as rare as writing Assembly is
today. (Someone with a more Machiavellian bias than I might be just a
bit suspicious about those debates.)]
Post by jason
Precise UML modeling, on the other hand, along the lines of Catalysis
or UML Components is enjoying an ever-widening audience. Mostly this
is done on paper or on whiteboards and is not about code generation or
xUML to any degree. Rather, it's about reducing the number of
requirements defects (errors, inconsistencies, omissions) finding
their way into the code. In that sense, adoption of a
quasi-Model-driven approach is very much on the rise.
The technology at issue here is different than Catalysis. For
translation there is complete separation of the customer and computing
space. In addition, Catalysis, though pretty formal, is still an
elaboration technique (i.e., the developer still produces the OOD).

The round-trip and reverse engineering tools would be closer to the
mark, but the deliverable at each stage is still produced by the
developer (albeit with substantial help by automating transfer of
semantic content across stages). Real executable models are only found
in the translation methodologies where the application developer
provides no added value to OOD/P.
Post by jason
Indeed, some of the techniques endorsed by Catalysis can be very
effectively applied in agile methods like XP and SCRUM - provided one
doesn't get bogged down in tools. The idea is not to deliver a model -
rather to ensure everybody who needs to understand the problem has
exactly the same understanding.
XP and SCRUM are development processes, not OOA/D/P methodologies. One
does the OOA/D the same way for translation as for the OOP-based agile
processes (which is another reason why Formal Methods are not
particularly relevant to executable UML). When one eliminates the need
for the application developer to touch OOD/P through translation, one
just changes where the conceptual work is done. In Robert Martin's
words, the OOA model then becomes the "code".

To put it another way, one can apply the practices of XP or SCRUM (e.g.,
TDD, test-first, YAGNI, etc.) to the OOA model within a translation MDA
profile in pretty much the same way as they are currently applied to 3GL
code.
Post by jason
One area where MDA will undoubtedly be huge is in web services.
keeping schemas and code (in several different implementation
languages) in step manually is an absolute nightmare. Tying it all to
a single UML model and then automatically generating the XML schema
and the domain classes, serialization code and persistence code seems
the only reasonable option.
MDA will be important in any arena where the implementation paradigms
are well defined and choices are deterministic (i.e., the entire
computing space).

[BTW, MDA is a much broader initiative than UML or even software
development. It is much closer to the sort of notion represented by
IEEE standards like 716 (ATLAS and related electronic test specification
standards). Among other things it is intended to support collaboration
between software and non-software (e.g., hardware) contexts, which
implies that the profile include a semantic meta model of the
non-software context.]


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
***@pathfindersol.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindersol.com
(888)-OOA-PATH
Marc Gluch
2003-09-06 14:49:04 UTC
Permalink
On Thu, 04 Sep 2003 22:47:18 -0400, Ron Jeffries
Post by Ron Jeffries
Post by H. S. Lahman
Responding to Jeffries...
Post by Ron Jeffries
Post by H. S. Lahman
Fortunately there is a more objective exit rule to eliminate analysis
paralysis -- provided one produces high quality OOA models. When the
OOA model executes correctly against the functional requirements, it is
done and not before. (One uses the same functional test suite to
validate the model as for the final code; only the test harness changes.)
Yes, though apparently you are almost the only person in the universe (tm) who
knows how to do it this way. That's probably quite unfortunate.
Not quite the only one. There are at least five formal methodologies
(S-M, xtUML, MBSE, ROOM, and ROPES) for which successful model execution
is the exit criteria. Not to mention that OMG's MDA initiative is
largely focused on standardizing a semantic infrastructure for it.
All true. Yet I have never encountered a team that was working this way. Is the
technique currently limited to a narrow range of applications? Or still not
widely used? Or ... ?
According to your own earlier statement,
you haven't been on a RUP project.

Does that mean hardly any projects use RUP? Or...?

Marc Gluch
Post by Ron Jeffries
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Cy Coe
2003-09-03 20:37:50 UTC
Permalink
And programmers do not always make good analysts (and vice
versa). The detail-orientation and focus on specifics that benefit a
good programmer can result in churn when doing analysis. They're two
different jobs, really.
That depends upon your definition of analyst. If you are talking
about someone who analyzes customer needs and designs features to meet
those needs, then I agree. If, on the other hand, you are talking
about someone who analyzes features and designs software structures
and object models that will implement those features, then I disagree.
Then we find ourselves in agreement. I was using your first definition.


Cy
Mrs Coates
2003-09-04 03:06:28 UTC
Permalink
To clarify from over a decade, a summary *after* the fact of system
construction.
Or more important, a summary that is rarely if ever used to *lead* *overall*
system *deployment* (implementation, including coding)--especially for large
or complex projects, or systems. This is *counter* to the methodology
practiced by at least the plurality - as it seems from industry literature,
seminars, case studies, surveys, etc - of OO engineers.
Ceaselessly presenting this question in a manner that religiously avoids
confronting the fact I raise here, a fact that RCM is well aware of after
reading myself and others over the years, unquestionably represents an
"opportunistic dodge" on his part. This "dodge" betrays RCM's visceral
understanding of the truth of my "fact" above.
Only that can explain why RMartin fails to deal with the truth that I raise
here; that explains RMartin's undying dedication to "code is design",
principally using refactoring and test first for nearly all design
adjustments, great and small. Only that explains why he sees coding as the
key lever rather than iterative and incremental development (IID) analysis
and architecture that handles at least all key factors involved at each
critical milestone.
Of course nickel and dime, piecemeal dedication to "code is design"
(principally using refactoring and test first for nearly all design
adjustments, great and small) is the larger foundation of hackery (in any
and all fields, businesses, areas, domains, sciences, etc).
Elliott
Now now Elliot stop bothering these nice people and come and take your
medication. Luv Nora.
Covad News
2003-09-04 03:30:00 UTC
Permalink
Ohhhh, jeez. Get a grip, or better, get the point and respond to it like a scientist and engineer who disagrees using rational argument, not like a clueless idiot with only an emotion.

P.U.!!!!!!!
[snip]
Covad News
2003-09-04 03:31:28 UTC
Permalink
[snip]
Ron Jeffries
2003-09-04 11:37:13 UTC
Permalink
Post by Mrs Coates
Post by Universe
Elliott
Now now Elliot stop bothering these nice people and come and take your
medication. Luv Nora.
Now he is going to call you "Nor", for dropping one of his t's. ;->
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
XP is the only holistic software process in the universe. ;->
Bob Hathaway
2003-08-24 06:39:18 UTC
Permalink
Hi Ron,
Post by Ron Jeffries
On Fri, 22 Aug 2003 13:15:30 GMT, "Greg Chien"
[...]
I didn't express myself clearly. I meant "Inexperienced teams using RUP" vs
"Inexperienced teams using XP" as the two kinds of teams. Ralph reports better
results with XP.
I recall Robert Martin in a talk on XP said Pair Programming worked in all
combinations but one, inexperienced with inexperienced. He said EE and IE are
OK but that II is bad. I'd have to agree; I've worked with very junior people
on many occasions to help bring them up to speed and provide
guidance/tutelage (EI), and two inexperienced programmers (II) is like
the blind leading the blind.

If the above says that a lot of inexperienced people should be using XP, are
they referring to practices besides pairing?

Thanks & Regards,
Bob
Ron Jeffries
2003-08-24 12:20:58 UTC
Permalink
Post by Bob Hathaway
Post by Ron Jeffries
On Fri, 22 Aug 2003 13:15:30 GMT, "Greg Chien"
[...]
I didn't express myself clearly. I meant "Inexperienced teams using RUP" vs
"Inexperienced teams using XP" as the two kinds of teams. Ralph reports better
results with XP.
I recall Robert Martin in a talk on XP said Pair Programming worked in all
combinations but one, inexperienced with inexperienced. He said EE and IE are
OK but that II is bad. I'd have to agree; I've worked with very junior people
on many occasions to help bring them up to speed and provide
guidance/tutelage (EI), and two inexperienced programmers (II) is like
the blind leading the blind.
If the above says that a lot of inexperienced people should be using XP, are
they referring to practices besides pairing?
Not everyone agrees with Bob on this. I share his concern ... certainly it is
the combination of least "horsepower". On the other hand, people point out,
rightly, that EI pairings may result in the I partner just reading along, not
contributing much. Or, it can turn into a mentoring session, perhaps good for
the I but again not helping E much.

The II pairing lets them move more at their own pace, and to build up skill in
the giving side of pairing, not just the getting. Many people find that a
judicious amount of II pairing is effective, so Bob's and my fear is perhaps
just fear.

Since Ralph said he was having these teams do XP, I would imagine that they were
doing enough of the practices to make him feel confident in using the term. I
would think that what makes the teams successful is that by the nature of XP
they try a lot of things, they get quick and solid feedback on how it works, and
they focus on delivery.

Those are good things for any team. For a team with little experience in getting
to a delivery, Ralph reports that it works better than RUP. I'm not surprised.

Regards,
--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.
Phlip
2003-08-24 13:36:34 UTC
Permalink
Post by Bob Hathaway
I recall Robert Martin in a talk on XP said Pair Programming worked in all
combinations but one, inexperienced with inexperienced. He said EE and IE are
OK but that II is bad. I'd have to agree; I've worked with very junior people
on many occasions to help bring them up to speed and provide
guidance/tutelage (EI), and two inexperienced programmers (II) is like
the blind leading the blind.
Well, that's what school is for.
Post by Bob Hathaway
If the above says that a lot of inexperienced people should be using XP, are
they referring to practices besides pairing?
To interpret Ralph accordingly, II pairing + other XP practices still sucked
less than III + RUP.

--
Phlip
Uncle Bob (Robert C. Martin)
2003-08-24 21:09:29 UTC
Permalink
Post by Bob Hathaway
I recall Robert Martin in a talk on XP said Paired Programming worked in all combinations
but one, inexperienced with inexperienced. He said EE and IE are Ok but that II is bad.
The scenario I talk about is two novices pairing exclusively, and not
pairing with the others. The coach should encourage them to pair with
some of the more experienced folks too.



Robert C. Martin | "Uncle Bob"
Object Mentor Inc.| unclebob @ objectmentor . com
PO Box 5757 | Tel: (800) 338-6716
565 Lakeview Pkwy | Fax: (847) 573-1658 | www.objectmentor.com
Suite 135 | | www.XProgramming.com
Vernon Hills, IL, | Training and Mentoring | www.junit.org
60061 | OO, XP, Java, C++, Python | http://fitnesse.org
Universe
2003-08-23 18:44:27 UTC
Permalink
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
There are exceptions to the rule and this is one of them. It just doesn't
make *any* sense whatsoever to suggest that those typically lacking in
knowledge and practice with the software development lifecycle (SDLC) would
do best, *on their own*, in any scientific, rational meaning of the word
"best", adopting XP. Not with XP usually being "do your own/pair design/code
thing": no all-around documents to inform or track, focusing upon code
normalization (refactoring) as the primary means to design code and to
rapidly adapt existing code to key feedback changes (rather than code design
and feedback change mainly derived from iterative and on-going subsystem and
system-wide planning led by on-going, iterative analysis [requirements
discovery, investigation and logical solution]).

XP being best for novices on their own for multi-year, millions-of-dollars,
minimum 15-to-50+ person projects, etc and so on. It's ludicrous! On the
face of it. "C'mown, now!" Shhhhooootttt!

:-}

Elliott
Cy Coe
2003-08-21 12:11:49 UTC
Permalink
Post by 4Space
[snip]
There is one type of team, however, for which XP works much, much
better than RUP. So much better, in fact, that I stopped allowing
those teams to use RUP, even though they often wanted to. These are
the teams with little programming experience.
[snip]
That certainly is interesting. I've always found that amongst non-XP'ers the
perception is that only experienced programmers would succeed, where less
experienced programmers would become hackers.
Brute-force incrementalism is probably a safer choice for a completely
inexperienced team than an approach based on forethought,
conceptualization and big-picture thinking. If you really don't know
what you're doing, then by all means take baby steps and use
trial-and-error. Once you have the experience, you can start to
leverage it by being less reactive and more proactive.

Hindsight is *not* more valuable than foresight, merely easier.


Cy
Ilja Preuss
2003-08-21 15:08:32 UTC
Permalink
Note: I tried to access Johnson's home page, but uiuc.edu seems to be
unreachable (or crawling) at this moment :-(
The original thread is archived at
http://groups.yahoo.com/group/extremeprogramming/message/77775 - perhaps
there is some more information for you.

Regards, Ilja