> My story would go in a similar vein but the second team did get the
> program built in one week and it did work 'OK' and it made management
> very happy. I was placed on the list for the next lay-offs and they
> were promoted.
I'm sure these stories abound. In my own equivalent, the other team's
code was delivered to the customer in half the time, and I *was* laid
off. Their code was "mostly working".
> If you can spit working binary out your left ear then you are
> management's pet.
Management always seems impressed with this.
> Then, three months later they wanted to use it as the basis of a new
> product and nobody could read it much less figure out how it worked.
> Project scrapped.
In my case, the customer was *not* impressed: The team kept going back
to them with patch after patch. A few months later, the company folded
and no longer exists.
But the point of my original email was not to elicit these stories
(satisfying as they are); instead, it was an attempt at understanding
the phenomenon. Why does "the other team" always seem to exist? Earlier, I wrote:
> the developers either 1) don't see their software as being sucky --
> a training/competence issue -- or 2) they tolerate suckiness in order
> to meet deadlines, ie, they accept the development schedule along with
> all the other requirements their product must meet.
The common thread in our stories is that we reject the premise of #2:
we refuse to accept a deadline that will induce sucky software. But
the "other team" does accept it. Are their standards lower than
ours? When I talk to these developers, I get a different perspective:
they see themselves as "pragmatic", "getting the job done", "not a
blue-sky academic", etc. Our view is that "suckiness is intolerable and
the deadline is unrealistic"; their view is that "flawless software
delivered late is sucky." They have made a tradeoff that to us is
abhorrent: some degree of suckiness is tolerable versus missing the
deadline; or more simply, some degree of suckiness is always tolerable.
I really don't like arguing this point, but I think it's a reality
that we in the Ada community tend to be blind to. We seem to think
(hope) we can eradicate suckiness, via better tools and environments,
better training and curricula, better processes and methods, and yes,
by using Ada versus other languages. But if all these were in place,
would suck-inducing deadlines go away? I think not; they would only
get shorter :-(
Am I being overly pessimistic here? Is there some Utopia we can aim
for, where sucky software is no longer built? Or, are we just being
unrealistic in hoping so, since some degree of suckiness will always
be tolerable no matter how things improve? I ask this from a software
engineering maturity perspective, subsuming Ada in the question: would
our profession be better served by learning to cope with suckiness
(rather than just disdaining it), making its assessment an overt part of
the engineering process? This seems to be an area that academia doesn't
want to address, and industry doesn't want to admit to (but that maligned
managers seem to accept). For starters, answering these questions
will force us to define just what "suckiness" is: as indicated above,
the view of the "other team" already disagrees with our view.
C. Daniel Cooper ==========v=======================.
Adv Computing Technologist | All opinions are mine |
206-655-3519 | and may not represent |
[log in to unmask] | those of my employer. |
The question is not "What is the answer?"; rather, |
the question is "What is the question?" --Poincare |