ROGER: I think the flaw in your scenario is that the Ada project's
development process did not include prototyping, incremental or
evolutionary development, and the resulting earlier user (or customer)
usage & feedback. That was wrong, or at least a poor project/management
decision.
There is absolutely nothing about Ada that requires 1970's style
"waterfall" development ("no coding before CDR") and DOD-STD-2167A style
documentation. When Walker Royce defined his Ada Process Model in the
mid-1980's he (semi-facetiously) used to say "No coding after PDR!" What
he was really saying is that the best design review for critical aspects is
conducted over increasingly complete, running prototypes including main
architectural elements and at least glimpses of the most critical/important
(or risky) functionality. The functionality increases with successive
prototypes, as assessment of risks and user needs evolves based on feedback
& evaluation of the earlier prototypes; ideally, in this paradigm, the easy
stuff is then done last -- except that in reality a certain portion of it is
necessary to adequately demonstrate the essential objectives of each
increment. Still, it's accurate to call this "risk-driven."
Note that "risk" can and often does include uncertainty about
actual user requirements, not just perceived tough-to-implement
capabilities. Also note that in these processes, there is no single
earth-shaking demarcation such as the old-fashioned PDR & CDR used to be;
some capabilities are "past CDR" while others aren't even to the stage of
"frozen" requirements. I could view this as many small "waterfall-ish"
cycles (one per increment or per software component, with reviews &
evaluations for each, if not actually separate prototypes or demos for each
one individually -- we have several units with noticeable evolution per
demo/delivery) cascading across the system development lifecycle, but even
that is a simplification because it ignores the over-arching management
risk-driven perspective that defines the partitioning & scheduling of each
unit, capability set, demo, etc.
Of course, there need be no particular coupling between PL choice and
development process model. Stating your scenario to imply that Ada is
coupled to a waterfall-ish process and C to an incremental or evolutionary
process masks the real issue, which is that incremental/evolutionary
processes rule today for most developments. Here at TRW we still have
several large Ada
projects going, and virtually every one of them is an evolutionary
development process with very early executable capabilities submitted to
user/customer evaluation. Ditto for our C/C++ projects (& mixed-language
projects). I am talking incremental evolutionary "deliveries" scheduled at
intervals averaging between 10 weeks and 10 months depending on project,
customer desires, access to users, and other factors.
Please do not read my descriptions of incremental/evolutionary processes as
anti-software-engineering. Selecting the right development process is part
of software engineering. Making judicious use of PL features vs "hacking"
is almost totally divorced from what I have described above. And, very
good code can be produced in many languages, not just Ada; and that code
could be "maintained" almost as efficiently as Ada code if things were done
right during development. (But of course I believe Ada increases
productivity & quality relative to other languages & their tools! Some
organizations, including some TRW projects, make the needed investment to
do it right in other PLs, and believe me, in systems houses building large,
long-lived embedded systems, the difference in cost due to PL choice is not
substantial as a percentage. When you're bending metal as part of your
delivery, e.g., the satellites TRW builds, software costs are often a small
minority, and it is not hard to justify budgeting what the smart s/w manager
requires. Personally, I think we over-sold the productivity angle of Ada
in the early days, and managers and doubters have not seen compelling
statistics to bear that out on a system-wide basis. Ada provides an
"edge," more so for high reliability, RT, etc. systems, and that's enough
to keep Ada viable and thriving, IMO.)
Roger Racine wrote:
>I have an interesting anti-Ada argument that I am having difficulty
>refuting. Any help?
>The argument goes like this:
>Project A uses Ada. Project C uses C (use C++ or Java if you like).
>Project A uses good Ada development process and spends a lot of effort up
>front to make sure maintenance will be easy. Project C starts coding
>immediately, and documents the design "later" (i.e. not at all).
>By the time Project A is ready for a detailed design review, they have
>thousands of pages of design documentation, they have done walkthroughs on
>everything, and they have spent a good deal of money. By this time,
>Project C has had a number of demonstrations, has a good deal of problem
>reports (due to the usual C pitfalls), and has made a few major design
>changes based on the early demonstrations to the customer.
>At Project A's design review, the customer sees a major problem in the
>basic design. There were interpretation problems with the requirements.
>The customer says they need the problem fixed. The developer says: "That
>will cost $10M. We have to update thousands of pages of documentation, go
>through all those walkthroughs again, etc."
>At Project C's design review, it is less likely that this will happen
>because the customer has been seeing the system being built. But even if a
>major design change is needed, Project C's cost will be much lower to make
>the change.
>I don't think it is sufficient to simply say "The money will be made up
>during maintenance." While probably true, the initial cost overrun might
>cause the program to be canceled. And the total cost, while possibly
>higher for the C case, is likely to be more deterministic (you know how
>many bugs are likely, but it is much more difficult to tell how many major
>design problems will occur).
>Draper Laboratory, MS 31
>555 Technology Sq.
>Cambridge, MA 02139