TEAM-ADA Archives

Team Ada: Ada Programming Language Advocacy

TEAM-ADA@LISTSERV.ACM.ORG

Roger Racine <[log in to unmask]>
Thu, 10 Jun 1999 14:31:50 -0400
At 06:14 PM 6/9/1999, Samuel Mize wrote:
>Greetings,
>
>I hope you won't feel I'm slamming on you in the following, but
>it seems best to speak clearly and bluntly.

I do not feel slammed on at all.  I have trimmed the quoting a bit to
keep this from getting too long.  I also hope I have addressed Mark
Lundquist's questions in this message.

>In your analysis I see three major errors:
>1. You are still confusing language with method.

I don't think so.

>2. You are missing Ada's advantages for prototyping.

Now you (and others) are getting away from the common method (incremental
development).

>3. You are strongly mis-stating the probability and cost of different
>   kinds of errors.

I need data.

>I'll go into more detail.

Me too.

>1. CONFUSING LANGUAGE WITH METHOD
>
>The difference between Ada and C in no way requires a disparity like
>65% versus 20% to get to detailed design.  You are still comparing a
>heavily front-loaded, waterfall approach with an incremental,
>prototype-driven approach.  I'm sorry, but you can do either in Ada.

No, I am not comparing waterfall with incremental.  I am simply assuming
-standard practice-: incremental development in both cases (no
prototyping).  I do not know if it is a myth, but there is certainly the
perception that simply getting to a well-designed first iteration of a
given set of functionality takes longer with Ada than with other
languages.  The 65% and 20% were made up so that the end costs come out
very close, but they are not far off from what I heard back in the early
80s when I started learning the language, and I have never seen any data
stating the opposite.  I have now used Ada for about 17 years (yes,
prior to the first ANSI standard), and it does cost more up front, in my
experience.

>The C project's "detailed design" was a set of viewgraphs and a
>rapidly-coded, loosely-designed prototype.  You can easily build such
>a prototype in Ada.  You've already identified the major techniques:
>use predefined types, don't modularize into packages much.

Project C's code was not considered a prototype; it was the first
iteration of the finished product.  And I said to forget about the
documentation aspects: the 65% / 20% figures are not supposed to include
documentation, just software design and code.

>With either language, the next step would be to take the
>prototype code and reorganize it into a final design, then proceed
>with final coding.  (Or just finish off the prototype.  This is not a
>snide comment.  For something like aircraft avionics, that would be
>phenomenally stupid, but for a one-delivery commercial system it
>might be just the ticket.)

Both cases were iterative development, so prototyping is really off the
subject.  You state that the first iteration of the Ada development
could have used what would be considered horrible practices (predefined
types, un-modular code).  Others have pointed out (good job!) that Ada
still checks for errors like null-pointer dereferences and
array-constraint violations even in such code.  But I would argue that
this is not -standard practice- on Ada projects (although, looking at
some open-source Ada software, I might be open to persuasion :-) ).
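
To give that point its due, even the quick-and-dirty style keeps the
run-time checks.  A minimal sketch (hypothetical code, not from either
project):

  with Ada.Text_IO;
  procedure Quick_Proto is
     --  "Horrible practice": predefined types, no packages.
     Readings : array (1 .. 10) of Float := (others => 0.0);
  begin
     for I in 1 .. 11 loop
        Readings (I) := Float (I);  --  raises Constraint_Error when
     end loop;                      --  I = 11; the index check fires
                                    --  even in throwaway code
     Ada.Text_IO.Put_Line ("never reached");
  end Quick_Proto;

The equivalent C would likely scribble past the end of the array and
keep running.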

Compare Ada textbooks with C books.  A quick quote from Barnes: "But a
big difference is the stress which Ada places on integrity and
readability".  Does the language itself stress those qualities?  Not
really.  But the language certainly -allows others- to stress those
qualities.  One of the best books on C is a reference manual, and it
never mentions design issues.

So Ada software engineers are taught from the start that they are
supposed to write readable, modular programs.  C coders are taught the
language.  Don't say "The Ada people could have done it the same way as
the C people."  That is not the way anyone teaches them to do it.

>Either approach is easy in Ada.
>
>If you don't clean up the design, you don't get Ada's
>maintainability/reliability advantages, but you lose very little.
>You still get a lot of error checking.  This can be turned off in
>the final system, but it will typically reduce your test time a
>lot more than it increases your coding time.
>
>If you do clean up the design before going on to coding (or during
>coding), Ada is a clear winner here: with a running prototype as a
>baseline, you can incrementally introduce typing and modularity.  Cut
>apart the "subsystems" (sets of related packages), get their design
>well defined (package specs and types), re-integrate them.  You may
>want to do this one, or a few, subsystems at a time, re-integrating
>before cutting out more subsystems.  Once you've partitioned your
>design properly, you can pretty much just finish out the code.
>

In my experience, it is very difficult to go back and clean up working
code.  It happens sometimes (in fact, it happened in Project C to some very
limited extent), but it is extremely difficult to justify.
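
For concreteness, here is the sort of cleanup being proposed (borrowing
Movement.Forward from your example; the unit types are my invention):

  package Movement is
     type Meters            is new Float;
     type Meters_Per_Second is new Float;

     procedure Forward (Distance : in Meters;
                        Speed    : in Meters_Per_Second);
  end Movement;
  --  Once this spec exists, any client still passing a raw Float
  --  fails to compile, and the compiler walks you through the rest
  --  of the cleanup.  For delivery, the run-time checks can go:
  --  pragma Suppress (All_Checks);

The work is mechanical enough; my point is that justifying the time for
it to a project manager, once the code already works, is not.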

>And your work product really gives you a lot of information about the
>design approach it embodies.  I find that significant design problems
>-- the kind of thing that can make you start over -- surface a lot
>earlier in Ada.  And, you aren't as tempted to "paper over" a problem
>with a kludge that will grow as the system grows.

Why?  This would be an excellent argument (if true :-) )!  Note that I
am not accusing you of lying, just that it would be easier to convince
others if there were constructs of the language that could be pointed to
that A) do not take any longer to use; and B) find design problems
quickly.  Note also that (as others have pointed out) a small sample of
experience does not make a good argument.

>My experience in Ada is that you can build up some very useful
>support packages for prototyping and preliminary design.  Even a
>simple TBD package will give you a lot of help.  Consider:
>
>  package TBD is
>    procedure Statement (S: String);
>  ...
>
>This is used like:
>
>  case User_Input is
>    when Forward =>
>      Movement.Forward;
>    when Backward =>
>      TBD.Statement ("move backward");
>    when others =>
>      TBD.Statement ("not handled by prototype");
>
>Depending on what you are doing, the body of TBD.Statement can print
>out the string, or just raise an exception.  Later, you can remove
>TBD from your library, and the compiler will tell you if you've left
>any TBD items in your code.
>

Reasonable in any language.  In C, it might require the linker to tell you
about the unresolved reference, but most would consider that a minor
difference.
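
For reference, a plausible body for your TBD spec (assuming the spec
declares nothing beyond Statement; your "..." may hide more):

  with Ada.Text_IO;
  package body TBD is
     procedure Statement (S : String) is
     begin
        Ada.Text_IO.Put_Line ("TBD: " & S);
        --  or, for prototypes that should stop hard:
        --  raise Program_Error;
     end Statement;
  end TBD;

Printing suits demos; raising suits test runs that should fail loudly.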

>Finally, most Ada compilers these days provide tools that obviate the
>use (and development and maintenance) of "make" scripts.  For
>example, GNAT provides gnatmake, which will rapidly and certainly do
>all the compiles you need to get an up-to-date executable, and only
>the compiles you need.  The tool that does this for you when using C
>is an intern, if you're real lucky.

It is really called a makefile, but one can use the intern to keep the
makefile up to date.  And the makefile is used for many things other
than compile and link dependencies (checking the source out of the
version control system, moving executables into their final location,
etc.).  This argument is typically greeted with "ho hum", but I agree it
is part of the argument.

>3. THE PROBABILITY AND COST OF DIFFERENT KINDS OF ERRORS
>You said:
>> My use of the word
>> "develop" was in reference only to design and code.
>
>This is the real Achilles' heel of the argument.  "End of coding" is
>a meaningless milestone for comparing costs.  Let's ignore maintenance.
>To get to the first delivery, you must design, code, TEST and DELIVER.
>
>(But before I entirely discard maintenance, I will mention that it is
>MUCH easier and cheaper to handle that first wave of trouble reports
>if you have a coherent, modular design, and if it's well expressed in
>the code.)

So you agree that you have been brainwashed into the Ada mentality (as have
I)!!!

>> The argument is completely with the -risk of cost growth-.
>First, let's pick apart your detailed worst-case:
>> For example, to get to a detailed design review, let's say 65% of the work
>> is done in the Ada case, but only 20% in the C case.
>
>Here you're swapping methods again.  If you're doing waterfall-style
>development, you don't even start coding until after preliminary
>design -- if then -- so language choice is not relevant.
>

I said "detailed design review", not "preliminary design review".  In an
iterative process, much of the coding has been done by the time the first
iteration of detailed design is complete.

>> In the simplest
>> case, where a design flaw is found requiring starting completely over,
>In this case, the management team and top-level technical staff
>should be summarily fired.

I can think of 2 major Ada projects where something like this happened
(the starting over, not the firing of the management and top-level
technical staff).  I imagine others can, also.  Some very capable people
worked on them, and I disagree that interpretation differences should be
cause for firing (if they were, everyone I know would have to be fired;
English is not the best language for expressing requirements).

>I honestly can't imagine a case where something got to a detailed
>design review, and then required scrapping the entire work product.
>Even if there's a major conceptual problem, big chunks of the
>requirements analysis, design and code should be reusable.  And in
>stripping out an existing analysis and design and rebuilding it, I
>would far, FAR rather start with something that was made in a modular
>fashion, or that at least would let me partition it during the
>rebuild.

You are refuting your own previous argument, where you said it was OK to
start out with un-modular coding.  But that is OK, I did not buy the
previous argument. :-)

Look at it from the customer's point of view.  It took longer to get there
and will take longer to get it fixed, for every design flaw.

>Granted, I have no experience with a program that was this screwed
>up, but I DO have experience upgrading systems well beyond their
>original design, ripping out some things and adding new elements.
>Ada makes this much, MUCH easier.

I will not argue with this.  Not relevant.  In both cases the customer
-thinks- the system is close to firmware, so upgrades are not likely (of
course the customer is crazy, but many think this way).

>So the real numbers for your worst case might be that the Ada program
>spent 25% instead of 20% of its budget, and will spend 80-90% of the
>original budget instead of 90-100% to start over.

My number was 65%.  Where did you get 20%?

>Now, let's look realistically at the -risk of cost growth-.
>> However, for the C case, my risk
>> of cost increase is very low, because there are many ways to use metrics to
>> estimate the number of errors that will be put in during coding.
>
>This is false, in the public domain.  (That is, not patently.  :-)
>
>For the typical project, it may be fairly accurate.  But for the
>typical project, we don't scrap everything through detailed design
>and start over.  Let's compare either typical or worst-case scenarios.
>
>1. For normal development, your C project's risk of cost increase is
>higher, because you will tend to discover problems later.  That's the
>whole point to strong typing and run-time checks: you don't find out,
>late in testing, that you've missed something.  Sometimes you find
>out that you left out a parameter, but sometimes you find out that
>you missed a significant point that affects design, or even your
>understanding of the requirements.  (Prototyping reduces this risk,
>equally in either language, but it doesn't eliminate it.)

Sorry, but I have to disagree.  The common pitfalls of coding, at least in
C, are very well known (there are books on the subject).  Therefore, in a
large project, one can anticipate the cost of finding and correcting these
pitfalls.  Note that the SEI's CMM is very good at using metrics to help
anticipate the true end cost.

>It's documented -- and just common sense -- that the later you find a
>design or requirements error, the more it costs to fix.  You have
>more work to redo, and if it affects other parts of the system, you
>have to redo them too.

This is documented.  But is it relevant?  The common errors using C are
-coding- errors, not design or requirements errors.  It is true that
coding errors are more likely to get into integration than with Ada, but
the cost of fixing coding errors is well known and can be estimated
reasonably well.  So, again, the risk of cost increase is low.

>Not only that, errors found in late coding or testing have the worst
>impact on final schedule.  If we scrap the design, we may be able to
>work hard and smart and deliver close to schedule.  If we find a
>major problem two weeks before delivery, how likely are we to be able
>to rework the whole system in time?

I need some examples here of projects where this happened.  It has not
happened in my experience, in any language.  Except for the time TeleSoft
told me they would have a complete Ada compiler in a month, and it came out
a year later and was useless for the next year.  This was in 1982.  So I
will revise my statement and say that it has not happened in any well-run
software project not driven by commercial time-to-market constraints.

>Since Ada surfaces many conceptual errors earlier -- often during
>design-time compilations, even in a strict waterfall development --
>you are reducing your risk of cost growth by using it.

Here is one of those English words that could be misinterpreted:
"conceptual".  How is Ada going to surface errors in the concept of the
system?  If the system is a guidance system, how is its accuracy
affected by language choice?  How is the CPU utilization determined?
These, to me, are the conceptual errors in a software system.

Ada is very good at surfacing -coding- errors early.  Are my interfaces
correct?  Are implementation details hidden to avoid dependencies?
These are not conceptual errors from the customer's point of view.
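
To make the distinction concrete (hypothetical names):

  package Guidance is
     type Degrees is new Float range -180.0 .. 180.0;
     procedure Set_Heading (Heading : in Degrees);
  end Guidance;
  --  A client passing a raw Float is rejected at compile time; that
  --  is an interface error surfaced early.  Whether -180.0 .. 180.0
  --  is the right range for the mission, or whether the guidance
  --  algorithm meets its accuracy requirement, is a conceptual
  --  question no compiler can answer.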

>And, if you used Ada's facilities for modularity, you're likely to
>get off requiring fewer changes to less code.

I agree with this, but it is hard to quantify.

>2. For a worst-case scenario, let's look at a problem found in late
>testing that turns out to have ramifications throughout the program.
>Changes are going to run through the code like cancer, and you have
>almost no time to do it.
>
>In the first place, as I said, this is FAR less likely with Ada.
>
>In the second place, you're more likely to have modularity "firewalls"
>isolating the problem, making its solution faster and easier.
>
>In the third place, such changes are much easier to make when the
>compiler is telling you what you've just made obsolete with each change.
>
>Historically, this is much, MUCH more likely to happen, to the extent
>that a delivery delay announced just before the expected delivery date
>is hardly remarked on in parts of the commercial software world.
>And, this kind of problem can cause a huge cost increase.
>
>>With the
>> Ada estimate, the risk of cost increase is higher, due to the extra cost
>> associated with the design work associated with a well-designed Ada
>> program.  This cost is related to higher-level design problems.
>
>Again, here you're assuming that only the C project is prototyping
>and communicating effectively with the customer.  If the other
>project is charging forward with tunnel vision, it will have this
>risk, whether or not it's using Ada.
>

No, I am not.  I am assuming that the cost of the C project is spread
more evenly across the lifecycle.  That is all.  So the cost of fixing a
problem found in the same phase in which it was introduced is more even.
For C, a problem introduced in the design phase will cost less to fix
(if it is found during design reviews) than a similar problem found in
an Ada design, simply because standard practice is to spend more time
working on the Ada design.

Coding errors are more likely in C (any arguments?).  However, coding
errors are relatively easy to estimate, compared to requirements or
design errors.  The CMM is built around good estimation of defects.
Since there will be fewer requirements-interpretation errors than coding
errors (I hope we can agree on this, for any language), there is far
less data to base an estimate on, so it is difficult to estimate these
types of errors.  Therefore, they are more likely than coding errors to
cause problems for the schedule.  Who wants to go to a customer and say
"I am assuming I will misinterpret your requirements 10 times"?  No one
has a problem saying "Integration and testing will take 10 months,
assuming the normal number of coding errors".

So, for Project C, a design flaw (a performance problem) took a month to
fix.  For Project A, their design flaw (an over-designed interface) was
a major problem.  If Project A had had a problem similar to Project C's,
it would have taken more than a month to fix, simply because of the
extra work it takes to write good Ada software.

>The extra effort up front in a WELL RUN Ada program is insurance AGAINST
>cost increases, later in the project.
>
>I hope you find this of interest, and worth considering.

This is getting to be helpful.

Myths will not go away by calling them myths.  We need to prove them
wrong ("I don't have to; I am in the majority," say the C people :-( ).

Roger
Roger Racine
Draper Laboratory, MS 31
555 Technology Sq.
Cambridge, MA 02139
617-258-2489
