>> It takes less time in Ada to make a *working* product. One that pretty
>> much does just about all of what it's supposed to. But one that
>> sorta kinda appears to work if you don't look closely - good enough for
>> release to a non-discerning public used to Microsoft's lack of quality -
>> that can be done quicker in C, C++ etc.
>You've not said why you think this is inherently so.
Thinking about it, it's not inherently so, but is so in practice.
Programmers who use Ada *tend* - sometimes after many years, but usually only
a few months - to be Software Engineers rather than programmers. It becomes
second nature to define subtypes rather than always use unconstrained Integers,
to check for exceptions and take corrective action. They design first, rather
than start cutting code from day 1. In fact, it becomes such a habit that after
a while, it's more difficult to write "careless code" than it is to write robust code.
In C, it's far harder to test for all the possible, or even all the likely,
errors. When programming in C (or Java), it's almost impossible to avoid "crossing
one's fingers and hoping for the best" at every third or fourth line. Yet it's
really quick to knock something together that will work providing everything
goes well, and you don't stress it by checking boundary values etc. I know that
even I often let the returned values of C etc. procedures drop on the floor
unexamined.
What this means is that in practice, a quick n dirty module can be coded quicker
in C. And with fewer lines. It won't have the same functionality, it won't be
robust, and it's bound to have horribly-difficult-to-detect index over-runs etc
etc. But it takes up fewer lines (because it does far, far less..).
Now you can, in theory, do the same sort of thing in Ada. Everything's an INTEGER
or a FLOAT. Use UNCHECKED_CONVERSION willy-nilly, even GOTOs. But to do this
really goes against the psychological grain; in my experience, I just can't do
it without taking more time than it would take to do it right.
So Ada - either 83 or 95 - doesn't force you to do it right and provide 100%
of the functionality rather than just the 60% that's immediately visible. But
having the additional power available makes creating good code feasible in a
reasonable time. And once you've made systems as reliable and error-free as some
of the systems I've seen, going back to the "old way of doing things", the
"more or less works", is just plain hard. Even if it would take less time.
Now a really good C, C++ or Java programmer would be able to assemble a number
of standard Objects/Classes/Modules, ones that have been patiently crafted to
be paranoid about all inputs, to return sensible error values, with macros to
always examine even the simplest call for correctness. It's just that in my
experience, there are far fewer really good C++ etc programmers than Ada ones,
because in Ada it's so much easier. It also takes far longer to write such robust
code in C++ etc, because of the lack of expressive power.
As for Hard Data collection, I'm currently doing my Master's. But I'm doing
some initial work for a Piled Higher and Deeper, involving experimental design
to quantify the effect that language choice has on Software Development, all other things
being equal. Getting a sufficiently large sample size while shackling all other
variables is tricky, but doable, I hope. My main problem is not to bias the
experiment, as everything I've learnt in 20 years of Software Engineering leads
me to one conclusion before I even start. Thus retaining Objectivity is more
than usually important, as I have very strong views a priori.
You'll find some hard data mirrored at my web site:
From Prof McCormick, State University of New York
Some rather mouldy (old) figures from Reifer Consultants
and from Rational a direct comparison
(hmmm... must mirror this one too...)
For an argument (OK, a rant) on why Software is generally so bad, see http://www2.dynamite.com.au/aebrain/program.htm