[log in to unmask] said:

> If an 'experiment' trying to compare a C and an Ada team resulted
> in the C team creating a new CASM (C Ada Subset Macro) language
> that was better than either C or Ada, then we'd surely like to
> hear about it. If it was better than C but worse than Ada (or
> the opposite) then it should perhaps replace whichever it beat,
> and if all three were in a dead heat then it's evidence of 'no
> difference' - at least for that application domain and set of
> programmers.

The result was definitely Ada better than "Ada in C," which in turn was much better than "ordinary" C.

The one that killed me though, and I still remember it well, was when the support people had to give their evaluation. (At this point six subsystems had been out in the field in several thousand machines: three subsystems in Ada, two in C, one in Pascal. Everything else, except for a few FORTRAN subroutines, was in assembler.) Pages and pages of charts on more bugs in C than Pascal, but Pascal bugs and assembler bugs took longer to find, etc., etc.

"What about Ada maintenance costs?" (Asked by the VP who headed the division.)

"We don't have any data."

"Why not?"

"No STARS (bug reports) have been closed."

"How many STARS are open on the subsystems written in Ada?"

"None."

Sound familiar? (And it wasn't stupidity. This guy knew what Ada would do to the size of the maintenance staff, but there were too many people around who knew the answers for him to lie.)

> Clearly such an experiment, especially with a 'Western Electric
> effect', and across multiple apps and programmer sets, is not
> something to be done definitively by a few folks, even at a large
> company.

I assume this is the same as the Hawthorne effect? (Based on a study which showed that almost any change improved productivity, provided it showed management interest in productivity.)

> Even non-experimental collection and analysis of data on
> past projects is a big job. But if we're looking at a likely
> DOD-wide, or nationwide, or world-wide, benefit on the order of
> $10**8 or 9, then it's well worth a *substantial* investment.

We try. MITRE, NASA, and the University of Maryland have a lab which has been doing this sort of data collection for years. Unfortunately, the data is on language effects only, not programming methods, and see the story above. The more you prove that (well-developed) Ada code doesn't have bugs--unless you change the requirements--the more trouble you get Ada into.

Another similar story. There was a project a few years back where we (MITRE) wanted to ensure that the code was VERY trustworthy. (If you have seen the movie War Games, the HAC/RMPE portion of REACT is WOPR.) It was written in Ada by a contractor selected after a SEE--Software Engineering Exercise--which was designed to show management and staff alike how important doing it right was. The bidders were required to write code to match a package specification we provided, and to do error correction on EAMs (Emergency Action Messages). One of the "test" messages read "Let's do lunch." if you followed the rules. If you didn't follow the instructions carefully, you were likely to end up with an extra 'a' in the last word.

As far as I know, the Air Force still hasn't decided when to transition to "organic" maintenance on the system. They are supposed to do that to reduce costs, but first they have to have some costs to reduce.

                                        Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...