Pascal responded to the compilation-speed question with a list of times:
> conception time
> writing time
> compile time
> debugging time
> maintenance time
This is a good list. However, before adding up the weighted sums
(multiply each TIME by the number of times it will be done), one
additional time has to be added to the list: execution time.
Most programs are executed only a few times after they are
written: they are throw-away utilities, student programs,
prototypes, or the kind of program that answers a single query
and is then discarded. However, a minority of programs (still a
large number) are intended to be executed many times, and for
those programs the speed of execution is the critical factor.
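The weighted sum described above can be sketched in a few lines of
code. All of the time values and repetition counts below are made-up
illustrations, not measurements:

```python
# Weighted lifecycle cost: multiply each TIME by the number of
# times that activity will be done, then add everything up.
# All numbers here are hypothetical, for illustration only.

times_in_minutes = {
    "conception":  (120, 1),      # (time per occurrence, occurrences)
    "writing":     (600, 1),
    "compilation": (8,   200),    # recompiled on every edit cycle
    "debugging":   (30,  50),
    "maintenance": (240, 10),
    "execution":   (2,   10000),  # the often-forgotten term
}

total = sum(t * n for t, n in times_in_minutes.values())
print(total)  # → 26220
```

For a program run 10,000 times, the execution term dominates the sum,
which is the point being made: for that minority of programs, speed
of execution outweighs every other time on the list.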
Ada has improved in all of these areas since its inception,
including in compilation time. The chart below is for a computer
program of approximately 20,000 lines of code, counting SLOC as
unquoted, uncommented semicolons.
My notes show the following compilation times for MIMS with no optimization.

YEAR  TIME        CPU      MACHINE      (LOC/MIN)/MHz
1982  18 hours     25 MHz  mainframe         2.4
1985  18 hours      6 MHz  PC 286            4.6
1993   2 hours     50 MHz  PC 486D           3.3
1997  40 minutes   90 MHz  Pentium I         5.5
1998   8 minutes  266 MHz  Pentium II        9.4

With full optimization (for gnat: -O3 and -gnatn):

1998  17 minutes  266 MHz  Pentium II        4.4
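The (LOC/MIN)/MHz column can be recomputed from the raw numbers; a
small sketch (using the 20,000-line figure given above — the earliest
rows may have been measured on a smaller version of the program, so
only the later rows reproduce exactly):

```python
def adjusted_speed(loc, minutes, mhz):
    """Lines compiled per minute, normalized by CPU clock in MHz."""
    return loc / minutes / mhz

LOC = 20_000  # program size given above

# 1993, 486 at 50 MHz, 2 hours:
print(round(adjusted_speed(LOC, 120, 50), 1))   # → 3.3
# 1998, Pentium II at 266 MHz, 8 minutes unoptimized:
print(round(adjusted_speed(LOC, 8, 266), 1))    # → 9.4
# 1998, same machine, 17 minutes with -O3 and -gnatn:
print(round(adjusted_speed(LOC, 17, 266), 1))   # → 4.4
```

Normalizing by clock rate like this is what lets rows from an 18-hour
mainframe build and an 8-minute Pentium II build be compared at all.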
This indicates that compilation speed per MHz has improved, but
that more of the increase is due to rising clock rates than to
advances in compilation technology.
A second thing this chart shows is that the two biggest leaps
were Alsys's decision to sell a 4 MHz board with their compiler
for the 80286 (which doubled the adjusted compile speed),
and gnat's decision to permit compilation in any order (which
also doubled the adjusted compilation speed).
The third thing this shows (horribile dictu!) is that, after
adjusting for CPU speed, compiling on a 286 was only half as
fast as on the most modern technology.
It would have been hard to convince me of that back in 1985,
as a user of Borland's Turbo Pascal environment which compiled
an equivalent number of lines of Pascal code in about 15 minutes,
two orders of magnitude faster.
Since more Free software is developed in C++ or Java right now,
that is what Ada compilation speed should be compared to. Ada
seems to be a little faster than most C++ compilers, but quite a
bit slower than most Java compilers. However, C++ does not offer
the compile-time and run-time error detection capability that
Ada gives. And neither language has the concurrent programming
capability of the Ada language, so the Ada compiler is doing a
lot more work.
However, it is fast enough on today's Pentium workstations and
Unix personal computers that compilation speed is NO LONGER AN ISSUE.
On the Ada wish list for Ada 2005X are some very minor error
repairs (already discussed on this list), substantial speed
improvements in generics (keep it static, and pass non-generics
to generics), and a third category: find even more bugs at
compile time. This third category will cost MORE compilation
time: cross-check more things, generate EXTRA constraint-checking
code at parameter-passing junctions, catch uses of variables by
multiple simultaneous objects, and pattern-recognize some of the
common algorithmic design errors.
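As an illustration of what "extra constraint-checking code at
parameter-passing junctions" buys, here is a rough Python analogy to
an Ada range-constrained subtype; the names and the 0 .. 100 range
are invented for the example, and Ada compilers insert checks like
this automatically:

```python
# A rough analogy to an Ada constrained subtype such as
#   subtype Percent is Integer range 0 .. 100;
# In Ada the range check is generated by the compiler at the
# point where a parameter is passed; here it is written by hand.
# All names below are invented for illustration.

class ConstraintError(Exception):
    """Analogue of Ada's Constraint_Error exception."""

def check_percent(value):
    """Range check performed at a parameter-passing junction."""
    if not 0 <= value <= 100:
        raise ConstraintError(f"{value} not in 0 .. 100")
    return value

def set_volume(percent):
    percent = check_percent(percent)  # the inserted check
    return percent

set_volume(75)       # passes the check
try:
    set_volume(150)  # out of range: caught at the call boundary
except ConstraintError:
    pass
```

This is exactly the trade-off named above: the generated checks cost
compilation time (and some run time), but they catch a whole class of
bugs at the boundary where they occur instead of far downstream.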
We are at a point where it is okay to request features that will cost MORE
compilation time, like keeping static variables across generic instantiations.
The speed issues from now on are how long does it take to RUN, and
how long does it take to MAINTAIN it.