Wes > The compiler then leads the engineer by the nose from one problem to
Wes > the next. When the program finally compiles, the engineer is so
Wes > relieved that the immediate desire is to see if it will run.
Jim > This describes quite well one stage of the software folly that I've
> sometimes called "empirical programming". The next stage begins
> where this description ends: with seeing if it will run.
Jim, thanks for a good post about empirical programming. Here is some
more information about it.
Empirical programming is very common in the presence of compiler bugs,
unknown requirements, buggy operating systems, buggy windowing systems,
any kind of X Window System programming, buggy runtime systems, buggy
tasking (threading) systems, incomplete compiler implementations,
non-standard parts of a compiler system, hardware-software interfaces,
realtime programming, interrupt handlers, and so on. In other words,
empirical programming is the rule for everything except programming
at the Application layer.
Jim > [In empirical programming, when] the program fails to run,
> the "engineer" tinkers with it to try to get it to work,
There might be a better word for this than "engineer." When you
do experiments, you are performing the job of a Scientist, not
that of an Engineer.
Jim > Eventually, the program may seem to work as intended,
> but it is littered with the debris of failed experiments.
These failed experiments are proof positive that you are doing
computing SCIENCE, not engineering or mathematics.
Jim > ... its design (if there was one) muddled ...
Now muddling (that is, making the necessary compromises to get
something to work despite a lack of accurate specs and design) is
ENGINEERING. Computing, then, is a balance of Computing Science
(failed experiments) and Computing Engineering (muddling).
Jim > If significant testing is done, the failures revealed may lead to
> still more tinkering.
Now the science of adequate testing is well established, and carrying
out that science is again the responsibility of the Engineer.
So far, we have one part Computing Science and two parts Computing
Engineering.
Jim > This is clearly not the path to reliable, maintainable software,
What? The message jumps here from a series of statements to a
conclusion that does not follow from them. A paragraph must have
been missing!
Could you supply the missing information as to why you think this
is clearly not the path to reliable, maintainable software?
Which reliable, maintainable software did not have failed experiments
and compromises with inadequate specs and design, with faulty tools
leading the programmers by the nose?
Jim > but it is an approach that can be inadvertently encouraged by
> programming environments that promote quick recompilation.
There is nothing wrong with quick recompilation. No matter what kind
of process you follow, including a chaotic, unrepeatable one, fast
recompilation will not hurt you in carrying it out. Lengthy compiles
merely mean fewer experiments with the faulty tools, whether you are
using a good process or a bad one.
Jim > ... (snip) ...
> It is surely difficult for a novice in any language to get started
> without some of this kind of experimentation, but the goal should
> be to move on to a reasoned approach.
Yes, and in order to do this, we need to:
(a) do many experiments to identify the faults in the tools
(b) work on projects that permit us to be funded to do those experiments
(c) track those experiments so we can get those faults reported
(d) raise the consciousness of the Net to the existence of those faults
(e) use Ada for our own interfaces to increase its bindings and
publish the results of our experiments with those bindings
(f) recognize the balance between Experimental Science,
Engineering, and Mathematics.
(g) continue sharing on all the forums (team-ada, cla, etc.)