Interesting commentary on validation vis-à-vis risks. At present, validation
is not so much about reducing all the risks of a compiler as about
ensuring that the compiler conforms to a reasonable set of minimal expectations.
Example: I was interested in the potential for an Eiffel compiler for a
project at one point in time. I rather like some of the features of Eiffel
and it seemed an appropriate option. As always, "The devil is in the details."
It seems that the Eiffel compiler in question was a C-path compiler. That is,
it generates intermediate C code that is then compiled to the target. When
one thinks of C as what Stowe Boyd used to call a "universal assembler" this
makes some sense. But there is a problem. What about the little problem in
C of "integer overflow?" No need to go into the details of that here,
but it turns out this was a potential flaw in the C-path solution.
About the time I was interested in this problem, I ran into Tucker Taft and
asked him about how they dealt with this problem with their C-path Ada
compiler at Averstar. Tucker acknowledged this as a problem and indicated
that they had to go to special lengths in their design to accommodate the
issue. Why? The integer overflow problem would not pass the conformance
(née validation) testing. It may be true that conformance testing does not
intercept every possible problem, but it does ensure that obvious problems are caught.
I would rather have a validated compiler for the work my customers do
than one that could vary all over the place on otherwise simple things.
Frankly, some of the Ada compiler efforts I used to see were so cavalier
that it was only validation that made those compilers usable at all.
AdaWorks Software Engineering
6 Sepulveda Circle
Salinas, CA 93906
On Fri, 2 Jun 2000, Roger Racine wrote:
> At 05:20 PM 6/1/2000 , AdaWorks wrote:
> >On Thu, 1 Jun 2000, Brashear, Phil wrote:
> > > Unfortunately, it no longer seems to be the case that customers (DoD or
> > > otherwise) insist on validation of their Ada compilers either, so maybe
> > they
> > > are applying "the same standard" to C++.
> >That is really sad. I wonder if the DoD has any idea of the risks
> >it is taking with its software decisions. Is everyone so overwhelmed
> >by economic considerations that the concerns of national defense
> >have been preempted by shortcuts? Are we seeing a phenomenon that
> >corresponds to fast-food and younger whiskey in defense policy? Have
> >our decision-makers had their minds so polluted by TV sitcoms that
> >they cannot see beyond the next thirty minute commercial?
> >Richard Riehle
> Risks? What risks? Validation is not a complete (nor even a very good)
> test of a compiler's correctness. A compiler is much too complex to find
> all errors with any set of tests that might finish in a reasonable
> time. So a "validated" compiler comes with errors. I would much rather
> use an unvalidated compiler that is used by a million other people than a
> validated compiler used by a few thousand. I really worry about the Ada 83
> projects out there for that reason. There are some compilers for which
> there are only a few users (VAX to 80386 bare cross compiler comes to mind).
> Validation does test that each language feature is implemented according to
> the Standard. This matters for portability, not safety.
> Please note that I am not saying that validation is bad. Just that Program
> Managers are not terribly interested in portability. Portable software
> will help the -next- project, not the current one. So one can argue that
> Program Managers are being short-sighted (which, unfortunately, is their
> job), and that the costs of projects are higher than might otherwise be the
> case if portability was a major concern. But I do not think validated
> compilers are in any way safer than unvalidated compilers.
> Roger Racine