"Thomas A. Panfil" <[log in to unmask]> wrote on Wednesday, February 06, 2002
> Hi All,
> I'd like to be able to cite a good paper on why Buffer Overflow
> susceptibility is common in software written in some popular
> language(s), and rare or relatively easy to prevent when using
> other languages. Advice, anyone?
Some hopefully relevant points.
For a buffer overflow vulnerability to be actually exploitable, three
conditions must hold:
(a) the underlying operating system or execution environment fails to
provide or deploy protection against the execution of code that lies in a
read-write area of memory;
(b) the underlying operating system or networking software configuration
fails to isolate the execution environment of the (TCP) service application
to the maximum extent feasible;
(c) the service application has a twofold bug, whereby the client is able
to write binary data into a certain area of the service application's
memory, and then can somehow cause the service application to start
executing instructions somewhere within that area of memory.
 Regarding (a): the C or C++ language often prevents the use of such
protection, even when the architecture offers it at no execution cost.
While some forms of protection can be used, others cannot, because C
requires a 'flat' address space. Ada does not require a flat address space
(but typically suffers from the limitation of having to interface to C
software to be able to use operating-system-specific functions, of course).
 Regarding (b): this especially pertains to running the service application
as a normal user (rather than root), and ensuring that user has only the
minimal (file) permissions necessary to do its job. On most UNIX-based
operating systems, unless the system administrator is very sophisticated,
such elementary precautions are typically not taken, and the situation is
even more typical on Windows NT (with IIS).
 Regarding (c): this is a theoretically extremely unlikely bug, which
nevertheless demonstrably tends to crop up in (large) C and C++ software,
and to my knowledge never in software written in any other language.
The deadliest thing about a successful buffer overrun exploit (and similar
types of attack) is that the attacker gets to run his own code, often with
root privileges, and can thus truly "do anything he likes" from that point
on. Typically the host computer is totally compromised; if the attacker is
skilled and persistent, this in turn can lead to whole networks being
compromised.
I believe there is a negligible likelihood of a TCP or UDP service
application written in Ada, especially with most or all checks left on,
suffering from a buffer overrun vulnerability, or any vulnerability that
permits the client to cause it to execute arbitrary code. This is regardless
of the compiler, the host machine (architecture), and the host operating
system. (Note however that this is not to be confused with the case of an
Ada main program using substantial library code written in C.)
A good starting point for more information may be the US DoE Computer
Incident Advisory Capability (CIAC) at:
I believe that with a bit of digging you will find much fodder for your
research. Happy hunting!