TEAM-ADA Archives

Team Ada: Ada Programming Language Advocacy

TEAM-ADA@LISTSERV.ACM.ORG

Subject:
From: "Kester, Rush W." <[log in to unmask]>
Reply To: Kester, Rush W.
Date: Wed, 15 Mar 2000 12:45:18 -0500
Content-Type: text/plain
I am sympathetic to Bob Leif's desire for "hard data" on the relative
utility of programming languages.  However, having worked in the Software
Engineering Laboratory (SEL) at NASA Goddard, where we attempted to measure
exactly this, I can assure you that it is not easy.  The SEL has been
collecting human effort data, program size data, error rates, and many other
metrics for over two decades.

Collecting a minimum set of effort, size, and error data is definitely
cost-effective, but don't get carried away collecting too much data or
performing too much historical analysis.
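To make the "minimum set" concrete, here is a small sketch of the kind of per-project record such a lab might keep, and the two ratios most often derived from it. The record layout, project name, and figures are illustrative assumptions, not actual SEL data or practice:

```python
# Illustrative sketch only: a minimal per-project metrics record of the
# effort/size/error kind described above. Names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class ProjectMetrics:
    name: str
    effort_hours: float   # total staff hours charged to the project
    size_ksloc: float     # delivered size, in thousands of source lines
    defects_found: int    # errors logged during development and test

    def productivity(self) -> float:
        """Source lines delivered per staff hour."""
        return (self.size_ksloc * 1000.0) / self.effort_hours

    def defect_density(self) -> float:
        """Defects per KSLOC -- a common basis for cross-project comparison."""
        return self.defects_found / self.size_ksloc

# Hypothetical project, purely for illustration.
p = ProjectMetrics("flight_dynamics", effort_hours=12000.0,
                   size_ksloc=48.0, defects_found=310)
print(f"{p.productivity():.1f} SLOC/hour, {p.defect_density():.2f} defects/KSLOC")
```

Even this minimal record is enough to compare projects over time; the hard part, as noted below, is attributing any difference to the language rather than to the team or the environment.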

It is impossible to conduct a "controlled experiment" in which all factors
other than programming language are held constant.  Researchers are thus
left conducting semi-controlled experiments, in which factors such as team
skill and experience, training, motivation, software requirements, and tools
are similar but not equivalent.  Probably the hardest factors to control are
team motivation and researcher bias.  Two teams may start out similarly
motivated, but over the course of the experiment some spark causes one team
to take off, or some "glitch" demotivates the other.  Researcher bias makes
it very easy to reach the wrong conclusions because of the subjective
importance given to uncontrolled variables.

What you're left with is a situation in which only many experiments over
time can produce statistically valid results.  By then, the topic of the
study is often no longer relevant or of interest.

The other difficulty is that even where this type of data is available, it
is often considered proprietary and won't be published.  Even when it is
published, there remains the question of whether conclusions reached in one
environment can be applied in another.

Rush Kester
Software Systems Engineer
AdaSoft at Johns Hopkins Applied Physics Lab.
email:  [log in to unmask]
phone: (240) 228-3030 (live M-F 9:30am-4:30pm, voicemail anytime)
fax:   (240) 228-6779
http://hometown.aol.com/rwkester/myhomepage/index.html



-----Original Message-----
From: Robert C. Leif, Ph.D. [mailto:[log in to unmask]]
Sent: Friday, March 10, 2000 2:26 PM
To: [log in to unmask]
Subject: Our Lack of Hard Data is a Disgrace. Was RE: Help -- ammunition
wanted!


From: Bob Leif, Ph.D.
To: Team-Ada

Although I totally agree with Richard Riehle on software education, the
present state of knowledge concerning the relative utility of programming
languages is a disgrace. A field that employs sophisticated tools such as
compilers, yet either has no means to measure their relative utility or has
not made the effort to do so, should not be referred to as science or
engineering. "Traditional folk art" or "trend-driven" would be more
appropriate.

In the USA, we are seeing newspaper articles describing the need to grant
work visas because we do not have enough endogenous programmers. I suspect
that we do not have enough competent technology managers. Research
concerning software education and productivity should be a priority for the
US National Science Foundation.

I might note that an accurate study will require that the same faculty
members teach both languages; otherwise, the study will also be measuring
teaching ability. The bias of faculty members toward a given language can be
compensated for to some extent by finding a teacher who knows both Java and
Ada and prefers Java; we can supply the Ada enthusiasts. There will be a
secondary effect, however, which cannot be compensated for: there is
probably a correlation between efficacy as a teacher and taste in computer
languages.

I do not wish to disparage anyone's previous work. However, all of the data
that I have heard of would be described as anecdotal, or perhaps as the
equivalent of a Phase 1 clinical trial. A new drug has to go through three
phases of clinical trials; after Phase 3, the US FDA is sent a report
consisting of at least a truckload of paper.
