TEAM-ADA Archives

Team Ada: Ada Programming Language Advocacy


"Team Ada: Ada Advocacy Issues (83 & 95)" <[log in to unmask]>
Sun, 13 Jun 1999 08:15:58 -0700
"Robert C. Leif, Ph.D." <[log in to unmask]>
From: Bob Leif
To: Kevin Radke et al.

I have taken the liberty of cross-posting this to Team-Ada, since I believe
this discussion is no longer specific to ObjectAda.

Kevin Radke asked, "The described metrics do sound interesting.  Is anything
similar being done for other languages?"

I have not heard of this being done for other languages. Marketing managers
of other languages would probably be ill-advised to allow the release of
sophisticated metrics that quantify productivity. As most members of this
group believe, if good measurements of the effort to develop and maintain
software were actually kept, Ada 95 would rout the competition.

This type of record keeping is plain old good manufacturing practice.
However, it is in the best interests of companies that sell a suite of
software development products, such as Aonix, to create and sell this type
of tool for at least Ada and, if possible, other languages. The most
ridiculous articles now appear in CrossTalk and other journals: "My company
has achieved Carnegie Mellon (CMU) Level 3 and I am closer to Nirvana." I do
not wish to cast aspersions on any methodology; however, where is the data?
If companies, such as Aonix, can market tools to objectively measure
closeness to Nirvana, then there would be objective feedback on the software
development process.
This is why I wish to measure language efficiency. I think software
engineering has to develop the equivalent of thermodynamics before we
attempt quantum mechanics.

You are correct; ASIS should make this possible. As I stated at SIGAda '98,
the capacity to measure the amount of software reasonably accurately and
objectively provides the very intriguing possibility of equitably dividing
the royalties for large projects among multiple developers.
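The "efficiency" ratio described in the quoted message below can be sketched
numerically. This is only an illustration of the arithmetic, not part of any
real counter or ASIS tool; the function name and the sample line counts are
invented for the example.

```python
def efficiency(lines_written: int, expanded_lines: int) -> float:
    """Efficiency as defined in the thread: the number of lines that
    would have been produced if generics and class-wide operations did
    not exist, divided by the number of lines actually written."""
    if lines_written <= 0:
        raise ValueError("no source lines to measure")
    return expanded_lines / lines_written

# Hypothetical project: a 120-line generic package instantiated five
# times (one line per instantiation), plus 400 lines of ordinary code.
written = 400 + 120 + 5       # ordinary code + generic body + instantiations
expanded = 400 + 5 * 120      # as if each instantiation were hand-expanded
print(round(efficiency(written, expanded), 2))
```

An efficiency near 1.0 would indicate little reuse; the larger the ratio,
the more code the generics and class-wide operations saved.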

> -----Original Message-----
> From: [log in to unmask]
> [mailto:[log in to unmask]]On Behalf Of Kevin M Radke
> Sent: Friday, June 11, 1999 8:41 AM
> To: [log in to unmask]
> Subject: RE: Intel-OA: Adacounter source code counter
> This message was from intel-objectada please reply to the list
> rather than the sender
> > One very significant future enhancement would be to
> > provide separately the same statistics for instantiated
> > generics, class type calls, renames, and subtype declarations
> > which do not include ranges. These subtypes function as
> > renames.  The reason for this request is that no one has
> > obtained solid data on reuse or what I like to call
> > efficiency. Efficiency is the number of lines that would
> > have been produced if generics and class-wide operations
> > did not exist divided by the number of lines written.
> > Although Software Engineering is a fine-sounding term, it
> > presently needs much work to evolve into a standard
> > engineering discipline. The first step in developing sound
> > engineering practices is to obtain accurate data.
> This sounds like a perfect job for an ASIS tool.  The
> benefit of using ASIS is that it then becomes a tool
> that can be used with any compiler supporting ASIS.
> The described metrics do sound interesting.  Is anything
> similar being done for other languages?
> Thanks!
> Kevin