TEAM-ADA Archives

Team Ada: Ada Programming Language Advocacy

TEAM-ADA@LISTSERV.ACM.ORG

Sender: "Team Ada: Ada Advocacy Issues (83 & 95)" <[log in to unmask]>
From: "Crispen, Bob" <[log in to unmask]>
Date: Wed, 19 Nov 1997 07:55:51 -0600
Content-Type: text/plain
MIME-Version: 1.0
Reply-To: "Crispen, Bob" <[log in to unmask]>
Parts/Attachments: text/plain (70 lines)
> Paul D. Stachour[SMTP:[log in to unmask]] sez:
>
>   An organization with which I have been affiliated uses McCabe.
> They set a goal of 10 or less for an Ada subprogram.
> If the module is over 15, it needs "an explanation".
> However, McCabe routinely gives high numbers for a set of
> simple if's inside of a case statement.  This is highly
> maintainable, but the McCabe numbers would tend to indicate
> otherwise.
>
The last time I looked at the McCabe metric, it
didn't distinguish between big and little IF blocks.
Perhaps it still doesn't.  It is, of course, everyday
experience that an IF with an ELSE two pages
away in the listing is a booger to maintain, while
an IF block you can see at a glance is little more
complex than an arithmetic statement -- and if you
do Euler angles, a heck of a lot less complex than
some arithmetic statements!
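To make that concrete, here is a crude sketch (in Python, since
the counting rule is language-agnostic) of how most tools
approximate the McCabe number: one plus the number of decision
points.  The token names and the mccabe() helper are illustrative
inventions, not any real tool's API -- the point is only that the
count is blind to how big or how far apart the branches are.

```python
# Hypothetical sketch of McCabe-style counting: complexity is
# 1 + the number of decision points found in the code.
# Token names below are illustrative, not any real tool's output.

DECISION_TOKENS = {"if", "elsif", "while", "for", "when"}

def mccabe(tokens):
    """Return 1 + the count of decision tokens in a token stream."""
    return 1 + sum(1 for t in tokens if t in DECISION_TOKENS)

# Ten simple, one-line WHEN arms in a case statement...
case_arms = ["when"] * 10
# ...score exactly the same as ten tangled IFs whose ELSEs sit
# two pages away in the listing:
tangled_ifs = ["if"] * 10
assert mccabe(case_arms) == mccabe(tangled_ifs) == 11
```

Both fragments get the same number, even though one is readable at
a glance and the other is a maintenance booger.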

So right from the start there are some construct
validity reasons to suspect that McCabe complexity
doesn't correlate +1.0 with actual complexity.

Well, of course it doesn't.  That would be like
blaming a thermometer for providing only a marginal
metric for humidity.

One of the things we tested in our flight simulator code
for the Ada Simulation Validation program back in 1987
was McCabe complexity (thanks to the folks at Harris
Computer who made this possible).

We looked at whether McCabe complexity was a
valid metric for Ada 83 source code, and we applied a
little eyeball face validation to it and discovered no
perceptible correlation between McCabe complexity
and perceived maintainability in the middle ranges.
Extremely high and extremely low values did correlate
moderately well with our judgments.
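The kind of face validation described above amounts to correlating
the tool's scores against human judgments.  A minimal sketch of
that check follows; the pearson() helper and every number in it
are made-up placeholders for illustration, not the 1987 program's
data or tooling.

```python
# Sketch of a face-validation check: correlate tool-reported
# McCabe scores with analysts' maintainability ratings.
# All sample numbers are hypothetical placeholders.

from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

mccabe_scores = [3, 7, 12, 18, 25, 40]   # hypothetical per-subprogram scores
ratings       = [9, 8, 5, 6, 4, 2]       # hypothetical 1-10 maintainability

r = pearson(mccabe_scores, ratings)
print(f"r = {r:.2f}")
```

A domain validation would run this over the real code base and the
real ratings, and look at the correlation separately in the middle
range of scores, where the email reports it broke down.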

Some wag once remarked that "IQ is whatever IQ
tests measure".  The McCabe complexity metric
is at bottom only a measure of McCabe complexity
(i.e., of the code features and their weights
specifically provided for in the metric).

If someone wants to use that metric to measure
some real-world quality of interest such as
maintainability, then the metric has to be
validated not only for the general case (which I
believe it has been), but for the domain to be
measured.

Our nonrigorous validation showed that McCabe
complexity correlated poorly with the perceived
maintainability of the kind of Ada 83 source code
we develop.

I'm amazed (and I'll bet McCabe would be amazed
as well) that a manager who wouldn't dream of
using a torque wrench that didn't have a current
metrology stamp on it will nonetheless use software
metrics straight out of the box and make decisions
based on their results, when no one has validated
the metric in his domain.

Bob Crispen
[log in to unmask]
