In essence, Christoph said that he was currently maintaining some legacy
code, and was interested in knowing what complexity values were too high.
(At least, that's what I read into his message.)
The way I would approach this problem is to apply whatever complexity tool
is available to all the compilation units. Find the mean and standard
deviation of the complexity measurements of all the compilation units.
Then, pay special attention to the compilation units that are more than 2
standard deviations away from the mean (in both directions).
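A minimal sketch of that procedure (the unit names and complexity scores below are invented for illustration; a real tool would produce the scores):

```python
import statistics

# Hypothetical McCabe complexity scores, one per compilation unit.
scores = {
    "queue_mgr.adb": 4, "parser.adb": 60, "io_util.adb": 6,
    "scheduler.adb": 12, "config.adb": 1, "matrix_ops.adb": 9,
    "report_gen.adb": 8, "state_machine.adb": 27, "logger.adb": 3,
}

mean = statistics.mean(scores.values())
sd = statistics.stdev(scores.values())

# Flag units more than 2 standard deviations from the mean,
# in either direction.
unusual = {name: c for name, c in scores.items()
           if abs(c - mean) > 2 * sd}

for name, c in sorted(unusual.items()):
    print(f"{name}: complexity {c} (mean {mean:.1f}, sd {sd:.1f})")
```

With this made-up data only parser.adb is flagged; on a real project the flagged set is the list of units deserving the closer look described below.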
What is it about the units with high complexity ratings that makes them so
complex? Are these the compilation units that have had the most errors in
them? What could be done to make them simpler and more reliable? Should
they be revised to make them simpler? Is the tool producing incorrect
complexity measures for certain programming structures?
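For reference, McCabe's measure (cyclomatic complexity) is essentially the number of decision points plus one. A toy approximation over Ada source, with an assumed keyword list (real tools parse the syntax tree and will not miscount keywords in comments or strings the way this sketch can):

```python
import re

# Crude approximation of McCabe cyclomatic complexity for an Ada unit:
# V(G) = number of decision points + 1. The lookbehind skips "end if",
# which closes a construct rather than opening a decision.
DECISION_KEYWORDS = re.compile(
    r"(?<!end )\b(if|elsif|while|for|when|and then|or else)\b")

def approx_mccabe(ada_source: str) -> int:
    decisions = len(DECISION_KEYWORDS.findall(ada_source))
    return decisions + 1

sample = """
procedure Demo (X : Integer) is
begin
   if X > 0 then
      for I in 1 .. X loop
         null;
      end loop;
   elsif X < 0 then
      null;
   end if;
end Demo;
"""
print(approx_mccabe(sample))  # counts if, for, elsif -> 4
```

Comparing a tool's figure against a hand count like this on a few suspicious units is one way to answer the question about incorrect measures for certain structures.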
What is it about the units with low complexity ratings that makes them
score so low? Are they really simple, or does the tool just give
artificially low readings in certain cases? If they really are simple,
what programming technique expressed the algorithm so simply? Is there
something we can learn from these units?
Dr. Stachour (in another response to the original message) made the
important point that units that exceed a threshold "require an
explanation." Just because a compilation unit's measured value falls
outside the normal range doesn't necessarily mean that it is bad and must
be rejected. It simply means that the unit is unusual. If one can
explain why the unit is unusual, and that the approach taken is
appropriate, then the unit should pass the walkthrough. Often, however,
when trying to explain the out-of-range measurement, it will become
apparent that there really was a simpler way to implement the algorithm,
and that the unit should be simplified.
When used in this way, metrics can be very useful. The danger is that
management will adopt fixed rules (no unit can be more than 10 pages long
or have a complexity greater than 15) that can cause perfectly good units
to be rejected in some cases. Re-writing unusual algorithms to fit usual
limits may result in a unit that is actually harder to understand and
maintain.
Metrics help to point out units that are unusual. Unusual units should be
inspected more closely. Close inspection will often lead to improvement.
This is an appropriate use of metrics. Don't use metrics as go/no-go
criteria in walk-throughs.
| Know Ada |
| [Ada's Portrait] |
| Will Travel |
| wire [log in to unmask] |
> ** Reply to note from Angel Fernández Guerra <[log in to unmask]> Tue, 18 Nov 1997 21:39:06 +0000
> > Hi ada activists,
> > We are currently performing a code walkthrough (Ada source code of
> > course). In the checklist we have an entry for maintainability along
> > with others like readability, conformance with coding standard, data
> > flow errors, testability, etc.
> > We have had some problems deciding whether a compilation unit passes or
> > fails on the maintainability topic. One option could be to evaluate the
> > complexity of the code using McCabe, Halstead, and/or Harrison figures.
> > The problem is then to establish a valid range for
> > these figures.
> > If someone can enlighten me or provide any useful information, thanks a
> > lot.
> > Angel Fernández Guerra.
> Please put your answers in the list, I am looking for the same information.
> I am currently maintaining "heritage code" in Ada and driven by that
> experience, I would like to find some reasonable guidelines concerning
> maintainability, readability and coding standards. Just to avoid the same
> errors in new projects (try it at least...).
> Could anybody name some reference documents or URLs for this? The only
> document I have up to now is the Ada83 coding standards. I want to avoid
> reinventing the wheel, which might lead to a very personal kind of "wheel".
> Probably the subject of Ada (or programming in general) style is one where
> almost everybody has very personal and strong opinions, so the definition of
> guidelines should be based on a broad range of experience.
> Any hint is welcome,
> Christoph Seelhorst, Houston, Texas