# Cyclomatic Complexity

Cyclomatic complexity was first described in 1976 by Thomas J. McCabe, making it one of the oldest software metrics. But it is not just some old theory: it is the foundation of probably the most exciting insights into code, and many younger metrics extend, build upon or otherwise 'orbit' around this rather simple idea.

Cyclomatic complexity is a count of the control-flow variants or execution paths of a piece of code, most commonly a function or method.

In an ideal world, a method would have only one execution path (and some do), but in many cases decisions and loops are necessary.

### Complexity?

Consider the following piece of code and the things that make it complex:

```php
public function doSomething($a_some, $a_thing)
{
    if ($a_thing == 'type_1') {
        for ($i = 0; $i <= strlen($a_thing); $i++) {
            echo 'Ding.';
        }
    } elseif ($a_thing == 'type_2') {
        switch ($a_some) {
            case 'Dong':
                echo 'Dong.';
                break;
            case 'Dung':
                echo 'Dong.';
                break;
            default:
                echo 'Dang.';
        }
    }
}
```

First, we have to sort out that complexity is different from complicated. The complexity we measure does not take your experience in reading and understanding code into account.

Complexity is this:

The construct containing the code itself "scores" 1, and every language construct that adds a new execution path, be it a decision (if, else, elseif, switch) or a loop (for, while, do), scores another point.

The method above has - for the few lines and statements it consists of - quite a high complexity score.
There's:

• a function
• an if decision
• a for loop
• a switch decision
• two cases in the switch (not counting 'default')
All these things add to the complexity.
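Under the simplified counting used here (tools differ in the details, e.g. on whether an elseif or the default branch gets its own point), the score adds up like this:

```
  1   the function itself
+ 1   the if decision
+ 1   the for loop
+ 1   the switch decision
+ 2   the two cases
= 6
```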

If you want to find out more about the details of the metric (I am not entirely 'precise' here), check the following links:
Manuel Pichler's article on cyclomatic complexity and Wikipedia's article on cyclomatic complexity (de)

In this guide, we will instead focus on some more practical approaches to the topic.

### So it's complex.

A complex piece of software like ILIAS of course has some complex methods and business rules in it, so one may fall for the idea that the code has to be complex as well. And even the most complex chunks of code can work well.

There is another metric, built on top of cyclomatic complexity, that makes the point here quite well:

### Change Risk Anti-Patterns - CRAP

Alberto Savoia came up with a metric called CRAP in 2007. See his blog for the story.

Basically, what the metric does is the following:

CRAP1(m) = comp(m)^2 * (1 – cov(m)/100)^3 + comp(m)

Where CRAP1(m) is the CRAP1 score for a method m, comp(m) is the cyclomatic complexity of m, and cov(m) is the basis path code coverage from automated tests for m.

If CRAP1(m) > 30, we consider the method to be CRAPpy.
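To make the formula concrete, here is a minimal sketch of it in PHP (the function name `crap1` is made up for this example):

```php
<?php

// Minimal sketch of the CRAP1 formula from above.
// $complexity: cyclomatic complexity comp(m)
// $coverage:   basis path coverage cov(m) in percent (0-100)
function crap1(int $complexity, float $coverage): float
{
    return $complexity ** 2 * (1 - $coverage / 100) ** 3 + $complexity;
}

echo crap1(6, 0.0) . "\n";   // 42   -> above 30, so the method is CRAPpy
echo crap1(6, 100.0) . "\n"; // 6    -> fully covered, only the bare complexity remains
echo crap1(30, 80.0) . "\n"; // 37.2 -> even 80% coverage cannot save comp(m) = 30
```

Note how coverage acts as a damper: full coverage reduces the score to the bare complexity, while a very complex method stays CRAPpy even with decent coverage.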

In layman's terms: if a complex piece of code is not under test, it is at some point considered crappy, because it becomes hard to understand (high WTF/minute, a not-so-serious metric), hard to change and hard to test. Such methods are long and show the "New York skyline indentation syndrome" (tilt your head to the right and you will see it). Developers - for a reason - hesitate to change these methods.

In the ILIAS build, we can see these metrics at work.
For the cyclomatic complexity, there's a graph in the "Plot" section. The CRAP metric is a bit "hidden": you can find it in the Clover HTML report when you click the tiny "dashboard" link on top of it.

11 Apr 2014 MB: There's so much "Test & Assessment" on the screenshot because we do have tests for it. All the files that (still) have zero coverage are not in the report.

### Should I care?

Yes, you should.
If the code is crappy, it becomes two things you don't want to have:

1. A risk for the project.
If the code fails, a number of users may become unhappy. If it fails too often, they will walk away from the product.
2. A cost factor in development.
The more time you need to change it, the less you earn or the more your customers have to pay.
So, here are two things to consider when you see a piece of code being crappy:

Refactoring
You can extract complexities into their own methods. This reduces the complexity of the original method and enhances its readability, and with it its maintainability. It will not reduce the overall complexity of the surrounding code; it just spreads it out nicely.
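Applied to the doSomething() method from the example above, the extraction could look like this (a sketch; the class and helper method names are made up):

```php
<?php

// Sketch: doSomething() from the example above, with each branch
// extracted into its own method. Each method now has a low complexity
// of its own; the total complexity of the class stays the same,
// it is just spread out.
class Example
{
    public function doSomething($a_some, $a_thing)
    {
        if ($a_thing == 'type_1') {
            $this->ding($a_thing);
        } elseif ($a_thing == 'type_2') {
            $this->dongDungDang($a_some);
        }
    }

    protected function ding($a_thing)
    {
        for ($i = 0; $i <= strlen($a_thing); $i++) {
            echo 'Ding.';
        }
    }

    protected function dongDungDang($a_some)
    {
        switch ($a_some) {
            case 'Dong':
                echo 'Dong.';
                break;
            case 'Dung':
                echo 'Dong.';
                break;
            default:
                echo 'Dang.';
        }
    }
}
```

doSomething() now reads as a short dispatch, and each helper can be understood (and tested) on its own.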

Unit Testing
If a piece of code cannot be refactored (which may itself be a result of the high complexity), or you are finished extracting, you can unit test it. Covering a method with tests makes it "safe" and uncrappy.

Since both of these approaches mean "work", you can easily calculate what is known as "technical debt".

There's a video in which Ward Cunningham - who coined the term - explains what this means. (Video & Transcript)
Wikipedia also offers a good read on the matter.
To keep things simple, here's a practical way to think about the problem:

If you want to test a method, you need as many tests as there are execution paths through it.
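For the doSomething() method from the example at the beginning, that means one check per path. A plain-PHP sketch (the method is wrapped in a made-up class so the snippet runs on its own; in PHPUnit, each assertion would be its own test method):

```php
<?php

// The method under test, wrapped in a hypothetical host class.
class SomeClass
{
    public function doSomething($a_some, $a_thing)
    {
        if ($a_thing == 'type_1') {
            for ($i = 0; $i <= strlen($a_thing); $i++) {
                echo 'Ding.';
            }
        } elseif ($a_thing == 'type_2') {
            switch ($a_some) {
                case 'Dong':
                    echo 'Dong.';
                    break;
                case 'Dung':
                    echo 'Dong.';
                    break;
                default:
                    echo 'Dang.';
            }
        }
    }
}

// Helper: capture what one call prints.
function output_of(string $a_some, string $a_thing): string
{
    ob_start();
    (new SomeClass())->doSomething($a_some, $a_thing);
    return ob_get_clean();
}

// One assertion per execution path.
assert(output_of('Dong', 'type_2') === 'Dong.'); // case 'Dong'
assert(output_of('Dung', 'type_2') === 'Dong.'); // case 'Dung'
assert(output_of('???',  'type_2') === 'Dang.'); // default branch
assert(output_of('Dong', 'other')  === '');      // neither if nor elseif taken
assert(output_of('Dong', 'type_1') === str_repeat('Ding.', strlen('type_1') + 1)); // for loop

echo "all paths covered\n";
```

Five paths, five checks: the cyclomatic complexity directly dictates the size of the test suite.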

Since a method should do one thing, only one thing, and do it well, you can extract code into new methods until a low cyclomatic complexity of 2-4 is reached, which is realistic at all times. (See also Robert Martin explaining this in this video (or here). Great talk; skip to minute 3:00.)

What's the cost for one unit test? What's the cost for extracting a method? Not a great deal.

However, with the CRAP metric in place, we see how this debt escalates, because the larger and messier a method becomes, the harder it is to write a test or to identify candidates for extraction.

Think of the CRAP metric as something you could put a currency symbol on: it is basically the price of getting things right.

### Get the most out of this metric

To get the maximum benefit out of this metric, do the following:

• See to it that your component's classes are all tested (at least a bit), so they are included in the reports.
• In intervals - weekly or so - check your component's status in Jenkins' Clover report by browsing to the module in the Clover HTML report and clicking "dashboard" when you have reached your component. (This report is on the community Jenkins, free to access for everyone.)
• Act accordingly: do not get over-indebted. If you see a part of your component drifting off, refactor and test before the metric reaches a critical mass.

This is part of the report for Services/Math/classes/.