# Requirements Decay

May 17, 2012

The line of thinking in this post was prompted by noticing the disparity between the set of requirements given at the start of a project and the functionality actually implemented at the end.

## Rule of Thirds

What I noticed across several projects was the ‘rule of thirds’. That is, of the final system, one third was in the original plan, one third is there but very different from how it was envisaged, and the final third was not in the original plan at all: it is made up of new requirements that were not considered at the start.

Upon investigation it was not that the requirements were “wrong” but that the world had moved on since they were written. Ah, the benefit of hindsight.

It occurred to me that there is a similarity between system requirements and radioactive decay.

## Requirements Decay

The idea is that as soon as a requirement is defined it suffers a risk that it will ‘decay’. This decay is not to be confused with discovering a bug but instead is where the requirement itself becomes wrong.

As a system is developed it gets bigger in terms of the number of requirements it contains. Now, the bigger a system is the greater the rate of requirements decay. Eventually you reach a point where you are adding requirements just as fast as they are decaying; this can be categorised as ‘maintenance mode’ i.e. the system stays at roughly the same size (in terms of functionality) but requires work to keep it there.

## Iterative Theory

We can gain some insight by modelling this situation with Iterative Theory. What we do is assume the system is being built in Iterations, then vary the Iteration length and see what happens.

## Some Modelling

You can skip the maths and jump ahead if you like; I have included it here because it is fun.

If a team has an *absolute velocity* of *v* then that means they are able to implement *v* individual requirements *per unit time*. This is different from normal velocity which is a measure of the number of requirements per Iteration. As usual for the sake of simplicity I am assuming that every requirement is roughly the same size.

Let’s also give all of the requirements a mean lifetime of *τ* (pronounced ‘tau’). This again is for simplicity.

We could talk instead about the half life *t _{1/2}* of the requirements: the time it takes for half of them to decay. It is certainly easier to think about than the mean lifetime. Fortunately the two are related:

*t _{1/2} = τ.ln 2 ≈ 0.69τ*

We can now introduce the iteration length *Δt* and start building our software iteration by iteration.

We are able to develop *v.Δt* requirements in the first Iteration – but a percentage of them will decay while we are doing this. By the end of the first Iteration we expect to have this many requirements left:

*v.Δt.e^{-Δt/τ}*

Because we are developing Iteratively we can start the second Iteration with a fresh set of requirements. We are not stuck with the requirements as they were originally envisaged at the beginning of the project. So the second set of requirements only begin to decay from the point at which we begin the *second* Iteration.

By the end of the second Iteration we have added another set of requirements but some more of the first lot have decayed while we were doing that. So now only

*v.Δt.e^{-2Δt/τ}*

of the first set of requirements are left. In fact at the end of each Iteration we lose a constant factor

*e^{-Δt/τ}*

of however many requirements there are.
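This per-Iteration bookkeeping is easy to simulate directly. A minimal Python sketch (the function name and the example numbers are mine, purely illustrative):

```python
import math

def system_size(v, dt, tau, iterations):
    """Simulate the model step by step: each Iteration adds v*dt
    requirements, then everything (including the new batch, which
    decays while being built) shrinks by the factor exp(-dt/tau)."""
    survival = math.exp(-dt / tau)  # constant factor kept each Iteration
    size = 0.0
    for _ in range(iterations):
        size = (size + v * dt) * survival
    return size

# Two-week Iterations, one-year half life, run for five years.
tau = 1.0 / math.log(2)  # mean lifetime ≈ 1.44 years
print(round(system_size(v=26.0, dt=2 / 52, tau=tau, iterations=130), 2))
```

Running the loop for long enough shows the size levelling off, which is exactly the equilibrium discussed below.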

If we extend this process on and on then eventually we reach equilibrium; the number of fresh requirements we are adding each Iteration exactly balances the number decaying from all the previous Iterations put together.

We can work out how many requirements *R* we have left at any given time *T*. Summing the geometric series over Iterations, it turns out to be:

*R(T) = v.Δt.(1 - e^{-T/τ}) / (e^{Δt/τ} - 1)*

*T* is the total time that has passed since we started the project. Note: This equation smooths out the jumps that occur at the end of each Iteration so in effect it overestimates how many requirements we actually have. This effect can be seen in the graph below. But that does not matter for the thrust of the argument – it just makes the maths a bit simpler.

This graph shows system growth for two projects, one with two-week Iterations and the other with Iterations of a year. The half life *t _{1/2}* of the requirements in both cases is one year. The absolute velocity *v* is the same in both cases.

We can see that in the case of two-week Iterations the total size of the system increases quite quickly at the start and then gets slower and slower until it stops. The one-year Iteration project clearly shows the jumps as each Iteration is released and the decay that occurs in between. In the long run, as *T* gets very large, this tends to:

*R _{max} = v.Δt / (e^{Δt/τ} - 1)*

This does not depend on *T* and represents the upper limit on the size of the system, where effectively it is in pure maintenance mode. The only way to get a bigger system is either to increase your velocity *v* or to reduce your Iteration length *Δt* – assuming you can do nothing about the requirement half life. On the other hand, if your velocity decreases or your Iterations get longer then the size of the system will quickly drop. This effect can be seen in the one-year graph.
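Plugging numbers into that limit makes the difference concrete. A short sketch, assuming (as in the graph) a one-year half life and an illustrative velocity of 26 requirements per year:

```python
import math

def equilibrium_size(v, dt, tau):
    """Long-run system size: R = v*dt / (exp(dt/tau) - 1)."""
    return v * dt / (math.exp(dt / tau) - 1)

tau = 1.0 / math.log(2)  # mean lifetime for a one-year half life
v = 26.0                 # illustrative absolute velocity (requirements/year)

two_week = equilibrium_size(v, 2 / 52, tau)
one_year = equilibrium_size(v, 1.0, tau)
print(f"two-week Iterations: {two_week:.1f} requirements")
print(f"one-year Iterations: {one_year:.1f} requirements")
```

With these inputs the two-week project supports a noticeably larger system than the one-year project at the same velocity.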

In the limit of *Δt* being significantly smaller than *τ* we find:

*R(T) ≈ v.τ.(1 - e^{-T/τ})*

So with short Iterations – in other words an incremental approach – as *T* increases we approach the maximum possible system size of:

*R _{max} = v.τ*

In words this makes sense; the maximum system size is the product of the velocity and the mean lifetime of the requirements. However, the longer your Iterations are, the more they limit your total system size. In fact:

*R _{max} ≈ v.τ - v.Δt/2*

So it would seem that with longer Iterations your total system size is limited to roughly half an Iteration’s worth of functionality less than it could be. This does not matter if your Iterations are short but becomes significant as they get longer.
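How good is that approximation? A quick numerical check, using the same illustrative numbers as before (one-year half life, 26 requirements per year):

```python
import math

tau = 1.0 / math.log(2)  # one-year half life
v = 26.0                 # illustrative velocity (requirements/year)

for weeks in (1, 2, 4, 13):
    dt = weeks / 52
    exact = v * dt / (math.exp(dt / tau) - 1)  # full long-run formula
    approx = v * tau - v * dt / 2              # v.tau - v.dt/2
    print(f"{weeks:>2}-week Iterations: exact {exact:.2f}, approx {approx:.2f}")
```

For Iterations of a few weeks the two agree to well within a single requirement; the gap only opens up as *Δt* approaches *τ*.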

## Enough Maths Already

If you skipped here from above what you missed was:

If

- Requirements suffer from ‘Requirements Decay’

Then

- Shorter Iterations let you build a system to a given size faster
- Shorter Iterations let you support a larger system for the same cost

## Half Life

The question arises: what *is* the half life of requirements? Really this can only be determined empirically within a given system’s context. I have evidence that in my field (derivatives trading systems) the half life is around a year, giving a mean lifetime per requirement of around 18 months for a new system.

Thus a system that takes one year from requirements definition to deployment (developed as a single Iteration with no tinkering with the requirements in the middle) will be only 50% complete by the original planned delivery date. With Iterations of two weeks’ duration the system would be roughly 70% complete after the first year.
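Those completeness figures can be reproduced from the model. A sketch assuming, as above, a one-year half life; the fractions are of all the requirements developed during the year:

```python
import math

tau = 1.0 / math.log(2)  # mean lifetime (years) for a one-year half life

# One big Iteration: everything is specified up front and decays for a year.
single = math.exp(-1.0 / tau)

# Two-week Iterations: the batch built in Iteration k has decayed for
# k Iterations by year end.
dt = 2 / 52
n = round(1.0 / dt)  # 26 Iterations in the year
incremental = sum(dt * math.exp(-k * dt / tau) for k in range(1, n + 1))

print(f"one-year Iteration:  {single:.0%} complete")
print(f"two-week Iterations: {incremental:.0%} complete")
```

The single one-year Iteration lands at exactly one half life’s worth of decay, i.e. 50%, while the incremental project keeps most of its most recent work fresh.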

This discrepancy continues in the long run. For a given velocity, using two-week Iterations would let you support a system roughly 40% larger than one with an Iteration length of a year!

## Cost

The cost of a system is proportional to the absolute velocity *v* and the total time *T* that has passed. So, going back to our example above, a system built with two-week Iterations would need around 70% of the developer cost of one built with one-year Iterations – not only to build up to a given size but to maintain it at that size going forwards.

Looking at it the other way around, the efficiency of the team goes up as the Iteration length shortens.

## Lean

A few thoughts from Lean development: the longer your Iterations are, the more waste there is in each Iteration. Lean thinking certainly implies making Iterations as short as possible.

## Real Options

A few thoughts from Real Options. Shortening the Iteration length means postponing your decision about what requirements to develop. This means you can make better choices later. In other words there is value in postponing the development of a requirement until the ‘last responsible moment’. By accepting long Iterations you are forced to make early decisions about what to build long in advance of when it will be delivered – only to find it is no longer needed by the time you finish.