Dismiss maturity models
I wanted to give you a short break from my “Simplify!” blog series. Well, admittedly, I also needed a little break … ;)
Therefore, I will discuss a different topic in this post: the difference between maturity and capability models, and why you should prefer the latter.¹
Whenever improvement comes up, the next maturity model is just around the corner. E.g., the Wikipedia page for maturity models lists about 50 different maturity models, around 15 for IT alone.
Most of them – at least the ones I know – follow the same kind of reasoning:
- We want to improve something bigger, a “whole”. This is the original goal.
- Where are the levers, i.e., the parts (called “capabilities”) that, if we improve them, will improve the whole? This is usually an intellectual derivation.
- Let us define maturity levels for the different capabilities we derived and determine our current levels.
- From now on let us work on reaching higher maturity levels for the capabilities as this will also improve the overarching whole, i.e., support reaching the original goal.
Typically, the maturity models also define an overall maturity level that is a function of the maturity levels of the capabilities.
If companies decide that they need to improve something – e.g., their general software delivery ability, their operations processes or the RESTfulness of their web services, just to name a few arbitrary “somethings” – they first look for a corresponding maturity model. If one exists, they use it to drive their improvement initiative.
They first determine the current maturity levels of the different capabilities the maturity model distinguishes. Then they calculate their overall maturity level, if the maturity model provides such a calculation schema. After that they try to improve the maturity levels of the different capabilities which also improves their overall maturity level.
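To make the aggregation idea concrete, here is a minimal sketch. It assumes a model where the overall level is simply the minimum of the capability levels; actual aggregation schemes vary between models, and the capability names and numbers below are invented for illustration:

```python
# Hypothetical capability maturity levels (scale 1-5); names and
# numbers are invented for illustration only.
capability_levels = {
    "requirements engineering": 3,
    "architecture": 4,
    "testing": 2,
    "deployment": 3,
}

# Assumed aggregation scheme: the overall maturity level is the
# minimum of the capability levels (real models define their own rules).
overall_level = min(capability_levels.values())
print(overall_level)  # 2
```

Note how the weakest capability dominates the overall level under this scheme, which is exactly what drives the “raise every part’s level” behavior discussed below.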
Shortcomings of maturity models
While this approach sounds reasonable at first sight, it has a huge shortcoming: The feedback loop to the original goal is missing.
After breaking down the whole into smaller capabilities that need to be improved, nobody ever checks again whether and how the local improvement activities contribute to the original goal.
You might say that it is obvious that improving the parts will automatically improve the whole. Unfortunately, most of the time this is not true.
Russell Ackoff explained why in a short presentation. His basic reasoning was that usually we try to improve some kind of system. The properties and behavior of a system (the subject of improvement) are not only defined by its parts and their properties and behavior, but to a large extent by the interaction patterns between the parts. If you only try to improve the “maturity” of the parts (the “capabilities”), Ackoff concludes:
> If we have a system of improvement that’s directed at improving the parts taken separately you can be absolutely sure that the performance of the whole will not be improved.
You can find a more in-depth discussion of Ackoff’s presentation in my post “Systems thinking and quality”. He discusses a lot more in just 12 minutes. I can only recommend watching the whole presentation as it is a real eye-opener.
William Edwards Deming phrased it even more harshly in a slightly different, yet related context:
> People with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.
Goal sheets and targets are a lot like capabilities and their maturity levels. They, too, were defined to achieve an overarching goal, usually getting better at something.
Unfortunately, as soon as the individual targets are defined and there is an incentive for meeting them, they become an end in themselves. People no longer check whether their actions actually support the overarching goal. They only focus on meeting their local targets.
The same thing tends to happen with maturity models. As soon as the maturity levels for the different capabilities are set up, people only try to improve these levels, without checking whether their activities actually improve the whole. To make things worse, the people responsible often have targets in their goal sheets that read: “reach the next maturity level within the evaluation period”. You can certainly guess where that leads.
Reaching a tipping point
Even if improving the maturity of a capability initially improves the overarching whole – especially if the initial maturity level was low – beyond a certain point additional improvements of the parts will not improve the whole anymore. Often the whole even gets worse if a part is improved beyond that point, i.e., the further improvement of the parts (the capabilities) becomes counter-productive.
Most of us have seen that in software development processes. Simply put, the goal of a software development process is the ability to reliably and repeatedly deliver software at a defined quality level. Typically, software development processes were split up into parts, i.e., capabilities: business analysis, requirements engineering, architecture, design, implementation, testing, assembly, deployment – or the like, depending on the actual process variant.
Then people tried to improve the different capabilities: The requirement engineers tried to improve requirements engineering. The architects tried to improve architectural work. The developers tried to improve implementation. Quality assurance tried to improve testing. And so on. Everyone tried to improve their part in isolation.
Eventually we ended up with huge, bloated process monsters with tons of work that needed to be done. Yet, the ability to reliably and repeatedly deliver software at a defined quality level usually did not improve.
If you did not have any software development process at all and started to pick up practices from the different parts, i.e., started to capture requirements, did some architecture and design work, introduced some coding practices, started testing, and so on, your ability to deliver software improved a lot.
But as you picked up more and more of those practices and tried to improve each part further and further, your overall ability improved less and less until it eventually even started to get worse.
Again, the problem is that software development forms a system, and much of its overall quality is determined by how its parts interact. If you only try to improve the parts in isolation, each further local improvement becomes more likely to impair the interactions between the parts. Eventually, the local improvement reaches a point where it works against the overall goal.
E.g., if you improve requirements capturing more and more, coming up with more and more elaborate documentation schemes, it will also become more and more laborious for all the other parties to understand and work with them.
The initial improvement steps actually improved the overall quality as the initial capturing and structuring of requirements helped a lot to improve the probability that the software will work as expected.
But if you drive your local improvements further and further, eventually you will reach a point where you as a requirements engineer are totally proud of the accuracy and perfect consistency of the requirements due to the ingenious documentation schema you used – perfect local optimization.
Unfortunately, at the same time it became so cumbersome for everyone else who needs to work with the requirements you captured to understand and process them that in the end your local optimization degraded the overall quality of the software delivery process.
Overall, it can be said that all local improvement activities eventually reach a tipping point where they become useless or even counter-productive if you do not keep an eye on the whole.
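The shape of this tipping point can be sketched with a toy model. The linear gain and the quadratic friction term below are pure assumptions, chosen only to make the curve visible, not measured from any real process:

```python
# Toy model: refining one part yields a linear local gain, but the
# burden it puts on everyone else grows quadratically. All numbers
# are invented for illustration.
def overall_benefit(local_effort: int) -> int:
    local_gain = 10 * local_effort   # the part itself keeps improving
    friction = local_effort ** 2     # cost imposed on the interactions
    return local_gain - friction

# The overall benefit peaks at a certain effort level; beyond it,
# further local improvement makes the whole worse.
peak = max(range(11), key=overall_benefit)
print(peak)  # 5
```

Whatever the real shape of the curve, the qualitative point stands: past the peak, more local effort reduces the benefit for the whole.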
Capability models to the rescue
This is the big difference with capability models: they always keep track of the original goal. The reasoning for capability models goes like this:
1. We want to improve something bigger, the “whole”. This is the original goal.
2. How can we measure if the whole improved? This leads to the core quality metrics.
3. We measure our current status using the core metrics.
4. What are potential levers, i.e., the parts (called “capabilities”) that, if we improve them, will likely improve the whole?
5. Let us look at the levers, identify an improvement activity that sounds promising and try it.
6. Then measure if the core metrics improved. If yes, keep the improvement, otherwise discard it.
7. Go back to 5.
This approach explicitly takes into account that we might have missed a relevant interaction pattern between different parts when we came up with our local improvement idea. Therefore, we always validate whether our local improvement idea actually improves the whole. Basically, steps 5 to 7 form a PDCA cycle as it is meant to be.
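The loop described above can be sketched in code. This is a minimal sketch under assumed interfaces: `measure`, `apply_improvement` and `revert_improvement` are hypothetical callbacks standing in for real measurement and change processes, not any real API:

```python
from typing import Callable

def improvement_loop(measure: Callable[[], float],
                     apply_improvement: Callable[[str], None],
                     revert_improvement: Callable[[str], None],
                     candidate_levers: list[str]) -> list[str]:
    """Keep a local improvement only if the core metric of the whole
    improves; otherwise discard it. All interfaces are illustrative."""
    kept = []
    baseline = measure()              # steps 2-3: measure the whole
    for idea in candidate_levers:     # steps 4-5: try a promising lever
        apply_improvement(idea)
        new_score = measure()         # step 6: did the whole improve?
        if new_score > baseline:
            kept.append(idea)         # yes: keep it, new baseline
            baseline = new_score
        else:
            revert_improvement(idea)  # no: discard the change
    return kept                       # step 7: continue with the next ideas
```

The crucial detail is that `measure` observes the whole, never the part that was changed, which is exactly what the maturity-model approach omits.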
The huge difference between this approach and the common maturity model approach is that here we always keep the original goal in sight. We always check if our local activities improve the whole. We explicitly take into account that we might have missed a dependency between the parts of the system when we decide about a local improvement activity.
Maturity models are still the predominant way to approach improvement initiatives. The problem of their usual implementation is that they focus on improving the parts without validating if the whole also becomes better according to the original improvement goal.
This happens because people tend to neglect that the properties and behavior of a system are not only defined by the properties and behavior of its parts, but to a large extent also by the interactions between the parts.
Capability models on the other hand always keep track of the whole. While they also apply local improvements, they do not check whether the affected part improved, but whether the whole improved. If it did not, the improvement activity is discarded.
While maturity models can result in good quality improvements in the beginning, the improvement of the whole tends to diminish as the parts reach higher maturity levels – up to a point where the overall quality starts to decrease with ongoing local improvements.
Capability models do not have this problem because they always focus on the improvement of the whole, not the parts. Therefore I prefer capability models over maturity models even if they seem to provide less guidance than maturity models at first sight.
I hope I was able to get the point across and would be glad if this post contained some valuable ideas for you.
¹ As far as I know, a strict distinction between the two terms does not exist. E.g., maturity models usually try to improve capabilities. To complete the confusion, there is even a Capability Maturity Model. My goal is not to push towards an official distinction. I discuss maturity models as they are usually applied and compare this approach with the concept of capability models as Nicole Forsgren, Jez Humble and Gene Kim describe it in their book Accelerate.