Simplify! – Part 8
In the previous post of this series I started to discuss drivers on the project level that lead to accidental complexity – in that case, unnecessary requirements that do not add any value to the solution.
In this post, we will continue on the project level, but with a bit different focus. The topics of this post will be:
- “Agile” and “DevOps” cargo cults, not improving anything
- The “high utilization means more productivity” fallacy
- Enterprise Architecture Management as an impediment
- Compliance/governance implemented wrong adding accidental complexity
These topics do not directly affect the requirements, the platform or solution design, but they tend to create constraints that negatively affect projects in a way that leads to more accidental complexity.
If we go back to the little framework from the 4th post of this series, we had 4 core sources of accidental complexity in IT:
- Unnecessary requirements not adding any value to the solution
- Overly complex platforms regarding the given problem
- Lack of adequate platform support regarding the given problem
- Overly complex software regarding the given problem
The topics we will discuss in this post can influence all 4 types of accidental complexity negatively.
Old wine in new bottles
One should think that Agile and DevOps would help to reduce the problem of excessive accidental complexity. Originally, Agile is about tackling complexity and uncertainty, and DevOps is about becoming faster without compromising quality, including time for continuous improvement. These sound like perfect preconditions for tackling accidental complexity: We accept that requirements are uncertain and move fast. We have the means to learn quickly and try alternatives. And we improve the way we work all the time. Great, isn’t it?
Unfortunately, things evolved differently. While Agile and DevOps in their original meaning actually would be great measures to tackle accidental complexity of all types, we rarely find those original ideas implemented in practice. Instead, we usually find some “Agile” or “DevOps” cargo cults that just renamed traditional industrial practices with a hipper terminology without changing anything. That is why I tend to put those approaches in quotes.
“Agile” cargo cults
As written before, a true agile mindset, accepting internal and external uncertainty as a given, has the potential to address many of the challenges listed above. But in practice the agile ideas usually met a strictly industrial mindset that builds on the assumption of certainty and hence focuses solely on increasing the efficiency of the development process.
Fueled by agile sales promises of extreme productivity boosts 1, most agile transformations resulted in the same old industrial practices, focused on (minor) efficiency gains, just disguised behind some new terminology.
E.g., in most “Agile” projects I have seen, there was always a predefined backlog that needed to be implemented completely before anything got released to the customers. The requirements (disguised as “user stories”) were never validated and discarded if they did not resonate with the customers as expected. The only way to go was to add new requirements that were also not validated – of course without changing the release date because “we are agile”.
Real agility would mean that you know you need feedback from the users to understand if your assumptions regarding their needs were correct. You would implement a very small story that would help you to understand if the general idea behind it makes sense. You would roll that story out to your users as fast as possible (e.g., with the end of the current sprint) and then collect feedback from the users by measuring usage and asking for explicit feedback.
Based on that feedback you would decide if it makes sense to implement the remaining stories adding to the same idea, if you need to adapt the idea or if you rather discard the idea and remove all remaining stories belonging to it from the backlog and continue with a different idea. This would resemble a hypothesis driven development approach where stories come and go as you learn along the way.
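The decision loop just described – ship the smallest testable slice, measure, then continue, adapt, or discard – can be sketched in a few lines. This is purely illustrative; all names and thresholds below are hypothetical and not taken from any real process.

```python
# Illustrative sketch of a hypothesis-driven backlog decision.
# All function names and thresholds are hypothetical.

def decide_next_step(usage_rate, feedback_score,
                     min_usage=0.2, min_score=3.5):
    """Decide what to do with the remaining stories of an idea
    after shipping its smallest testable slice.

    usage_rate     -- fraction of users who used the new feature (0..1)
    feedback_score -- average explicit feedback on a 1..5 scale
    """
    if usage_rate >= min_usage and feedback_score >= min_score:
        return "implement remaining stories"   # hypothesis confirmed
    if usage_rate >= min_usage:
        return "adapt the idea"                # used, but not liked yet
    return "discard remaining stories"         # hypothesis refuted

print(decide_next_step(0.35, 4.2))  # idea resonates with users
print(decide_next_step(0.35, 2.0))  # used, but rated poorly
print(decide_next_step(0.05, 4.8))  # barely used at all
```

The point is not the particular thresholds but that stories enter and leave the backlog based on measured evidence, not on a plan fixed upfront.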
But if you implement the whole backlog without learning along the way, why bother with agility? A traditional iterative-incremental approach like RUP would be a much better fit – and more honest.
Another popular cargo cult indicator is “velocity”: If the most important metric in “Agile” projects is team “velocity”, you are not agile. Measuring team velocity, trying to make it comparable between teams and then focusing on increasing it is nothing but industrial, cost-efficiency thinking through the back door: “If I can increase the velocity of the teams, I get more features for the money”.
Velocity can be useful for a team as a warning indicator. If velocity starts to decrease, chances are that some undetected problem is lurking somewhere and the decreasing velocity gives the team a hint that it should investigate. It also helps to make reasonable guesses regarding the contents of a sprint. But that’s basically it. All other widespread uses of excessive controlling of velocity are only clear signs of industrial thinking disguised behind more fashionable terminology.
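The legitimate "warning indicator" use of velocity can be sketched as a simple trailing comparison. This is a hedged illustration only – the window size and tolerance are hypothetical, and a flag here means "investigate", never "control the team".

```python
# Minimal sketch of velocity as a warning indicator only.
# Window size and tolerance are hypothetical choices.

def velocity_warning(velocities, window=3, tolerance=0.85):
    """Return True if the latest sprint velocity dropped noticeably
    below the trailing average -- a hint to investigate, nothing more."""
    if len(velocities) <= window:
        return False  # not enough history to say anything
    baseline = sum(velocities[-window - 1:-1]) / window
    return velocities[-1] < tolerance * baseline

print(velocity_warning([21, 23, 22, 24, 22]))  # False: normal variation
print(velocity_warning([21, 23, 22, 24, 15]))  # True: something may be lurking
```

Anything beyond this kind of team-internal signal – cross-team comparisons, velocity targets, velocity-based controlling – is the industrial thinking described above.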
Only business features have value
Another misunderstanding in that context leading to a lot of accidental complexity is the fallacy that only business features create value. Very often people from the business departments are chosen to become product owners. While this is not a bad decision per se, these people usually do not understand IT, its complexities and intricacies.
As a consequence, only business features have value for them. As the goal of an agile project is “to create as much value as possible” 2 and the product owners only understand the value of business features, they only prioritize business features in the backlog. Non-functional features are systematically ignored and declared a problem of the team (I will discuss in a future post why this belief is fundamentally wrong).
Often, the “Agile” teams tend to make it worse by still falling for widespread misdirected “agile architecture” discussions of the past years. The short version is: “Scrum does not define an architect role. Thus, an architect is not needed.” 3. This in combination with some “emergent architecture” myths 4 then results in systematically neglecting architectural work in the delivery teams.
Admittedly, in the past years the understanding has improved that architectural work is also required in “Agile” projects. Still, from all I see there is a long way to go to find the right balance. Most of the time, business features are still the only measure of progress. Time to step back for a moment and consider the complexity and robustness of the solution is still systematically cut, leading to a lot of accidental complexity.
Overall, in most places, the “agile revolution” fizzled out without any actual improvements: the requirements became vaguer, controlling became tighter, still solely focused on efficiency, time for design and implementation dropped and accidental complexity increased.
“DevOps” cargo cults
Basically the same is true for DevOps. DevOps is a movement that helped several modern companies to go fast without compromising quality. In its core, DevOps is a continuous improvement process that builds on many feedback loops, aiming at accelerating the whole IT value chain. This means of course changing (not only) the IT organization and its processes a lot.
Unfortunately, this would contradict the self-image of an industrially thinking enterprise. Thus, such enterprises focused on merely copying the tool chains (to improve efficiency), plus some industrial-mindset interpretations of the accompanying process and organization changes – of course without establishing the feedback loops or the improvement process which are the core of DevOps.
So, usually instead of trying to reduce cycle times (or lead times, if you prefer Kanban terminology), the sole focus was on improving cost-efficiency. It became totally obvious in those places where decision makers confused “DevOps” with “NoOps”, thinking that developers take over operations on the fly and all ops people can be fired – cost-efficiency thinking in its purest form.
Overall, it can be said that most “Agile” and “DevOps” transformations created conditions that resulted in more accidental complexity, not less – the opposite of their original ideas. I will discuss this topic in more detail in some future posts. Please bear with me until then.
The “high utilization means more productivity” fallacy
I would like to continue with the widespread industrial production belief that higher utilization means higher productivity. I will discuss this fallacy in more detail in a future post. Here I will just briefly explain it. 5
The general rule of queuing theory says: The higher you utilize your resources 6, the longer the lead times become. I.e., the wait time until an item of work gets processed rises steeply – roughly proportional to 1/(1 − utilization) – the closer the utilization of your resources gets to 100%.
But that is the simple variant where arrival times and processing durations of work are deterministic. You can find such conditions typically in industrial production processes on shop floors. 7
In software development and operations, the arrival times and processing durations are usually stochastic, i.e., work arrives at unpredictable times and you cannot precisely predict upfront how long it will take to process the work. This makes queue lengths – and thus the time until a piece of work gets processed – much harder to predict. 8
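The effect is easy to reproduce with a small simulation. The sketch below models a single server with random (exponential) inter-arrival and service times – the classic M/M/1 queue – and shows how the average wait time blows up as utilization approaches 100%. The job count and seed are arbitrary choices for illustration.

```python
import random

def simulate_mm1(utilization, n_jobs=50_000, seed=42):
    """Simulate a single-server queue with stochastic arrivals and
    service times (M/M/1). The service rate is fixed at 1, so the
    arrival rate equals the target utilization.
    Returns the average wait time before processing starts."""
    rng = random.Random(seed)
    arrival = finish = total_wait = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(utilization)  # inter-arrival time
        start = max(arrival, finish)             # wait if server is busy
        total_wait += start - arrival
        finish = start + rng.expovariate(1.0)    # service time
    return total_wait / n_jobs

for u in (0.5, 0.8, 0.95):
    # Theory for M/M/1 (service rate 1): average wait = u / (1 - u)
    print(f"utilization {u:.0%}: simulated wait {simulate_mm1(u):5.1f}, "
          f"theory {u / (1 - u):5.1f}")
```

Going from 50% to 80% utilization roughly quadruples the average wait; going to 95% multiplies it by about twenty – which is exactly why "keep everyone busy" destroys lead times.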
In practice, requesters typically expect their requests to be processed in a timely manner – and quite often in a short period of time because the requests have a high priority. Of course, this is in conflict with the long, unpredictable lead times that result from maximizing the utilization of the people involved.
Additionally, the times humans need for context switches are typically not taken into account. Especially software engineers often need to mentally dive into complex, interacting, invisible software systems before they can start with their actual task to make sure that their work will have the desired effect and does not create any unintended damage. This required deep focus results in long context switch times – which are usually ignored.
This means the actual utilization of the people involved is a lot higher than the calculated utilization because the required context switch times are neglected in the calculation.
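A back-of-the-envelope calculation makes the gap visible. All numbers below are hypothetical and only illustrate how ignored context switches inflate the real utilization of an engineer.

```python
# Back-of-the-envelope sketch: how ignored context switches inflate
# the real utilization of an engineer. All numbers are hypothetical.

def effective_utilization(planned_hours, switches_per_day,
                          switch_cost_hours, workday_hours=8.0):
    """Planned utilization counts only task work; the effective value
    adds the mental ramp-up time each context switch costs."""
    overhead = switches_per_day * switch_cost_hours
    return (planned_hours + overhead) / workday_hours

# 6h of planned work looks like comfortable 75% utilization ...
print(effective_utilization(6, 0, 0))    # 0.75
# ... but 4 context switches at 30 minutes each push it to 100%.
print(effective_utilization(6, 4, 0.5))  # 1.0
```

Combined with the queuing effects above, a plan that looks like moderate utilization on paper can easily push people into the region where lead times explode.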
Overall, maximizing utilization results in long, unpredictable lead times for work to get processed while the requesters of work expect short, predictable lead times. Additionally, context switch times are usually ignored resulting in even longer lead times.
The resulting effect is similar to the one that you get if you continuously increase the delivery pressure on software development teams in “Agile” contexts. Thus, we find similar consequences:
- As pressure is always high, time for considering appropriate platforms and solutions is missing, leading to suboptimal, overly complex solutions.
- As required context switch times are not accounted for, the number of suboptimal and buggy solutions rises, resulting in more accidental complexity and increased pressure.
It can be said that the utilization maximization fallacy, besides several other negative effects, creates a reinforcing loop leading to more and more accidental complexity.
Project constraints adding accidental complexity
As I already described in the post discussing the different types of accidental complexity, constraints can increase accidental complexity on the problem side as well as on the solution side. Typical originators of such kinds of constraints are Enterprise Architecture Management (EAM) rules, compliance rules, other governance rules, and more.
E.g., EAM might limit the allowed programming languages. You have a machine learning problem to solve, but EAM prohibits the use of Python where solving the task would be straightforward. Or EAM forces you to use a certain product for “standardization reasons”, even if the product completely misses your needs 9. As a result, you have to develop “around” the product, leading to accidental solution complexity.
Or, e.g., storage is charged at an extremely high cost because the mandatory backup and auditing solution approved by compliance is available only for the mainframe computer. While many projects have lower compliance needs and less expensive solutions could easily be found, the compliance department refuses to define different compliance levels or to approve other solutions for lack of time. As a result, you need to find ways to minimize storage requirements to the extreme, leading to lots of accidental solution complexity without adding any real value to the solution.
Or, e.g., governance and controlling do not allow changing any requirements after project approval without issuing a change request, which is coupled to a highly cumbersome and time-consuming change approval process. The reason given for this rule is to ensure “better transparency of project costs”. I think it is obvious how the resulting efforts to evade the annoying change approval process lead to less cost transparency and lots of accidental complexity of all types.
I could add a lot more examples and I am sure that you could also list a lot of constraints from your projects leading to unnecessary added complexity. The key point is that they add accidental complexity, either on the problem or the solution side.
Improving the situation
All drivers described in this post tend to increase accidental complexity in IT, but in an indirect way. Hence, it is a lot harder to find good measures to mitigate them than, e.g., with requirements as I discussed in the previous post.
All the drivers listed in this post have negative effects on projects that go far beyond more accidental complexity. Therefore it is important to address them in any case. Still, usually it will be a long and winding road.
The first step is to create awareness in your environment that the drivers exist. A good tool for this is asking “Why”: “Why do we need to do it this way?”, “Why does this procedure exist?”, and so on. Not everybody will love you for asking those questions and often you will get dismissive answers. Still, from my experience it is the only way to initiate change in those places.
And even more important: If you ask “Why” long enough and do not let others discourage you with dismissive answers, you might figure out the original cause of the useless constraint. Often you will learn that the original cause still makes a lot of sense, but the implementation, i.e., the resulting constraint, does not.
Knowing the original cause gives you a powerful basis to suggest an alternative procedure that serves the same goal without constraining you in a negative way. This way, you have the chance to turn a dull fight of power and stamina into a constructive discussion.
Thus, whenever you tackle one of the discussed complexity drivers, try to excavate the original motivation behind the driver and then try to offer an alternative in a constructive way. This alone gives you the chance to move a lot of things. Nevertheless, you will still need a lot of patience and persistence – as with all change processes.
“Agile” and “DevOps” transformations that went wrong as well as the utilization maximization fallacy are more complex topics that have a lot more implications than just increasing accidental complexity. That is why they need a dedicated consideration that also takes the other aspects into account. Therefore, I will discuss them – including additional means to deal with them – in dedicated later posts (I will try to link those posts here then).
Regarding accidental complexity, as a start you should try to make sure to do enough architectural work and resist the rat race. It will pay back multiple times – even in terms of saving a lot of time in the longer run. 10
In this post, we have looked at several topics that tend to drive accidental complexity by creating adverse constraints.
From what I know, easy mitigation patterns regarding those constraints do not exist. The only way (I know) to address them in a constructive way is to make the constraints and their effects transparent, try to figure out the original motivation behind the constraints and offer an alternative approach that satisfies the original motivation in a better way.
While this recommendation might not sound too promising, I still think it is important to understand how those topics and the resulting constraints drive accidental complexity. Otherwise, there is no way to change the situation. After all, reducing complexity – like all optimization problems – works best if we tackle it at multiple levels, even if tackling it at some levels is hard.
In the next post, we will move on to the tool and technology level where we will find a lot of potential to fight excessive complexity. So, stay tuned …
E.g., Jeff Sutherland, one of the creators of Scrum, promised that Scrum will make people “do twice the work in half the time”, i.e., quadruple their productivity. For a person with an industrial mindset, this clearly states that with “Agile” you get four times the bang for the buck, i.e., that you can produce a lot cheaper. It is irrelevant what Jeff Sutherland or other people who spread similar messages actually tried to express with such statements. The fact is that this type of message had a big share in all the “Agile” transformations that went wrong, because especially at the decision maker levels it created the expectation of producing a lot cheaper – while delivering faster as a convenient side effect – by just sending the employees to a simple 2-day training. ↩︎
While this statement is not wrong per se, it is too simplistic. “Value” is a very complicated term which makes it hard to define what creates value and what does not. Then the imponderabilities of the markets add another level of uncertainty to the value assumptions. Additionally, value needs to be considered over the lifecycle of the system affected and not only at the moment of the feature release. And so on. Overall, I think it can safely be said that the discussion of project goals has a lot of nuances that need to be considered. ↩︎
My normal response to this nonsense sentence is: “Scrum does not define a software developer role. Thus, software developers are not needed.” Most people (unfortunately not all) then understand the flaw in their reasoning. Scrum is a generic method, not specific to software development. It only defines a “team” for solution delivery and demands that the team incorporate all skills needed to make the required decisions. Scrum does not say anything about which skills are needed. Due to its domain-agnostic approach it simply cannot. Thus, concluding that architects are not needed because Scrum does not mention them is plain nonsense. ↩︎
I will discuss in a later post why the “emergent architecture” myth is highly dangerous and tends to create lots of accidental complexity. I will try to remember to add the link here. Sorry for the ordering problem. Most of the topics I discuss have so many relations to each other that no matter in which order I discuss them, I will run into an ordering problem, i.e., I will need to make references to future posts. Please bear with me … ↩︎
If you do not want to wait for my future post to get a more detailed explanation of the utilization fallacy, I can recommend reading “The Principles of Product Development Flow” by Donald G. Reinertsen. ↩︎
To be clear: People are not “resources”, for many reasons. Here, the term is only used to describe the general concept of queuing theory. As soon as it becomes clear that this general concept boils down to humans in a given context, the wording should be changed – not only for ethical reasons. ↩︎
I already discussed in an earlier post that you cannot compare software development to industrial production. Yet, many people – especially those with a deeply industrial mindset – still believe in that fallacy. ↩︎
Many people think that reducing the variance of arrival times and processing durations of work is the key to improving the situation. Unfortunately, this is another fallacy rooted in industrial thinking. If you do not want to wait for my future post discussing this topic, I can recommend reading “The Principles of Product Development Flow” where Donald G. Reinertsen debunks that fallacy. ↩︎
Standardization is a two-edged sword. While it has a value not to have many different solutions for the same problem in place, overdoing it can easily destroy the value. There is a sweet spot to be met between too much and too little standardization. I will discuss this topic in more detail in a future post. ↩︎
I do not talk about reintroducing the BDUF (Big design upfront). I talk about pondering high-level structure and NFRs just enough upfront to have a reasonable starting point and then learn along the way. I will discuss these ideas in more detail in some future posts. ↩︎