The invisibility dilemma of software

Why software is hard to understand

Uwe Friedrichsen

8 minute read

Shrubs packed in sacks to protect them in the winter (seen in Vienna, Austria)


In the previous post, I mentioned the second law of program evolution that Lehman described in his paper "Programs, life cycles, and laws of software evolution":

The law of increasing complexity

As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.

It was part of my explanation why I think so-called "legacy systems" are often in bad shape. Quite a few of you probably noticed the "… unless work is done to maintain or reduce it" part of the law and wondered whether I had missed an important aspect in my discussion of legacy systems.

Well, sure I did. I did not want to mix two different discussions in a single post (and I wanted to keep the post short). Therefore, let me fill in the missing discussion here.

Counteracting increasing complexity

So you might say: "Lehman mentioned the remedy of spending work to avoid increasing complexity. It is the fault of the companies that always press for more 'features' instead of spending the money to tackle complexity and the piling-up technical debt."

And yes, I think there is some truth to that claim. Most companies overshot their goal when they started to align their IT projects with business demands in order to become more responsive to required market changes. Today, we often see extreme versions of this overshooting in organizations that went "agile", putting excessive pressure solely on business features, thereby speeding up the deterioration of their software by orders of magnitude.

Still, I think it is too easy just to blame “the others” – here management and the business departments – and lean back: “I did not do anything wrong.”

An IMO much better approach is to ask: “What can I do to change the situation – by doing things differently myself and by helping others to make better decisions?”

There are many facets to this discussion, and we could easily fill a longer blog series with it (which maybe I will do sometime in the future). But for this post, I would like to focus on a single aspect – an aspect that I think is essential, and one that we as IT people fail to explain well enough to non-IT people.

Software is invisible

As IT people we often fail to support non-IT people in understanding the consequences of their demands towards IT. For non-IT people, it is extremely hard to understand those consequences due to a unique characteristic of software: software is invisible.

In his famous essay “No silver bullet” 1, Fred Brooks states that one of the drivers of complexity of software is its invisibility:

Invisibility. Software is invisible and unvisualizable. Geometric abstractions are powerful tools. The floor plan of a building helps both architect and client evaluate spaces, traffic flows, and views. Contradictions become obvious, omissions can be caught. Scale drawings of mechanical parts and stick-figure models of molecules, although abstractions, serve the same purpose. A geometric reality is captured in a geometric abstraction.

The reality of software is not inherently embedded in space. Hence it has no ready geometric representation in the way that land has maps, […]

In spite of progress in restricting and simplifying the structures of software, they remain inherently unvisualizable, thus depriving the mind of some of its most powerful conceptual tools. This lack not only impedes the process of design within one mind, it severely hinders communication among minds.

For people, it is a lot easier to ponder the reasonableness of their demands in a tangible world. Assume someone came up with the following demand: "There is demand in several locations for the (brick-and-mortar) store we just built. Thus, let us attach wings and engines to it and fly the store to all those locations."

Or imagine this requirement: “I am not sure if the skyscraper we just built is stable enough to deal with a storm. Thus, let us add some rods of the same height next to the corners of the building and connect them via duct tape. That should do the trick.”

Or: “Why do you want to design new chairs for that roller coaster? We already have a chair design from the living room armchairs. Just reuse it.”

Or: "You built that road over a river (i.e., a bridge). That's nice, but it affects the panorama. Therefore, place it under the river (i.e., a tunnel). You already have everything in place and it is just a minor change – actually just a single word ("over" vs. "under"). So, it can't be that hard. I expect you to be done by Friday."

Everybody immediately understands that all these demands are downright nonsense. Yet, in software we face them every single day. The reason is not that non-IT people are stupid. Most of them are not. The problem is the invisibility of software. Non-IT people do not have any useful reference system they could use to validate the reasonableness of their demands.

Help understanding the consequences

If we combine this problem with the extreme malleability of software (you can represent virtually everything in software) and the fact that – in contrast to most tangible products – software always needs to be changed after its initial production to keep its value, we see that software has some very unique properties that almost no other material or product in the world has.

If we additionally take into account that most humans are basically unable to deal with consequences that only unfold over time, especially if they are not immediately affected 2, it is no surprise that non-IT people have a hard time pondering the consequences of their acting. To be fair: most IT people are also pretty bad at pondering the consequences of their acting over time.

Therefore, if we want to change the situation, we first need to support non-IT people in understanding what their demands mean in the invisible world of software, why every change has intensifying ripple effects long into the future, and why the extreme malleability of software is a double-edged sword that can easily cut you badly if you handle it the wrong way.

The trust trap

You may counter that the non-IT people will not listen to you. That might happen. The willingness to listen is a matter of trust:

  • If you trust a person, you are willing to listen to that person and to ponder their input.
  • If you do not trust that person, you are not willing to listen.

IT has lost a lot of trust in the last decades for many reasons. Again, we could fill several blog posts with an analysis of those reasons (and again, maybe I will sometime in the future).

But no matter whether you are currently trusted by the non-IT people in your company or not: if you do not help them understand the consequences of their demands in the invisible, ever-changing world of software, the situation cannot change. A lack of trust makes your way longer, harder and sometimes more frustrating, but the task remains the same.

Summing up

Software is invisible. That deprives humans – especially those who are not familiar with software development – of a reference system that they can use to ponder the reasonableness of their demands.

Additionally, software is extremely malleable, which allows you to bend it in almost any imaginable way – including all the bad ways. Often it is hard to anticipate if and how much today's – maybe hurried – design will backfire in the future.

This leads to the third unique property of software: software solutions need to be changed continually after their initial production, until the end of their lifecycle, to keep their value.

These three properties make it extremely hard for non-IT people to understand the consequences of their demands regarding the solution. Therefore, it is our task to explain the invisible, extremely malleable and ever-changing material “software” better to all the people who are not so familiar with it.

As long as we fail to do this, all other discussions are mostly pointless in my experience. As long as people are not able to understand the consequences of their acting, they are not able to question and change their behavior.

To be honest: Even if you are able to help them understand, it still is not a guarantee that things will change. But the likelihood is a lot higher.

Based on what I see in my daily work, we still have a lot of homework to do regarding this task.


  1. In case you do not want to download the paper from ResearchGate: if you search for the paper on the Internet, you will find various download options. It is also included in the essay collection The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition) by Fred Brooks, which is still available. ↩︎

  2. This is a natural trait of humans. For example, if you tell a person that a certain behavior will make them ill in a few hours, most likely the person will change their behavior. If you tell the same person instead that the same behavior will most likely kill them in a few years, they may or may not change their behavior. If you tell the same person instead that the same behavior will render the planet uninhabitable in several years, most likely the person will not change their behavior. This is not ill will. The reason is that humans are really bad at understanding the long-term consequences of their acting. While this was a very useful trait for surviving in the Stone Age, when surviving the day usually was the predominant requirement, today this trait often misleads us. (As it could be misunderstood: the example did not contain a hidden request to change any of your behaviors. It was meant solely for illustration purposes.) ↩︎