The microservices fallacy - Part 6

Debunking the technology migration fallacy

Uwe Friedrichsen

8 minute read



This post discusses the fallacy that microservices make technology changes easier.

In the previous post we discussed the fallacy that microservices lead to better solution design. In this post we look at the last fallacy from the list – that microservices make technology changes easier.

Easier change of technology

The last fallacy I will discuss is that microservices make it easier to change technologies and to explore new technologies.

Well, this one is partially true. It is true that smaller runtime units make it easier to rewrite a single unit in a new technology. Still, the Principle of technology migration effort sameness that I discussed in an earlier post holds true:

It takes the same effort to migrate an existing system to a new technology as it took to develop the system up to its current state.

While it might be quite easy to migrate a single microservice to a new technology, the effort for migrating a whole application that is composed of many microservices will be as expensive as migrating a corresponding monolith – which in both cases usually is a lot more expensive than expected.

The advantage is that you can stretch out the migration better, as you can migrate the application service by service. This is indeed a big advantage: you do not have a long standstill while the whole application is migrated at once, and you can balance the migration and the required ongoing functional evolution of the application much better. This way, the decomposition of the application into services nicely supports risk management in this context.

Additionally, you can test new technologies quite simply by rewriting a single service using the new technology. Yet, you usually do not want to run lots of different technologies at the same time because this increases complexity a lot 1. With this in mind, there are also options for exploring new technologies in a monolithic context.

Still, my key point is that monoliths are not the root cause of the whole technology change discussion.

Technology update pain

The core question is: Why are we so focused on, often even obsessed with, the question of how we can update technology better?

If we ask ourselves honestly how often we actually need to update technology compared to other types of application changes, we have to admit that technology updates are needed ridiculously seldom.

So why the obsession, to the point that we are even willing to introduce an architectural style (microservices) that is orders of magnitude more complex than the style we already have (monoliths)?

From what I have seen, it is a question of pain – pain we have experienced so often. Even if technology updates are needed comparatively seldom, we are surprisingly often confronted with outdated technology and have a hard time updating or replacing it.

Even if everybody involved knows that updating or replacing the given technology is urgently needed, it seems impossible. And the longer we are stuck with an outdated technology, the more painful it becomes.

This raises the next question: Why do we experience this kind of pain so often? Why are we stuck with outdated technology, and why are updates so hard (or nearly impossible)?

Basically, I have seen two drivers that impede technology updates:

  • Broken IT governance processes
  • Application dependencies on the OS (operating system) level

Neither of them has anything to do with monoliths as such.

Broken IT governance processes

Let us start with the broken IT governance processes. My observation is that in most companies the IT governance processes are deeply flawed. One of the main reasons is that software-based applications are still confused with physical goods.

As a consequence, the fact that software needs to be changed continuously in order not to lose its value is neglected, because physical goods typically are not (or only very rarely) changed after their initial release.

Hence, only the production costs are taken into account. The long-term costs – the costs of changing and evolving the code we just created – are neglected. Therefore, the focus always lies solely on implementing the current requirement as cheaply as possible 2, resulting in a constantly deteriorating code base with more and more hacks, workarounds, clotted code, and so on (for a more detailed discussion of this evolution, please see, e.g., here, here, here and here).

In short: If you only focus on the costs of “producing” code and neglect the long-term needs of software, the result is an ever-growing “big ball of mud” until you eventually hit the too-complex-system trap. Your system has grown so big over time and is so messed up, i.e., so tightly coupled internally as well as externally, that:

  • even simple changes and updates have become an incalculable risk and thus are shirked.
  • we eventually are stuck with a hopelessly outdated technology.
  • it becomes nearly impossible to replace it within reasonable time, cost and risk boundaries.

Many of us have experienced the tremendous pain of such situations before. That is why we are so focused on technology updates even though we actually need them so seldom.

So, in the end the whole discussion that microservices make it easier to change technology is just an attempt to evade the pain caused by deeply flawed IT governance processes.

We try to mitigate the symptom instead of fixing the root cause.

Application dependencies on the OS level

The second impediment to easy technology updates that I have observed is application dependencies that have wandered into the OS level.

Many years ago, the job of an OS was simply to manage the available resources – compute, storage, network, etc. From the viewpoint of an application, the OS was a resource scheduler, making sure it got the resources it needed. The OS coordinated the needs of multiple applications competing for the same resources.

Yet, over time more and more parts that multiple applications shared were moved to the OS level. It started with shared libraries like libc. Gradually, more and more shared artifacts were no longer deployed with the application but provided at the OS level: from shared libraries via language runtimes to complete application servers and more.

Eventually, it became virtually impossible to create an application that did not have an OS dependency, i.e., that did not need an artifact provided at the OS level to be able to run 3. This created an unhealthy tight coupling between applications and the artifacts installed at the OS level:

  • You cannot update the technology on the OS level unless all applications are ready to deal with it.
  • You cannot update the technology on the application level unless the OS provides the required artifacts.

Often this resulted in a sort of update deadlock. The fact that the OS (including the artifacts installed there) typically “belonged” to the operations department while the applications “belonged” to the development department did not make the situation any easier. Frustration was high on both sides. Everybody wanted and needed to update, but the tight coupling made it a complete nightmare.

Until container technology became popular.

I think the actual superpower of containers is that they readjusted the separation line between application and OS back to where it was in the beginning: The application with all its runtime dependencies is bundled in the container while the OS (or actually the container scheduler) coordinates access to compute, storage, network and other resources for a set of applications.
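
To make this tangible, here is a minimal, purely illustrative sketch of a container image definition (the service name, paths and runtime version are made up) that bundles the language runtime with the application instead of expecting it to be pre-installed at the OS level:

  # Illustrative sketch (hypothetical service, paths and versions):
  # the language runtime travels with the application inside the image
  # instead of being provided at the OS level.
  FROM eclipse-temurin:17-jre
  COPY build/libs/order-service.jar /app/order-service.jar
  ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]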

The decades-long, paralyzing dependencies between applications and the OS level are finally gone. Yay!

Container schedulers these days (with Kubernetes currently being the most widespread one) still feel quite a bit different from the (mostly invisible) OS schedulers we know. But most likely they eventually will dissolve into the OS level, replacing or augmenting the traditional schedulers.

Especially in the cloud, they will eventually disperse behind higher-level abstractions. AWS Fargate (while not perfect at the moment) shows the direction, feeling a bit more like a mature, “just working” OS scheduler than the comparatively low-level abstractions Kubernetes currently offers (see this post for a more detailed discussion of this topic).

The key point is that with the rise of containers, the application-OS dependency that impeded simple technology updates for so many years is gone at the technical level.

If we need a technology update on the application level, we just build a new container image with the new technology, test it and roll it out. Nothing else needs to be changed.
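
As a sketch of what such an update can look like (again with made-up image names and versions), moving the service to a newer language runtime boils down to changing the base image of the image definition above and rebuilding – everything at the OS level stays untouched:

  # Illustrative sketch (hypothetical versions): the technology update is
  # contained entirely in the image definition.
  # Before: FROM eclipse-temurin:17-jre
  FROM eclipse-temurin:21-jre
  COPY build/libs/order-service.jar /app/order-service.jar
  ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]

The new image is then tested and rolled out like any other release; neither the OS level nor the container scheduler needs to know anything about the change.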

While it is of course possible to put microservices in containers, containers are not limited to microservices. From a technical point of view, containers are all you need for an easy change of technology. Microservices are not required.

Moving on

This was the discussion of the last fallacy from the list. As with all the other fallacies there would be a lot more to say, but I will leave it here. In the next post, I will continue with actual reasons that justify the use of microservices.

As we have Christmas next week and New Year the week after, I think most likely you will have other more enjoyable and exciting things to do than to read blog posts about microservices … ;)

Thus, I will pause for the next two weeks and continue the series in three weeks. I wish you all great holidays and … stay tuned … ;)


  1. You can partially mitigate the added runtime complexity by using containers. But even with this uniform appearance of the runtime units, a lot of added runtime complexity remains – plus the added complexity at development, build, test and (partially) deployment time, of course. ↩︎

  2. I deliberately used the term “cheaply” and not “cost-efficiently”. ↩︎

  3. Note that “OS level” here does not mean just the OS itself. It includes all the artifacts you install at the OS level independently of a given application. E.g., a JRE, a web server, a database server or a message bus would also be artifacts provided at the “OS level” – everything an application expects to be “there” to be able to run and do its job. ↩︎