The microservices fallacy - Part 1
This is a blog series discussing the fallacies of microservices. It complements a talk I have given.
In this series, I will start with a little motivation for why I think we need to revisit microservices critically. Then I will have a look at the origins of microservices (which already reveals a lot about the common misconceptions).
After that I will go through the most widespread fallacies one by one, followed by actual reasons to use microservices. Based on that, I will provide a few recommendations when to use microservices, when not to use them and which alternative approaches to use instead. Finally, I will complement the series with a few general considerations.
As my recent posts tended to become too long, at least for the casual reader, I will try to discipline myself more in this series and keep the individual posts shorter – less to read at once.
The whole series consists of the following posts:
- The origins and fallacies of microservices (this post)
- Debunking the scalability fallacy
- Debunking the simplicity fallacy
- Debunking the reusability and autonomy fallacies
- Debunking the better design fallacy
- Debunking the technology migration fallacy
- Actual reasons for microservices
- When (not) to choose microservices
- Microliths (link will follow when post is released)
- General complementing recommendations (link will follow when post is released)
If you listen to discussions about microservices, they often feel like: Microservices are it! It’s how modern IT systems are built! Microservices reduce complexity! Microservices are needed for scalability! Everybody does Microservices! We want them! We need them!
Microservices have been a big hype in recent years, and they have meanwhile reached the mainstream. Often they are sold as the only possible way to build software these days.
But are they really the panacea? Do they really solve all the problems magically, as many people claim (or hope)? Are there no drawbacks? Are there no situations where microservices are a bad fit?
Based on my experience, I think it is about time to critically revisit one of the biggest hypes of recent years.
There is not a single clear origin that microservices can be traced back to. Several influencing parties led to the famous blog post about microservices by James Lewis and Martin Fowler in 2014, which became (and probably still is) the most widely accepted definition of “microservices” – and also the one I use here.
For sure, a big influencing factor was some of the popular hyperscalers. Amazon, Netflix and a few others faced some novel and unique problems with their IT landscapes:
- They needed to scale their IT services to a size nobody else ever did. The known and popular IT architectures did not work at that scale.
- At the same time they needed to move extremely fast in their highly competitive markets. As their whole business models were mainly built on IT, this meant that they needed a very fast IT value chain – down to intraday cycle times from a new idea until their customers experienced it and could give them feedback.
- Finally, many of their customers relied on their services. This meant that compromising the quality of their IT systems for shorter cycle times was not an option.
They put a lot of effort into mastering these challenges on all levels. They rethought their organization. They rethought their processes and collaboration patterns. And they rethought their IT: What kind of architecture is able to support the aforementioned challenges?
The result of their endeavors was microservices: a service-based architecture style with a few properties that differed from the usual traditional SOA implementations. These differing properties are nicely described in the blog post by James Lewis and Martin Fowler I mentioned before.
Along with their implementation efforts, they also learned that microservices are not a free lunch:
- Operations became at least an order of magnitude harder. After all, microservices mean highly distributed applications, i.e., all the imponderabilities of distributed systems strike at runtime. As a consequence, they needed to put lots of effort into creating the right infrastructure, implementing observability and a lot more.
- Software engineers needed to be trained. Writing a microservices-based application is very different from writing a traditional enterprise application. You need to take the imponderabilities of distributed systems into account when writing your applications.
- The systems needed to be designed differently. Traditional design “best practices” that were prevalent in enterprise software contexts did not work well for designing distributed microservices applications. They had to establish new design practices. Luckily, their organization being aligned with market capabilities helped them a bit with that. But they still had a steep learning curve to master.
- And so on …
Yet, after all, the advantages that microservices provided in their specific situations outweighed the new challenges that microservices introduced.
Eventually, the hyperscalers that adopted microservices started to talk about what they did. They went to conferences. They wrote articles in IT journals. They wrote in their developer blogs. Etc.
They were proud of what they had achieved. They wanted to influence the market to make sure their decisions would not lead them into a dead end. They wanted to attract talented people. The reasons were manifold. So they talked about what they did.
And other people, often coming from more traditional companies, listened to what they said. What they heard, filtered through their own enterprise challenges, were messages like:
- Microservices are so scalable. (That will solve our scaling issues.)
- A single microservice is easy to understand. (Not like the big monolithic applications we have.)
- The cool companies are doing microservices. (We also want to be cool.)
- It sounds exciting from a software engineering perspective. (Not like the boring, old-fashioned stuff we need to grapple with.)
And so the hype started … We want microservices! Death to all the monoliths!
This origins story of microservices is of course not complete. There were additional drivers. E.g., Fred George had worked with some flavor of microservices for a long time.¹ And there were more influencers.
But the biggest influencers of the microservices hype were probably the hyperscalers and the stories they told about their microservices endeavors. Microservices became a big hype. After a while, the hype detached itself from the original drivers that led to this architectural style, until microservices eventually became an end in themselves: “We want microservices, because … microservices!”
In conjunction with this hype, several justification schemes for the introduction of microservices evolved. The most widespread ones are:
- Microservices are needed for scalability.
- Microservices are simpler than monoliths.
- Microservices improve reusability (and thus pay for themselves after a short while).
- Microservices improve team autonomy.
- Microservices lead to better design.
- Microservices make changing technologies easier.
Unfortunately, all of them are fallacies. Either they are completely wrong or at least you do not need microservices (and the price you pay for them) to achieve the promised properties.
We will debunk these fallacies one by one in the next posts. In the next post, I will start with the first and very widespread fallacy: that you need microservices for scalability reasons. Stay tuned …
¹ Fred George used (and still uses) microservices in a quite “radical” way – at least from a traditional enterprise’s point of view. He also typically works in business domains that require constant experimentation, which also feels strange to the average enterprise. For these reasons, the microservices ideas of Fred George probably did not become as influential as those of the hyperscalers. ↩︎