The microservices fallacy - Part 2

Debunking the scalability fallacy

Uwe Friedrichsen

This post discusses the widespread fallacy that microservices are needed to tackle scaling issues.

In the previous post we looked at the origins of microservices, how the hype started, and we listed the most widespread fallacies regarding microservices. In this post we take a closer look at the first fallacy – that microservices are needed for scalability.


One of the most commonly used justifications for microservices is scalability: “We need microservices to scale our application dynamically.”

If you ask the same people if a deployment monolith would not be sufficient for their scalability needs, you get a firm “No!”

These scalability discussions usually revolve around scenarios where a large number of customers accesses a company’s application – most of the time a web or mobile application.

Well, let us do the math:

  • A simple LAMP stack 1 on an average 5.000 EUR server 2 can – configured correctly – easily serve up to 6.000 requests simultaneously, probably even more if you use NGINX instead of Apache, because you then do not hit the thread limit.
  • Now let us assume that a request takes 200ms to complete at the 99th percentile 3 and that each user interacting with your offering sends a request every 10s on average. This means that with a single LAMP stack node you can serve up to 300.000 concurrent users, i.e., users that interact with your offering at the same time (6.000 * (10s / 200ms)).
  • If you get the session handling right (which is pretty simple), you can dynamically add and remove additional nodes behind a simple load balancer. With 10 LAMP stack nodes and a single load balancer, you can serve up to 3.000.000 concurrent users.
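The arithmetic behind these bullet points can be sketched in a few lines. The constants below are the assumptions from the text (6.000 simultaneous requests, 200ms at the 99th percentile, one request per user every 10s); nothing else is added:

```python
# Back-of-envelope capacity of a single LAMP stack node.
# All inputs are the assumptions from the bullet points above.
CONCURRENT_REQUESTS = 6_000  # requests one node can serve simultaneously
REQUEST_DURATION_S = 0.2     # 200ms per request at the 99th percentile
THINK_TIME_S = 10.0          # each user sends a request every 10s on average

# A "request slot" is freed as soon as a request completes, so within one
# think-time window each slot serves THINK_TIME_S / REQUEST_DURATION_S users.
users_per_node = int(CONCURRENT_REQUESTS * (THINK_TIME_S / REQUEST_DURATION_S))
print(users_per_node)  # 300000 concurrent users per node

NODES = 10
print(NODES * users_per_node)  # 3000000 concurrent users with 10 nodes
```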

Note that I wrote “concurrent users”, not “registered users” or the like. You can safely assume that no more than 5% of your registered users use your application at the same time (typical numbers are quite a bit lower in my experience).

All the numbers are relatively conservative. In practice they should be even better 4. This means (phrased a bit provocatively):

With 15 LAMP stack servers you can easily serve the whole of Germany, from toddler to dodderer (>80 million users).
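This provocative claim follows directly from the numbers above: 300.000 concurrent users per node and the assumption that at most 5% of registered users are active at the same time. A quick check of the arithmetic:

```python
# Extend the estimate: how many registered users can 15 nodes carry?
# Inputs are the numbers derived and assumed earlier in the text.
USERS_PER_NODE = 300_000   # concurrent users one LAMP stack node can serve
NODES = 15
CONCURRENCY_RATIO = 0.05   # at most 5% of registered users online at once

concurrent_capacity = NODES * USERS_PER_NODE            # 4.500.000
registered_capacity = int(concurrent_capacity / CONCURRENCY_RATIO)
print(registered_capacity)  # 90000000 -> comfortably above 80 million
```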

To illustrate this statement with a little real-life example from my project past: In the early 2000s my company developed and maintained the self-service portal of a big German telecommunications company. The registered user base was a bit less than 10.000.000 users. At peak times, up to 100.000 concurrent users were logged in. All those users were handled by a single BEA WebLogic instance (with a cold standby), running on average hardware from around 2000, backed by an Oracle RAC cluster that also served the call center application, all POS systems and a lot more.

And – surprise – we never ran into any serious performance problems. If we encountered a problem, it was usually due to some careless development, and after fixing the programmatic shortcoming the problem was gone.

Coming back to my sample calculation: I am sure that 99,9% of all applications with scalability demands could easily be handled by a single LAMP stack server instance – or a few of them – behind a regular load balancer. A bit of careful application design and development, a few commodity servers and a simple load balancer will do the trick for all their scalability demands.


You do not need microservices to satisfy regular enterprise scalability demands.

Unless you are a hyperscaler, an online ad broker or someone else from the 0,01%, there are much simpler ways to satisfy your scalability demands – because microservices are anything but simple.

This brings us to the next fallacy – that solutions become simpler with microservices – which I will discuss in the next blog post. Stay tuned …

  1. I do not care if you use PHP or a different programming language. The same applies to MySQL, etc. The point is that a LAMP stack means a very simple, well-understood and battle-tested, rather monolithic runtime architecture. ↩︎

  2. When I worked at a little startup, around 2012 we got servers with 24 cores, 100 GB RAM, 30 TB HDD and multiple 1 Gbit/s network interfaces for about 5.000 EUR. This is quite beefy hardware with respect to the discussed use cases. I have not checked the prices lately, but today you can expect to get quite a bit more for about 5.000 EUR. ↩︎

  3. Based on my experience, this is quite straightforward to achieve with a bit of careful design and programming. ↩︎

  4. At least if you know your stuff. If you have no idea how to set up and configure a Linux OS, how to configure and run a RDBMS, how to design a database schema and indexes fitting your access patterns, how to design your application, how to select the right tech stack, how to get session handling right, how to profile your application and find bottlenecks, if things are slow against expectations, and so on, the numbers used in the calculation might be far beyond your abilities. But then again: If you do not know your stuff, the basics of good software development and operations for this simple setting, what makes you think that it would become any better with microservices where things are at least an order of magnitude more complex? ↩︎