We are our own worst enemies

How we make our lives unnecessarily harder

Uwe Friedrichsen



This post is about two observations I repeatedly make when looking around in our community. They are loosely connected by the topic of AI, which makes both observations more visible. The observations are about how we make our lives – IMO unnecessarily – harder.

Let us start with the first observation.

The perceived feeling of acceleration

My first observation in a nutshell:

The world does not accelerate that fast. We decided to accelerate.

Basically, everybody claims the world is accelerating all the time. Some say it to spread fear (“If you do not run faster, you will be left behind”). Some say it with a hint of exhaustion. Some say it to discourage closer examination of whatever they say next (“The world is moving faster, and therefore …”). And very often, it is used as a hollow opener for some kind of shallow marketing message or sales pitch. You may know the situation: someone starts with “The world is moving faster all the time”, and everyone in the room nods silently in approval.

I took a step back and asked myself: If the world actually accelerates, what are the reasons for it?

The surprising answer I found: Most of the acceleration is our own fault.

Folks in IT, especially, are major culprits when it comes to acceleration – acceleration that is not actually required. However, it is not necessarily the fault of a few individuals (even if some individuals have a bigger share in it than others) but rather a systemic effect.

But before I dive deeper into it, let me first distinguish what I will call here essential acceleration and accidental acceleration. I use the two terms following the distinction between essential complexity and accidental complexity (for a detailed explanation of those terms, see, e.g., my blog post “Simplify! - Part 3”). This means:

  • Essential acceleration is the inevitable part of acceleration we experience.
  • Accidental acceleration is everything else.

Essential acceleration

We have experienced an acceleration in scientific progress for several hundred years now. A big driver of this exponential progress, as it is often called, is most likely the ability to reproduce written text at a very low price and, as a consequence, to make it available to many more people than before. Before the printing press, only a few people knew how to read and write. This meant that most knowledge was passed on orally. Quite a lot of knowledge got lost this way and needed to be rediscovered time and again – if it was rediscovered at all. This kept the speed of progress at a relatively low level.

The invention of the printing press changed the situation massively – not immediately, but over time. As more and more knowledge could be preserved and spread in a cost-efficient way, a lot less knowledge got lost. Additionally, the knowledge reached many more people who were able to use it and build on it. As a side effect, the invention of the printing press also led to many more people who were able to read, and then to write, over time, resulting in many more people who were able to share their insights.

Each generation was able to build on the preserved, collected knowledge of their ancestors. This is what we usually mean if we say “Standing on the shoulders of giants”. We have the collected memory of the past at our fingertips when we ponder our next steps. This – plus a few supporting factors – led to an overall exponential acceleration of progress.

This is what I call essential acceleration. It is the acceleration of progress, primarily driven by the preservation and widespread availability of knowledge. Essential acceleration is already a lot, and it is not clear if we as humanity are set up to deal with our current speed of progress. Personally, I have the impression we are not, but I leave an authoritative answer to people who are more qualified to answer this question than I am.

Accidental acceleration

Let us move on to accidental acceleration, i.e., the acceleration we put on top of essential acceleration in IT – the “extra” that really exhausts us. In IT especially, we have created the habit of running all the time, just for the sake of running, creating an artificial feeling of extreme acceleration.

It started roughly 30 years ago. IT companies promised lots of growth during the dot-com bubble, and investors started to invest in them at a bigger scale. The dot-com bubble burst, but there were survivors that promised above-average growth. Investors loved them. Their shares were considered growth stocks, leading to much higher valuations than the shares of companies that were solid but did not promise high future growth.

The tech companies loved their high valuations because they gave them a lot of options they would not have had otherwise. For example, they could hire and retain top talent by including option packages in their compensation. The high valuations also allowed for expensive acquisitions and many other things they would not have been able to do with a “mature” stock valuation. As a consequence, it became increasingly important for them to preserve their growth valuations, as their whole strategy was built on top of their high stock valuation.

A single “innovation” (whatever that actually means) ensured the growth valuation only for a limited period. Therefore, it was important for the big tech companies to come up with “innovations” regularly to retain their valuation. Thus, they regularly came up with “innovations”.

Additionally, interest rates were extremely low for an exceptionally long period (roughly between 2008 and 2023, give or take a year) which made tech companies extremely attractive for investors (simply because many other forms of investment were less attractive due to the low interest rates) – at least as long as major growth was to be expected.

However, this symbiotic relationship between tech companies and investors is fragile and needs to be fostered constantly. It is especially important for a tech company to convince analysts and investors that it is still “innovative”, i.e., that it still promises above-average growth. This is done by sending appropriate messages over all available channels, advertising the “disruptiveness” of the latest “innovation” and warning that everyone who does not adopt it immediately will be “left behind”. You know the messages. You have heard them often enough over the course of the last 20 or 30 years. 1

The reinforcers of accidental acceleration

But it does not stop there. There were more parties who wanted their slice of the cake:

  • The consulting companies jumped on the bandwagon. They wanted to let their customers know that they are, of course, “experts” in the latest hot and trendy technology. And so they amplified the message.
  • Other product vendors jumped on the bandwagon, not willing to leave the lion’s share to the big tech companies. And so they amplified the message.
  • The media outlets and conference organizers also wanted their slice of the cake. Hence, they made sure that they also featured the new hot stuff prominently. And so they amplified the message.
  • Additionally, there are the usual free riders who are mainly interested in fame, influence, and ultimately wealth, motivated by the “Forbes 30 Under 30” list. And which topic is better suited as a platform for self-presentation than the current hot and trendy topic? And so they amplified the message.

As a consequence, the marketing messages of the tech companies were repeated and amplified hundreds and thousands of times by other parties who wanted their slice of the cake. The sheer volume of the messages gave everyone the impression that this “innovation” must actually be “disruptive” and “groundbreaking”. If so many people tell you how huge and important a topic is, it must be. Right? And so, most people ran after the topics – decision makers as much as the people who had to deal with the consequences – typically driven by FOMO (fear of missing out).

The interesting part is that none of the players sent their messages because of the technology and its potential benefits for the IT world and beyond. Actually, they were all just interested in maximizing their profits. The technology itself did not matter. It was just a means to an end. It only needed to be hot and trendy. 2

Oversupply as an aggravating factor

But it still does not stop there. Due to the heavy investments in tech, especially IT, between 2008 and 2023, IT grew above average during that period. A lot of money was to be made, but this also led to overcapacities: more consultants than needed. More products than needed. More magazines. More conferences. More media outlets. More of everything. And everyone wanted their slice of the seemingly unlimited cake. A huge surplus of supply. But only limited demand, only a limited number of customers who could buy all that stuff.

Okay, the ongoing digital transformation increased IT demand, i.e., companies invested more in IT than they did a few years earlier. Nevertheless, the demand was not big enough for this disproportionately grown supply … unless artificial demand was created. All the parties involved needed the story of continuous, accelerating “innovation” in IT to keep their businesses running.

A system feeding (and eating) itself

And so we accelerated and accelerated in IT – not because anyone outside IT needed it, but because we needed it. We created a system that is fueled by increasingly fast “innovation”, i.e., new stuff that we can sell to our customers. We also perfected the FOMO and FUD (fear, uncertainty, doubt) messages over time. We perfected which messages to send, when, and how often, to maximize our likelihood of success. We improved the system so much that it started to eat its creators.

We are caught in a system of knee-jerk action and response patterns, and we have no idea anymore how to escape it – or at least slow it down to a sustainable pace. To cite a famous German ballad:

“Die ich rief, die Geister, werd ich nun nicht los.” (“Spirits that I’ve summoned / My commands ignore.”)3

To sum up: There is some level of essential acceleration. However, the perceived ever-increasing speed in IT that increasingly exhausts us is something we created and fuel ourselves. This is what I mean by:

We are our own worst enemies.

We are the ones who make our lives worse by reflexively running faster and faster, in directions that others set.

Making others feel incompetent

Let us move on to my second observation. Looking at the still ongoing AI hype, we can observe not only the artificial speed that exhausts everyone 4, but also a much darker pattern. The short version of it is:

We belittle, ridicule, and humiliate our peers. (Not everyone does it, but way too many do.)

If anybody says publicly that they did not get the results from using AI that are promised everywhere – by the companies, by the vendors, and especially by the free riders – they are immediately told: “You are doing it wrong!”, “This is a layer 8 problem.”, and the like.

The message sent is: “The technology is perfect. You are just too stupid to use it. Noob!”

It is not that we see it only occasionally. We see it over and over again. Whenever somebody writes that they did not get the results they expected, the first or second response is a knee-jerk “You are doing it wrong!” The responders have no idea what the person did, how much experience they have, or how much effort they put into it. And as it seems, they also do not care. They only care about hurling their insults at the writer.

Whenever I read such a comment, my first thought is:

“What the heck is wrong with you?”

Fifth-grade jerk behavior

This is stupid fifth-grade jerk behavior. You remember when we were in fifth grade, puberty kicking in, and we were so totally insecure about ourselves. We all were, and we tried to get through those confusing times halfway unscathed. But there were always those jerks who tried to distract from their own insecurities by belittling others, making fun of them, humiliating them in front of others. And those who were their targets felt worse than they already did, even more insecure, another scar left on their already fragile self-esteem.

Eventually, we got older, and while we slowly grew out of puberty, this kind of jerk behavior declined. Of course, some of the jerks never left puberty, confusing being “cool” with being a jerk. But most of the kids stopped being jerks.

In the end, telling someone you do not know, “You are doing it wrong!” is the same kind of fifth-grade jerk behavior: trying to look better by belittling others in public. This is simply a**hole behavior. 5

Toxic nerd culture

To be fair: This is not a new behavior that emerged with the AI hype. This behavior is widespread in IT. It is a very toxic part of the so-called “nerd culture”: people using every opportunity to make other people feel stupid if they do not know something or do things in a seemingly “uncool” way, like:

  • “What, you still use a mouse for your IDE? Only noobs use a mouse!”
  • “What, you still use this terminal emulator? Boomer!”
  • “What, you use the default key bindings? No wonder you are so slow!”
  • “What, you do not know how to use <X>? Gosh, how did you even get through hiring? RTFM!”

And so on – belittling, humiliating, insulting, void of any empathy.

I have observed this behavior in the IT community since I started my career more than 40 years ago. Based on my observations, the more “nerdy” a group of people considers itself, the worse the behavior.

I can only guess at the reasons behind it. My personal guess: Software development is not considered “cool” outside of software development, and deep, detailed knowledge of software development topics is rarely valued outside the developer community. Quite the opposite: non-IT people often make fun of software development experts, calling them “nerds” and worse. The counterreaction of some of the nerds is to treat everyone who is not an expert in their area of expertise with aggressive contempt – some kind of misdirected cry for appreciation. Again: just a guess, and definitely not an excuse.

Eventually, this behavior became an end in itself and part of the so-called “nerd culture”. It became such a “natural” part of it that most people in IT do not even realize how toxic their behavior is.

Fighting our peers

And now, we see it in full force in the context of AI. Whenever someone expresses doubt that AI flawlessly does the job, they are accused of being too dumb to use AI. They are not attacked by people from tech companies. They are not attacked by managers or business experts who expect developers to become 300% more productive by using AI. They are attacked by their own peers – by other (wannabe) developers.

Let that sink in for a moment. Here we sit as a software developer community. The AI companies preach, with billions of dollars in marketing budgets, that we will become obsolete in 6-12 months (a claim made for more than two years now, loudly renewed every six months). Our management and business departments fell for the promise that we would become 3x as productive with a $200/month Claude Code subscription and silently upped their throughput expectations. At the same time, we fight with immature, bleeding-edge AI tools that change every day and fall short of their promises. We need to apply lots of “arcane magic”, like extensive harness engineering and the like, to make the tools work halfway reliably. We do our best; we work our a**es off to integrate the new “magic of the day” into our work while trying to satisfy the increased output expectations.

And if we then dare to say that things do not seem to work as advertised? Then, our peers belittle, insult, and humiliate us in public. Not the “others”. Our peers. This is what I mean by:

We are our own worst enemies.

People close to burnout are publicly insulted and ridiculed by their peers, who do not care a bit how they feel, what they did, or who they are. Instead of working together to get through this deeply inhumane game that others set up for their own profit, we fight each other and try to push each other underwater, perfectly amplifying the perverted game others set up against us.

It is so sad to watch this kind of behavior.

Doing it better

Doing it better would be easy. Simply taking a little step back, thinking for a moment instead of just running after someone else’s goal, and adding a tiny bit of empathy to our interactions with our peers would make things a lot better. Instead of running after every “groundbreaking innovation of the day”, we could ask ourselves what we actually need and then ponder how AI can best support us in achieving those goals. And if someone struggles with their AI setup, instead of humiliating them, we could kindly ask whether we can support them, whether we can sit together and share our setups, because we may have had different experiences using AI. It would not be too hard. 6

To be clear: If we want a future as software developers that is worthwhile, only we can shape it. Neither the (few) profiteers of AI nor our employers nor anyone else will shape a future that is worthwhile for us. They are either busy rigging the game or caught in the rat race they set up themselves. It is we, and we only, who can shape a worthwhile future.

Or as Steve Yegge phrased it in his blog post “The AI vampire”:

“I don’t think there’s a damn thing we can do to stop the train. But we can certainly control the culture, since the culture is us.”

Maybe let us start by stopping being our own worst enemies …


  1. This is an extremely simplified version of what happens, very coarse-grained, just the outline needed to understand the underlying core driver. Way more detailed versions can be found at Cory Doctorow’s blog post “The Reverse Centaur’s Guide to Criticizing AI” and Ed Zitron’s (very long) post “The Enshittifinancial Crisis”, both containing lots of references to supporting materials. ↩︎

  2. This does not mean that the respective technologies are irrelevant or useless. But the potential intrinsic value of a technology is not the reason why the players start sending their messages about how “disruptive” or “groundbreaking” the technology is. The reasons for sending these messages are much more mundane. ↩︎

  3. “Der Zauberlehrling” by Johann Wolfgang von Goethe, in English better known as “The Sorcerer’s Apprentice”. ↩︎

  4. In the end, the artificial speed of “AI progress” only serves the big tech companies and major AI startups, because if we found the time to think things through, we might come to conclusions that would not serve their goals, like not making ourselves completely dependent on their offerings. Hence, they do all they can to make us run the rat race we created ourselves as fast as possible, so that we never find the time to take a step back and think. ↩︎

  5. If you strongly disagree, I am afraid you are one of the culprits. ↩︎

  6. Just to be clear: I do not blame AI. AI is a tool. It can be great, powerful, and magical in a good way if used well. It can drain us and destroy our well-being if used badly. Personally, I like the technology a lot and would love to see it used positively. I just highly dislike the destructive behaviors we can observe in the context of the AI hype. These behaviors have nothing to do with AI as a technology. AI is just the current vehicle being (ab)used. ↩︎