Thoughts on AI and software development - Part 5

Hedging our options and moving on

Uwe Friedrichsen

25 minute read

(Humanized) animal sculptures

In the previous post, we completed our analysis of the projection Steve made by looking at some unresolved side effects and questions that would come with such a future.

In this final post of the series, we will look at how we can hedge our options with such a scenario possibly on the horizon and maybe tweak it a bit in our favor. Finally, I will wrap up with a few thoughts on how to deal with such sometimes grim and discomforting developments.

But before we start looking into our options, let us briefly recapitulate what we have discussed so far from a slightly different point of view.

The convenience trap

If we take a step back, look at IT and ask ourselves what we need, it becomes obvious that GenAI and AI-based software development are just another step in the wrong direction. The relevant questions to improve software were laid out more than half a century ago soon after the beginning of the software crisis: How do we achieve loose coupling and high cohesion? How can we ensure information hiding? How can we separate concerns best? The foundations of good design. 1
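
To make these abstract questions a bit more tangible, here is a minimal sketch (an illustrative example of my own, in Python, not taken from any design textbook). It shows the same tiny billing feature twice: first with its internals exposed to every caller, then with information hiding and separation of concerns applied.

```python
# Version 1: no information hiding. Callers reach into raw data
# structures, so every change to the order format ripples through
# the entire code base (tight coupling).
def total_leaky(order):
    total = 0.0
    for item in order["items"]:            # internal structure exposed
        total += item["price"] * item["qty"]
    if order["customer"]["vip"]:           # billing knows customer internals
        total *= 0.9
    return total


# Version 2: information hiding and separation of concerns. Each class
# owns its data and exposes a small, stable interface. Callers are
# coupled only to behavior (methods), not to representation (dict keys).
class Customer:
    def __init__(self, vip: bool):
        self._vip = vip                    # hidden representation

    def discount_factor(self) -> float:
        return 0.9 if self._vip else 1.0


class Order:
    def __init__(self, customer: Customer):
        self._customer = customer
        self._items: list[tuple[float, int]] = []

    def add_item(self, price: float, qty: int) -> None:
        self._items.append((price, qty))

    def total(self) -> float:              # the only thing callers depend on
        subtotal = sum(price * qty for price, qty in self._items)
        return subtotal * self._customer.discount_factor()


order = Order(Customer(vip=True))
order.add_item(price=100.0, qty=2)
print(order.total())  # 180.0, without any caller seeing the internals
```

The point of the sketch is not object orientation per se. The point is that in the second version a change to the hidden representation no longer ripples through all callers: loose coupling and information hiding in their most basic form.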

We soon figured out that these were hard questions without easy answers. That is why the IT community only too willingly turned their eyes from these questions to the next shiny technology. So shiny! So much easier than pondering hard questions! And the profiteers of the technology, the vendors, the investors, the consultants, always sold the technology as the solution for all existing problems. And (almost) everyone jumped on it.

Of course, the technology never solved the existing problems. Sometimes, it improved convenience. Usually, choosing the technology was not a risk because almost everybody did it. But it did not solve the existing problems. In the end, it only added another layer of complexity. But as soon as the excitement faded and people started to realize that their problems still persisted, the next technology appeared on the horizon, promising to solve all our problems and distracting us again from the actually important questions.

This way, we created increasingly complex systems and system landscapes. Some complexity was needed as the problems we solved with IT became more demanding. But most of the complexity we introduced was accidental complexity, complexity not needed to solve the given (business) problem. Never solving the original questions of good software design, we piled up more and more accidental complexity. Sometimes, we incidentally also raised the abstraction level of solution design. But each such incidental step forward came with a huge pile of accidental complexity that made our lives harder.

This led to extremely complex system landscapes, almost impossible to understand, extremely hard to maintain and cumbersome to change. Therefore, we would urgently need to ponder how to simplify those system landscapes. We would also urgently need to ponder how to make IT more resilient to respond to the challenges of the 21st century. We would need to ponder sustainability in all its dimensions – ecological, economic, human and social as well as technical sustainability – because we depend on properly working IT these days. And we would need to ponder the effectiveness of our actions to stop piling up more and more accidental complexity and to reduce the ever-increasing stress resulting from it for all people involved and affected.

That is what we urgently would need to do.

However, our world is not driven by needs, but by wants. We want instant gratification, immediate success. We want convenience without effort. We want risk avoidance, the guarantee of success. This is what drives (most of) us – in private and in business life. This is what drives IT. Therefore, we do not work on tackling the needs. But we feel that we are stuck in an ever-growing mess, that the way we always did things does not really work, that it makes things worse.

Thus, based on our wants we look for an easy way out.

The easy way out

Enter GenAI!

Hello, AI-agent based software development!

The next shiny new technology that promises to solve all the problems we created ourselves while only following our wants, neglecting the needs. Managers, always looking for the least risky way. Software engineers, always more attracted by cool new toys that promise to make their work more convenient. Business departments, not understanding software and always pressing for more features instead of questioning (and thus measuring) the business value of their ever-increasing demands. Rarely anyone looking for ways to solve our underlying, actual problems. Well, a few do care, but most often people only nod approvingly if someone talks about the underlying problems and then continue doing what they always did.

A fertile ground for GenAI and the promise of AI-agent based software development. Software developers whose work becomes more convenient. A way to evade the ever-increasing pressure to produce code faster. IT managers seeing an easy way out of the increasing pressure from the other departments.

And the best thing: All this without reflecting upon the question whether our problem really is that we do not produce code fast enough. Reflecting on this question could lead to some uncomfortable insights that would suggest a need for change. And as change is hard, let us rather take the GenAI route. Less risky. Less effort. Less need to ponder hard questions.

With that, we can finally turn our software development departments into the assembly lines the advocates of the industrialization of IT have been dreaming of for such a long time – of course neglecting that software development is not assembly but design. The assembly part of IT has been automated for more than 70 years by now.

But let us not split hairs now! AI will solve all our problems for good! The vendors promised it and we believe it because this time it is different than all the other times before: These agents are intelligent!

The human illusion

Depending on the point of view, this is the biggest trick of the vendors or the biggest misconception of the users: The illusion that LLMs are intelligent in a human-like way.

By far the biggest achievement of LLMs is that they pushed the human-computer interface (HCI) quite a lot from “computer” towards “human”. We are (largely) able to communicate with an LLM as we would communicate with another human: Normal sentences. No trigger words or strict word order required. Back references possible. And so on. Like we communicate from human to human.

To be clear: This really is an impressive achievement of LLMs and probably also an important one: While computers were able to solve increasingly complex problems, the HCI stagnated for a long time. With the rise of mobile phones, it even took a step backwards: While we were able to use 10 fingers to interact with IT when using a desktop computer (or a notebook), user interaction was reduced to one or two fingers on mobile phones – to interact with machines that are several million times as powerful as the mainframes of the past. We definitely need better HCIs that offer more bandwidth than one or two fingers and that are more aligned with the capabilities of humans instead of forcing humans to adapt to the machines. LLMs are a big leap in that direction.

However, while this is a big leap forward regarding the HCI, it has a huge side effect: We tend to anthropomorphize LLMs. What a word: “anthropomorphize”. I admit, I had to look it up when I first heard it (and I still have to look it up to spell it correctly). It means that we – usually unconsciously – attribute human-like properties to something non-human, in our context to LLMs: If it talks like a human, it must be human-like. It is sort of a hard-wired bias in our brains.

This bias leads to further unconscious assumptions: If it is human-like, it also must be intelligent like a human. It must be able to do work like a human – only faster, and never getting tired, and knowing a lot more than a single human does because it also is a machine. We involuntarily build an ideal human-machine in our minds that combines the best abilities of humans with the best properties of traditional computer programs – all based on the misconception that LLMs are human-like just because they provide an advanced HCI.

Hence, let me state it in all clarity:

LLMs (and everything based on them) are neither magic nor human-like.

They are also not intelligent in the way a human is. If they are intelligent, it is by different standards than human intelligence. This is very important to understand. There is no interchangeability between an AI-based agent and a human. They are different. They act differently. They have different pros and cons. Even if we leave out the social aspect of the debate, agents are not a drop-in replacement for humans – at least not unless we have dehumanized work so much that they are. 2

The herd prevails

But then again, following this train of thought is not convenient. It might even bear risks because we could stop unthinkingly repeating the messages we are flooded with by the media, by the profiteers of GenAI and their fanboys (and -girls). We could deviate from the commonly accepted path the herd follows. And so we pause. We hesitate. We weigh the thoughts against the risks associated with them. And usually, we then wipe all these inconvenient thoughts aside and follow the path of least risk, of least resistance and of (seemingly) highest convenience, carefully laid out for us by those who will make the most money if we follow it. As basically always in human history, the herd prevails.

Raising the lower bar

For the sake of fairness: Even if agentic software development is not what we need, we have to admit that LLMs and agent-based software development do not necessarily make things worse. While AI-based software development may not raise the upper bar of software quality, it for sure raises the lower bar.

We are surrounded by crappy code from poorly educated software developers who blindly submit to the “implement features as fast as you can” demands without ever pondering the consequences of their actions, who just copy their solutions from Stack Overflow without really understanding what they do, tweaking them just until they “work on their machine”, and who would not even be able to write better code if you gave them more time because they utterly lack the foundations of software engineering.

We talk a lot about code quality in our community, about “craft”, about maintainability, evolvability, robustness and all that, but the reality is that the majority of all code is just that: crappy – because that is what the system rewards. Well, not crappy code per se, but as many completed features in as little time as possible. Quality is implied – which in reality means: Quality is neglected.

In this setting, coding agents can help to raise the lower bar. They may not create great code but they will create some solid code providing a certain base quality in a reliable and repeatable way. You may counter that LLMs will sometimes “hallucinate”, that they will sometimes create incorrect solutions. This is true. But compared to the average poorly educated software developer they do it a lot less often and in less erratic ways.

Of course, the solution to this problem should not be introducing LLMs to mitigate the problems of poor software developer education and the lack of appropriate training budgets. The solution should be better education 3. Nevertheless, this is an important factor we need to keep in mind when it comes to rating the probabilities of future developments.

A high probability of AI with a few uncertainties

With all that, I think there is a significant probability that Steve’s projection will at least partially become reality, simply because it is the most convenient and risk-free path for the people who decide about it. 4

I admit, I would love to simply dismiss Steve’s projection because I think it is the wrong direction in so many respects. It will not solve any of the problems we face these days. Instead, it will massively aggravate them (as written before; see, e.g., my “Forget efficiency” post, my “Responsible IT” posts or my “Simplify!” blog series for more information).

But we have seen that market decisions are not really driven by needs but much more by wants, habits, fear of change, career anxiety, FOMO and many other more emotional than rational drivers (even if most decision makers put a lot of work into making their decisions look like “purely rational decisions for the greater good of the company”). We have also met “highly focused” and “highly determined” AI product vendors and investors who do everything to fuel all these emotions, trying to keep the decision makers from getting any time to think rationally, because they are in the middle of the biggest IT gold rush we have ever seen. They smell big money and do everything to get it.

All this leads me to the conclusion that there is a significant probability that Steve’s projection will become reality. The biggest potential impediment to this projection is that the technology providers may not be able to deliver solutions that meet their promises, leading to premature market disenchantment and thus to the absence of the expected purchases.

However, as a famous quote states: “It is difficult to make predictions, especially about the future.”

Thus, it is to be expected that we will not arrive exactly at Steve’s projection and the consequences I extrapolated from it. Nevertheless, I think we can expect some kind of pronounced, maybe massive move towards agentic AI in software development.

Hedging our options

This all brings us back to the questions: Now what? What does that all mean for us as software engineers? What should we do to prepare for the future, to hedge our options?

Even if I would love to recommend sitting it out and focusing instead on other topics that would actually address some of our long-standing problems, I think this is not a good option. Too much money and energy is being spent on making it at least partially a reality, and the herd is already on its way. Also, if we assume that the projection will not become reality to its full extent but in a more moderate way, with agents supporting software developers in their work rather than agent fleets taking over software development completely, these AI-based solutions can add value, i.e., improve software development (not for everyone but in general).

This means we should actively hedge our career options. As a software engineer, I would, e.g.:

  • Familiarize myself with vibe coding. It is okay to use chat-based coding and do the copying myself instead of giving the AI agent full access to my repository, if I feel uncomfortable with passing full control to the agent. The point is to get a feeling for what it is like if I do not use the AI as a coding assistant but let it do all the coding. How do I need to address it? Which prompts work better than others? Where does it head in the wrong direction? How does my focus as a software developer change if the agents do the coding? Et cetera. Having a feel for how vibe coding works and how to control and nudge the AI best also gives me the option to call myself a “vibe coder” if the market should massively move in that direction.
  • Alternatively, I can put a stronger focus on parts of the IT value chain that are not directly related to coding. E.g., I could move more towards business analysis, towards quality, towards compliance, towards automation, towards collaboration enablement, or the like. There will always be humans in the game and I can focus on the parts that are not surrendered to the machine. In the end, the ability to write code is just one possible specialization in IT. Over time, AI solutions may take over one specialization after the other. This means, as humans we should treat AI like a wave and ride it on its front side – always staying a step ahead of AI with our skills and specializations, using it to do our work in a more enjoyable way.
  • I can look into domains where vibe coding will not be a viable option for an extended period of time due to regulations or other constraints. E.g., it is quite unlikely that vibe coding will be an option in safety-critical domains like aviation, energy production and distribution, healthcare machine development, and the like. Also, embedded systems with quite strict resource constraints, including most of the OT sector, should be safe from vibe coding for quite a while. Or high-performance computing. All domains where it is important either to be able to explain and verify the code or to highly optimize it are not (yet) the target domain of vibe coding.
  • Or … I wrote a bit more about how to position ourselves as a human emphasizing our human strengths in the age of AI in my “ChatGPT already knows” blog series (plus a lot more stuff that I do not want to repeat here).

Note that the list does not mean that you need to explore all options. They are – well – options and it is up to you and your personal preferences and skills which of these options you consider worth exploring. Additionally, there are more options available than the few I sketched.

Side note: Moving towards software architecture will not save us in the vibe coding scenario. Architecture is mainly something we need to make the complexity of the problem and the solution domain tangible as well as manageable for humans, and to ensure certain quality properties of the solution created (at least that is what architecture should do). If the market should go all-in on vibe coding, it would also conclude that AI agents do not need nice architectures, that they will figure things out on their own.

Even if we all know that things are not that simple, we have seen that very often decisions are not made based on rational thinking but rather based on emotions and vibes (pun intended). In other words: Going for software architecture most likely will not save us in the short run if the vibe coding scenario should become reality. 5

I would like to add “become a generalist” to the list of recommendations, i.e., widen your area of expertise in general. However, the IT market and especially the hiring process (not only) in IT are deeply broken. Companies urgently need more generalists to address the demands of complex and highly dynamic markets appropriately. Those generalists are able to look at the full picture instead of just a tiny fraction of it and thus spot the places that really make a difference if improved. Companies tend to be super happy if they accidentally hire such a generalist.

Nevertheless, typical company recruitment still looks solely for hyper-specialized people with extremely narrow areas of expertise. Most companies cling to what they always did, no matter if it still makes sense or not. I wrote about the issue in more detail in a prior blog series and I leave it to you if you would like to dive deeper into the topic or not. For this post, I will leave it at the (regrettable) conclusion that I cannot wholeheartedly recommend taking the generalist route, even if generalists are what we urgently need.

The middle ground

As I wrote before: Making accurate predictions about the future is hard and more often than not the actual future will turn out to be much less extreme than the predictions, especially if made by the profiteers of an extreme future.

Here, this would mean that we would rather end up with AI-assisted coding than with AI agent fleets completely taking over software development. For this much more moderate scenario, I would still recommend exploring the options listed above. The main difference is that in this scenario, we rather look into embracing AI instead of trying to position ourselves outside its area of influence. We move with AI instead of trying to stay ahead of it.

But even if we move with AI instead of trying to stay ahead of it, I think it is important to explore one or more of the options I discussed (or some different option I missed). No matter how the AI hype continues, it can be expected that it will not magically disappear but rather become a normal part of our work to some extent.

By hedging our options, we gather valuable insights that will most likely help us in any future. We also see and learn new things that broaden our horizon, which helps us make better decisions on our local turf. We may find exciting new career options. We build healthier (not pretended) self-esteem by no longer feeling like the little-valued developers at the end of the food – er – value chain. And more.

Additionally, even if we expect the more moderate scenario, it remains important to have an alternative, a “Plan B” if the market should take the more extreme route, if it should move towards “We do not need software developers anymore”, “We do not care about TDD anymore”, “AI agents do not need hexagonal architectures” and all the other things we usually focus on while trying to become better software developers. Which “Plan B” you consider best for you, is up to you.

And from an IT service provider perspective? You may remember: In the first post of this series, I started the discussion with the fact that I work in a CTO role and thus need to evaluate Steve’s projection from such a perspective. Well, that is a different story. Based on my observations, the market for IT service providers will change massively in the upcoming years anyway. Agentic AI and vibe coding, no matter to which extent, may just accelerate the development. But – as I wrote before – that is a different story and I am not going to bore you with that one here … ;)

Final thoughts

We have come a long way since the beginning of this blog series. We started with Steve Yegge’s projection of AI agent fleets that will take over coding in the near future. We discussed why this is not what the market needs. Then we looked at the forces that drive decision making and saw why the market might still want such a future. We looked at the probable short- and medium-term consequences of such a development, including some side effects and unresolved questions that would come with it. Finally, we discussed possible ways to hedge our options regarding our jobs and careers as software engineers with such a possible future on the horizon.

When I started writing this post, I thought it would become a 5 min or 10 min post, but as so often, I learned along the way that it took much longer to discuss the topic than I initially thought. I think there are still many things I have not touched in this blog series that would require proper consideration (but which I will not discuss as the blog series has become way too long already). Meanwhile, I am almost afraid of myself whenever I think: “This time, it will definitely become a short post”.

This post series was probably more controversial than most other posts I have written before because I took an unembellished look at some of the darker sides of our industry, a side I usually touch on only lightly. Maybe you feel a bit uncomfortable with this unembellished look and the not-so-nice reality we can observe there. However, a realistic evaluation of a potential future scenario requires being realistic about what is going on and which forces will most likely drive the evolution, no matter if it feels comfortable or not. In such a situation, rose-colored glasses are of no help.

You may also think I was quite a bit sarcastic in this blog series. To be clear: This was not my intention. I just tried to translate the often carefully disguised decision patterns of people into easily comprehensible words. Sometimes that may sound more sarcastic than intended because the exposed behavioral patterns do not appear too laudable.

In most cases, I also do not want to blame the respective people for their behaviors. Most often, people only do what is expected of them, what they need to do to not endanger their jobs and careers 6. But if you take away the “professional” coating usually attached to the exhibited behavioral patterns and describe in simple words what happens, it can feel sarcastic.

I only blame those who exhibit psychopathic traits, those who are happily willing to destroy the careers, existences and lives of any number of people if they feel it is to their personal advantage. I despise them and I am sad that so many people fall for them, often even considering them “business heroes” and role models. They are the opposite of role models and a healthy society would expel them. But discussions about the health of a society tend to become difficult and heated very quickly. Hence, let us not dive deeper into that rabbit hole. Let us just state that some things seem to be at least a bit off balance and stop there.

When looking at this strange combination of credulity and herd behavior, you may ask yourself if there is any hope that we as an industry will eventually develop in a better direction, that we will eventually focus on what really addresses our problems and not blindly follow the puppeteers who game the market for their personal benefit.

You may not have too much hope, even if you see very smart people in the crowd. But then you may also see them act in predictably nonsensical ways in their work contexts because the herd pressure is too high and it feels too risky for them to leave the well-trodden and – well – nonsensical paths.

This may be the point when you lose hope for a better future.

So, is it time to finally submit to cynicism?

I do not think so. Even if I do not cherish any illusion when it comes to the forces that dominate our industry, I dislike cynicism. It is a very destructive emotion that eventually paralyses you and cripples your mind.

I recommend sticking to a more optimistic point of view – without having any illusions about the influence our individual actions will have on the behavior of the market in general. But we may still be able to make IT a bit better place here or there.

Many years ago, Depeche Mode released a song called “New dress”. I always liked its chorus:

You can’t change the world
But you can change the facts
And when you change the facts
You change points of view
If you change points of view
You may change a vote
And when you change a vote
You may change the world

This chorus can be read in a highly cynical way (manipulate the facts) or in a highly inspiring way (telling the facts). I can only guess how Depeche Mode actually meant it, but I always interpreted it in the highly inspiring way:

I cannot change the world
But I can provide some ideas how to do things better
This may change someone’s point of view
Their different acting then may change someone else’s point of view
And maybe, the spark will eventually turn into a fire – and change the world

Whether the spark turns into a fire is not under my control. But I can provide ideas that may inspire people to change their professional lives for the better. Maybe they inspire someone else. And so on.

Therefore, at least I will continue sharing my ideas on how to make IT a bit better place. And I would recommend you do the same. Let us make IT a bit better place together …


  1. Of course, there are more questions to ponder, e.g., how to organize software development and operations to become more effective, how to build dependable systems, etc. But if we were able to advance mainstream software development in terms of the foundations of good design, it would solve a lot of the issues we currently have with software development – definitely not all of them, but a lot. Note that the fact that useful ideas regarding good design are available does not mean that these ideas will be widely adopted. Most software design practice still revolves around Structured Design from the 1970s, even if the practicing people called it “OOD” 25 years ago and call it “DDD” today. (For the DDD aficionados: This does not mean that DDD would be SD. It only means that many people actually apply SD techniques while calling it “DDD”.) ↩︎

  2. I will not dive into the rabbit hole of dehumanized work in this post. It is way too long already. If you want to dive deeper into that topic, you may want to take a look into my “Leaving the rat race” blog series where I touch that topic or go for books like “Bullshit jobs” by David Graeber that address the issue in a much blunter and more direct way than I did in my posts … ;) ↩︎

  3. I know I just opened Pandora’s box when I said “education” because vocational training and universities are cramming more and more stuff into their curricula while the average code quality does not improve. So, we could certainly start a lengthy discussion about how software engineering education would need to change to address the crappy code issue effectively. But as long as we see ads like the one I saw in the Munich subway (“Retraining to professional software developer in 3 months”), I am afraid all such discussions are void because these offerings define the lower bar in software development. ↩︎

  4. This does not mean that IT decision makers are stupid or lazy. Often, they are very smart people. But pronounced herd behavior, also in decision-maker circles, and the risks involved in not following the herd make their decisions very predictable, i.e., almost certainly they will go where the herd goes. They will almost always choose the commonly accepted path even if they know that it will not solve their problems, because choosing a different path would endanger their careers – and as written before: Often, they are very smart people. ↩︎

  5. And if the same people who decided that agentic AI does not need any software architecture eventually realize that the vibe-coded solutions do not have the required quality properties, they will not ask to reintroduce architecture but will cry for another easy “pill” that “cures” their pain instantaneously. ↩︎

  6. It would be very interesting to see how a universal basic income would affect these response patterns: whether the herd behavior would still be so strong and gaming the market would still be so easy, or whether it would become much harder to drive people in a desired direction because they would no longer be afraid of losing their jobs and their existence if they did not follow the accepted paths. But such a sociological thought experiment is far outside the scope of this blog. Nevertheless, personally, I would be curious. ↩︎