ChatGPT already knows - Part 7

Multiplying our value as software engineers

Uwe Friedrichsen


In the previous post, we explored why we need to be a bit rebellious to take the road towards becoming a “full-range engineer”.

We have also discussed that it is not enough for our industry if software engineers change their positioning. Being sustainable (in its general meaning) in an increasingly complex, unpredictable and ambiguous world requires a different focus than the currently predominant one.

In this final post of the series, we will put together everything we have discussed so far and sum it up.

A huge paradigm shift

Even if the AI tools may soon know better than you how to write code, they still need guidance to know which code to write, i.e., what the most appropriate solution is in the given context. Once you have given them the required guidance, the AI tools may then write the code easily.

This is, by the way, what distinguishes average from great software engineers: the ability not only to write good code reliably, but also to know what code to write, which option to pick from the myriad of possible options while keeping an eye on the different encompassing forces.

Or as Adam Dymitruk recently put it in a toot on Mastodon:

At the upper part of the skill scale, AI will increase the income of developers. AI will not be replacing them. It will be making them 100x developers. Look for good developers to get more expensive as the average and below average developers use AI to make a mess faster.

In other words: A fool with a tool is still a fool.

Developer efficiency boosted by AI solutions is worth nothing if developers just create pointless code faster. This is not so much about creating bad code, as the quality of the code produced by AI solutions can be expected to improve continuously over time (see, e.g., this blog post at Stack Overflow discussing some of the first steps in that direction).

It is more about effectiveness: writing the right code that not only solves a given task in isolation in an arbitrary way, but also fits into its complex surroundings, does not unnecessarily increase overall complexity, and satisfies all the explicit and implicit assumptions of all the people involved and affected. This is a completely different story than just implementing some task in isolation.

But to become such “100x developers” as Adam calls them, to multiply our value as software engineers, we first need to become full-range engineers. We need to be able to understand the needs and demands driving the solution. We need to communicate and collaborate effectively. And we need to embrace complexity, uncertainty and ambiguity. This is what “good developers” means. This is what “100x” means.

Again, this is the opposite of hyper-specialization. This is not about “deep dives”, not about tons of details, not about live-coding sessions, not about being a “cool” nerd and the like. This is about context, about complex interaction webs, about understanding how different decisions affect other parts of the interaction web, about learning how to make good decisions, how to collaborate with other (human) peers, and a bit more. And maybe it is also about guiding the AI tools to do their part of the work: the coding.

If we think it through, this means a big paradigm shift in our industry and the software engineering community:

  • The distinct nerd culture in IT comes to an end.
  • AI tools become the new nerds.
  • IT media needs to focus on other topics.
  • IT education needs to focus on other topics.
  • Companies need to change their recruiting.
  • Software development companies need to change their offering (simply renting developers for money is not a viable business model anymore).
  • Probably the whole education system needs a (long overdue) update from teaching knowledge to fostering skills.

But this means leaving the well-trodden paths. For a lot of companies and people – not only software engineers – this means reinventing oneself (or at least repositioning oneself). This means change. This is laborious. This feels risky, like an uncertain future. But the beloved well-trodden paths become narrower every day until they eventually turn into a dead end for the vast majority of us.

Personally, I think simply ignoring these developments and staying on the same path is not an option. But then, maybe I am just biased. The future will tell …


Summing up
Modern AI tools have become impressively good at writing code for clearly defined and rather isolated tasks, and it is to be expected that in the relatively near future they will surpass humans in this area of expertise.

On the other hand, most software engineers still build their careers on having deeper knowledge in a very small area of expertise than others. This fits the demands of the predominant division-of-labor-driven hyper-specialization approach in IT, where roles are split up further and further and software engineers are expected to build deeper and deeper knowledge in smaller and smaller areas of expertise.

The accompanying rat race makes sure people do not get the time to think about the meaningfulness of this whole development. Additionally, nerd culture, being implicitly aligned with the demands of hyper-specialization, also rewards people for knowing arbitrary exotic details about some very specific technology or tool.

Due to the effects of hyper-specialization and the resulting very narrow area of expertise of software engineers, most software engineers are used to implementing endless series of small, often unrelated tasks (call it features, requirements, user stories or whatever) in isolation from the rest of the system landscape (often even in isolation from the rest of the containing application) and in isolation from their future consequences (think, e.g., maintainability and evolvability).

The problem is that modern AI solutions are strong in exactly this area: learning arbitrarily big amounts of knowledge (more than any human could keep in mind) and applying it while implementing tasks in isolation.

This means the current typical software engineering career paths lead software engineers into direct competition with AI solutions in their area of strength, a competition humans will most likely lose over time.

We have discussed where we come from in software engineering and what kind of job we need to do today. If we look at the overall job, we need to turn (business) needs and demands into working software. This job is accompanied by a lot of complexity, uncertainty and ambiguity on the problem side and also a lot of complexity on the solution side.

Unfortunately, many software engineers voluntarily limit themselves to doing only a tiny part of this overall job, being merely feature implementation machines, (easily replaceable) cogs in a machine controlled by someone else.

We have looked at the strengths and weaknesses of modern AI solutions and humans and seen that they are quite complementary. AI solutions are good at learning and not forgetting lots of knowledge and details, as well as at repeatedly implementing quite clearly defined tasks in isolation. Humans are good neither at learning arbitrarily large amounts of knowledge without forgetting it, nor at repeatedly doing the same tasks over and over again without making any mistakes.

Humans on the other hand are good at successfully navigating complexity, uncertainty and ambiguity, all desperately needed in contemporary software engineering. AI solutions do not yet show any signs of these traits, no matter how impressive their coding capabilities are.

Unfortunately, we often suppress those traits in favor of comparatively simple tasks and clearly defined rules because it feels more enjoyable. While this is understandable – after all, a primeval instinct of humans is to minimize uncertainty and thus also perceived complexity and ambiguity – it brings software engineers into direct competition with AI solutions.

Based on these observations, we have asked what we can do to preserve our value as software engineers by leveraging our strengths as humans. I have suggested a few ideas: becoming a “full-range engineer”, also understanding non-IT domains, embracing complexity, uncertainty and ambiguity, and being good at communication and collaboration, also with peers outside the “dev team”.

This way, software engineers become very effective in what they do and invaluable peers for all the other people interacting with and affected by software development – including those decision makers who may decide if they want to replace us with an AI solution or not.

Finally, we have seen that all this requires being a bit rebellious, because the whole industry, which lives very well from ever more extreme hyper-specialization and its effects, as well as nerd culture prefer software engineers to be cogs in a machine. They prefer it for different reasons (and most self-acclaimed “nerds” are not aware of it at all and most likely would vehemently object), but in the end, neither the in-group nor the out-group will be of help on our journey towards becoming full-range engineers.

Looking ahead, I concluded that the whole industry needs to reposition itself because the current path is neither sustainable nor resilient in any way (using sustainability in its general meaning, not limited to ecological sustainability). The updated role of software engineers as full-range engineers would fit very well into such a more sustainable and resilient positioning of our industry. But there is still money to be made with the current way of doing things, and change is awkward …

A closing thought

This blog series has become a lot longer than I thought it would when I started writing it. Actually, I thought it would be a quick single post. I was wrong again.

Nevertheless, I think it is important to take a few steps back and look at the topic in a more holistic way. Modern AI solutions inadvertently brought some of the deficiencies and madness of our industry to the surface again. Responding to the novel challenges they pose thus needs more than just a few knee-jerk reactions. It requires understanding the situation in a more holistic way to figure out how best to respond to it. And so, it did not become a single post but seven posts.

I am very curious where all this will lead. Depending on how our industry will respond to the new possibilities and challenges, this can lead into a bright or dark future – not only for software engineers.

At the moment, most decision makers primarily consider AI a tool to increase “productivity”, i.e., efficiency. They hope to speed up software-based projects using AI, thinking the bottleneck is the “slow developers”.

From what I see, the bottleneck is not the writing of code. Software engineers are extremely efficient at writing code. The bottleneck is everything in and around software-based projects that is not writing code: all the things that lead to 80% of the code written being rarely or never used – meaning no added value or even value destruction – while at the same time massively increasing the amount of accidental complexity software engineers need to deal with in all subsequent code changes.

The problem is not a lack of (coding) efficiency. The problem is a lack of (non-coding) effectiveness. Most of the (coding) work we do is the wrong work, wasting effort on things that do not create any value or even destroy value. Unless we fix our lack of effectiveness, all increases in efficiency – including AI – will only lead to creating even more crap in even less time. More (AI-based) efficiency will only make things worse.

By becoming full-range engineers, we might help our industry become more effective again. Only if we master this challenge may modern AI solutions become really useful – for all of us. We will see …

I hope this series gave you a few ideas to ponder. And maybe you, your ideas and your actions will help shape a slightly brighter future for all of us. But no matter how you decide to move on, I wish you all the best on your way!