Feelings on AI Coding at the Start of 2026


I’m not a heavy AI¹ user. Even though I code almost daily, I have never used it for this activity. I use AI a few times per month, whenever I want to ask generic questions about a certain topic that would otherwise require me to search, validate and read multiple sources. In other words, I use it as a summarization tool, to ask about idiomatic patterns and to track down specific terms which I only know as descriptions. For that it works well.

Understanding seems to be one of the fundamental values of my personality. When things around me do not make sense I feel lost and frustrated. Some might say that I’m a control freak, others that I’m more of a pessimist than an optimist. There is probably some truth in both of those statements, but deep inside I feel that understanding the world is what makes me feel good, and that’s what I’m consciously and subconsciously trying to achieve. Especially when I’m called to execute a complicated task, like programming.

Combine the last two paragraphs. The era of AI engineering makes me feel uncomfortable. Is it because I lose control, or because I’m not easily excited? I would say no to both. I feel uncomfortable because I don’t understand what is going on. If I’m not (and I’m not) an expert on HTTP API design, deployment and security, then I feel there is a potentially huge risk in asking an agentic AI to generate for me a production-ready API backend, even for a handful of endpoints. If it slips and makes a mistake that goes unnoticed, there might be bad consequences.

The potential objection here would be: do not use AI to blindly engineer systems for which you have no expertise. Avoid vibe coding. If your niche is SQL optimization, use AI just for that type of work. It would probably speed up your development cycle and allow you to do more meaningful work. That indeed sounds like a healthier approach, but it doesn’t solve my fundamental goal, that of understanding.

To put it simply, I feel that understanding means discovering the causality behind a phenomenon or a process and ultimately building a mental model of it. Without this, someone might still get successful results in their work every now and then, but with a concrete mental model the chances of achieving a specific desirable outcome are far greater. And of course AI can be used for exactly that too. At the beginning of this post I wrote how I already use LLMs for exactly this reason.

This line of thought leaves me feeling that AI for coding is not such a big deal for me. It won’t improve my ability to build mental models or understand new technologies, patterns, concepts etc. The fact that it can generate vast amounts of code does not interest me, since I cannot really evaluate its output. I cannot trust it, and I do not know how I could possibly trust it. Is it safe? Is the generated solution only good enough for the initial specification, or does it also adhere to secondary but important requirements like maintainability and extensibility?

To be fair, I know that the use of AI coding agents will most likely improve my overall speed². I just need to incorporate them into my everyday engineering routine if I want to see how beneficial they can be. And still I’m holding back, out of fear that they will eventually slow me down: on top of what I need to learn and understand to complete my tasks, I also have to learn how to work with the shiny new tools.

I guess it’s finally time for some new tools exploration! 🤖


  1. In this context, by AI I mean LLMs and LLM-driven agents. ↩︎

  2. This also seems to be supported by a recent study from Anthropic (https://arxiv.org/abs/2601.20245); see figures 5 & 6. ↩︎