
Letting go of the reins
the dainosaur – chapter 2
My first serious experience with AI assistance was fine. Not revelatory, not disappointing. Just fine. It did things. Sometimes the things were useful. Sometimes they were adjacent to useful in ways that required significant cleanup. The output was competent enough to take seriously and unpredictable enough to make me nervous. And that nervousness had a very specific shape: I could not figure out how to be in control.
This is not a small thing for someone who has spent many years making control into a professional virtue. When I say I have been practicing best practices, I mean it literally. Patterns, principles, structure: these are the tools I reach for when complexity threatens to become chaos. I have opinionated views about layering and boundaries and naming conventions and why you absolutely do not put business logic in your UI components. That is not rigidity; that is discipline built from watching what happens when people ignore it. So when I started working seriously with AI, my instinct was the same one I apply to any complex system: I need to understand the rules, I need to know what to expect, I need to control the output. And here is the problem with that: with AI, the output is not deterministic. Same input, different output. That is not a bug I can create a support ticket for. That is just how the thing works, and we have to deal with it.
I will be honest about how much this bothered me. Not for long, but genuinely. It felt like handing part of my work to something that could go anywhere. You set a goal, you expect a path, and instead you get a very confident “colleague” who takes a completely different route and sometimes arrives somewhere you were not aiming for. Every senior architect has a story about a junior who solved the right problem in the completely wrong way, and you can picture the exact face you made. That was my face, internally, for a while.
What changed was not a single moment. It was accumulation. The more I worked with AI agents on real tasks, real stakes, not toy examples, the more I noticed something I had not been looking for. The non-determinism was not just producing errors. It was also producing angles. Approaches I had not considered. Questions about requirements I had mentally closed. Solutions that worked but arrived via a path I would never have walked, because my experience had taught me that particular path was a dead end. Sometimes the experience was right and the agent was wrong. But sometimes I had ruled out an option based on my own assumptions that were no longer true, and the agent simply had not inherited those assumptions. It had a different view.
That word “view” marks the shift in my thinking. I did not use it deliberately; it arrived on its own, and when I noticed it, I understood that something had changed in how I relate to the thing. I had started thinking of the AI agent as something that sees the problem. Not as a tool executing instructions. Not as a horse I needed to break and steer carefully so it did not throw me. A perspective. A virtual colleague with broad context, different instincts, and none of the scar tissue that makes me fast but also occasionally blind. The colleague who asks the obvious question that you stopped asking because you already know the answer. Except sometimes you are wrong about knowing the answer, or the context demands something different.
To be clear: I am not humanizing the AI. I know what it is. It is not alive, it does not have opinions, it does not go home and think about my architecture problem overnight. But when I am deep in a flow and the back-and-forth is working, the functional experience maps to collaboration. That is the honest description. Calling it a “colleague” is not a belief: it is the closest word for what the interaction actually feels like when it is going well. I have stopped fighting that description.
There is another role it has slipped into that I did not predict: rubber duck. Senior engineers know this one. You explain the problem out loud to an inanimate object and the act of explaining it makes the answer visible. The duck does not respond. The duck does not need to respond. The AI duck responds, which is both more and less useful depending on what you need. When it responds poorly, I ignore it. When it responds well, I steal the idea and build from that. Either way, the act of articulating the problem clearly enough for an agent to engage with it is half the work. This is not new wisdom. It just turns out that AI makes a very engaged rubber duck.
I am not saying the agent is always right. It is not. It is confidently wrong with a frequency that would get a human fired. I still verify. I still push back. I still apply the same critical instincts I would apply to anyone’s output. But I apply them differently now: the way I engage with a smart colleague whose judgment I respect but whose blind spots I have learned to recognize. Not the way I manage a process I am trying to constrain. That distinction matters enormously to how you use AI in practice. The person trying to tame the horse is fighting the animal. The person working with a skilled but opinionated colleague is having a conversation. Same situation, very different results.
Many years of experience plus a tool with broad reach and no inherited assumptions: that turns out to be a reasonable combination. The experience tells me which solutions have failed before and why. The agent brings options I would not have generated, surfaces requirements I would have missed, and occasionally says something that sounds wrong until I think about it for ten seconds and realize it is not wrong, it is just different from how I would have said it. That combination is genuinely useful. Not in a “wow, AI is amazing” way. In a “this makes me better at the actual work” way. Which is the only kind of useful that matters, in my opinion.
Next: chapter 3 – Context is the craft
“The model is the runtime. Most people spend their energy choosing the runtime. The people getting serious results are designing the architecture.”