
All in, with eyes open
the dainosaur – chapter 5
I want to finish the series by sharing how I leverage AI in practice now. Not a demo, not a use case from a conference slide, but the real texture of a working day where AI is in the loop.
Obviously, I use it for coding. Specifications, tests, implementation, the whole range. I use it to write documentation about existing and new systems, which turns out to be one of the more underrated applications once you realize how much documentation never gets written at all because “it takes too long” or “nobody is going to read it”. By the way: AI actually reads the architecture principles and constraints you feed it and applies them where necessary (an architect’s dream). I use it when assessing systems I have not seen before: feed it the codebase, ask the right questions, and you can build a working understanding of a system’s architecture, its debts, and its hidden assumptions faster than any other method I know (including diagrams). I use it for modernization planning: taking an existing system and building a defensible roadmap for where it needs to go and how to get there without burning everything down. I use it as a second brain for staying organized. I use it for creating outlines, visuals and slides for conference talks. I also use it for side projects: greenfield builds, brownfield rewrites, and migrations into completely different technology stacks. I will come back to that later.
My main tools of choice right now are Claude Code (through the CLI and the VS Code plugin) and GitHub Copilot in VS Code. I switch between LLMs regularly: Claude Sonnet and Opus, GPT, Gemini. I have not found “the best one”. I am not even sure “which is best” is a valid question: it depends. They have different characters, different strengths, different failure modes. Switching between them regularly keeps your view of the models honest. When you stay with a single model too long, you start to unconsciously work around its weaknesses, and you stop noticing its limitations altogether. Variety is a calibration tool I strongly recommend using.
We are slowly approaching the end of the series. Having survived the impact of the AI meteor and finding myself thriving in the new AI era, it is time to share my perception of the good, the bad and the ugly.
The good is genuinely good. I feel more productive than at any point in my career, and I say that knowing how high the bar was already. Things I used to spend a lot of time hunting down on the Internet (exact syntax or method signatures, library specifics, flags I use twice a year, …) now get resolved in seconds. I stopped feeling bad about this. For a while I did. It felt like admitting weakness, like I should know these things off the top of my head. Like using the AI was cheating. But that was vanity talking. What I actually know is how systems work, where the risks live, which patterns survive contact with reality and which ones only work in controlled conditions. The syntax is incidental. And if AI handles the incidental, I can spend my time on the part that actually requires judgment. That is not cutting corners. That is efficient resource allocation.
What has surprised me most is how transferable experience becomes across technology boundaries when AI is in the loop. I understand patterns. I have watched those patterns being used in Java, in C#, in Python, in Go, in JavaScript, you name it. I can read the shape (architecture) of a solution in any language because the shape is what matters, not the words. AI handles the words. That combination of my pattern recognition and its syntax knowledge means I can work productively in technology stacks I have never touched before. And at a level I could not have reached in the same time on my own.
To give you a concrete example: some time ago I wrote an application for my wife. She does digital scrapbooking, a hobby that involves organizing thousands of folders of graphical elements, grouped into “creative kits”. Assets that accumulate faster than any sane filing system can handle. I built her a .NET WPF desktop application for organizing and querying the kits. It runs on Windows only, which was a permanent complaint from many of my wife’s scrapbooking friends who also use the application and had moved to Mac. I had been meaning to fix that for years. With AI, I rebuilt the entire application as a React and TypeScript application wrapped in an Electron shell. I have minimal experience in that stack. I built all the functionality in roughly one and a half days, packaged it for both Windows and macOS, and added a few new features my wife had been requesting. And I did it the right way: extracting the specifications from the existing codebase and using them as input for the new application, adding automated tests for quality control, and adding documentation and a help function. It all worked like a charm. I also created a conversion tool for migrating files from the old version of the application to the format of the new one. This is where the AI really shone. The old application stores its data in a binary file through .NET binary serialization. Mind you: the Microsoft documentation currently recommends against using binary serialization because of its security vulnerabilities, so the rewrite of the application was definitely due. The migration tool needed to convert those binary files to the NDJSON format used by the new application. Because this converter would not be used for a long period of time, I just vibe-coded it. I simply fed the AI both codebases and asked it to write a converter. The AI wrote a .NET console application that correctly handled the conversion in one go.
It even added input validation: if you run it without specifying an input file, you get a clear error message explaining what is expected. And after it finishes, it prints statistics on how many folders and kits it converted. Those last two features I did not ask for. They just appeared. That is the power I am talking about. Not magic or autonomy, but simply a tool that, when given enough context, produces work at a level that compresses effort by an order of magnitude.
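For readers unfamiliar with NDJSON: it is nothing more than one JSON object per line, which makes a file trivial to stream, append to, and diff. Here is a minimal sketch in TypeScript (the stack of the rebuilt application) of writing and reading such a file. The record shape (`name`, `folder`, `tags`) is a made-up example for illustration, not the application’s real schema:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Hypothetical record shape; the real application's schema differs.
type Kit = { name: string; folder: string; tags: string[] };

// Each record is serialized onto its own line. The file as a whole is
// NOT valid JSON; only each individual line is.
function writeNdjson(path: string, kits: Kit[]): void {
  writeFileSync(path, kits.map((k) => JSON.stringify(k)).join("\n") + "\n");
}

// Reading is the inverse: split on newlines, skip blanks, parse each line.
function readNdjson(path: string): Kit[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Kit);
}
```

A converter like the one the AI wrote essentially deserializes the old binary records and pushes them through something like `writeNdjson`, one record per line.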
Then there is the bad, which is real and worth naming. One of my concerns is the token economy. The input you feed an LLM and the output it produces are broken down into tokens, so each interaction consumes a certain number of tokens. And tokens cost actual money through the subscription models of the AI vendors. Let us assume I am in the middle of a paid engagement, relying on AI-assisted work to deliver within the agreed timeline, and as I burn tokens I reach the end of the token budget I was given. Do I have funds to replenish it? If not, am I still capable of finishing by hand? In the time and budget I have left? This is not a hypothetical situation. It is a dependency I have introduced into my professional delivery. And like any dependency it carries risk.
Managing context is a direct lever on that risk. The size of a conversation (how much you feed the model) and how long the session grows have a measurable impact on token consumption. People who treat the context window as a dumping ground, throwing in entire codebases, long conversation histories, and loosely relevant files and letting the AI figure it out, are burning tokens at a rate they often do not notice until the bill arrives or the budget runs out. Context engineering is being deliberate about what you include, pruning what is no longer relevant, and breaking long sessions into focused ones. This is not just a performance concern; it is also a cost management habit. The discipline looks familiar once you recognize it: it is the same kind of thinking you apply when scoping a query, designing a minimal interface, or structuring a prompt to get a precise answer rather than a verbose one. The model does not need everything. It needs the right things. Learning to make that distinction is part of working with AI at a professional level, and it directly affects how far your token budget actually takes you.
I am also paying attention to developments around running models locally and in private infrastructure. I have even fantasized about an approach that would distribute inference across available GPU capacity: the unused gaming hardware sitting in people’s homes, idle between gaming sessions. Anyone who remembers the SETI@home project understands the concept immediately. I am not deep enough into the infrastructure side to know whether this is possible, or maybe even solved already. I have colleagues who are considerably sharper on that front and I defer to them on the mechanics. And I am reassured by the fact that the company I work for is already actively looking into a sovereign solution for running LLMs. Because what I do know is that treating cloud-hosted LLM access as infinitely reliable is the same mistake we used to make about on-premises systems before we understood failure modes. Redundancy is not paranoia. It is architecture and risk management.
The ugly is a longer-term concern, and I hold it with appropriate humility because I am not an expert on how models are trained. But here is what sits in the back of my head. The models available today are excellent in large part because they were trained on code written by humans. Actual engineers building software, who were curious, who tried new things, who adopted emerging libraries and patterns sometimes before those patterns were mature, who wrote blog posts about what worked and what did not. Now consider what happens as more and more of the code published to the internet is generated by AI, AI that is mostly recycling patterns from its training data rather than inventing new ones. The habit of humans writing about what they tried, including things that did not work out, is part of what makes the training data valuable. If future training data is mostly AI output, will new patterns, new libraries, genuine innovations in how we build systems, actually make it into the models? Or will the models increasingly mirror themselves, getting better at producing what they already know how to produce and ignoring what is genuinely new? I raised this concern with an Info Support colleague of mine who has many years of hands-on experience working with AI, someone whose opinion on these things I take seriously. He pointed out that the tooling is already moving in a direction that addresses exactly this. Modern AI agents are increasingly capable of searching documentation, querying package registries, and pulling from live online resources as part of their reasoning process. They do not have to rely solely on what was baked into them at training time. A model that can look up the current release notes of a library, read its migration guide, and apply that knowledge within the same session is a different beast from one that is frozen in amber. It does not fully dissolve the concern though.
It does reframe it, though: the loop between the world and the model may be tighter than the static training data picture suggests. The question of whether genuinely novel ideas propagate into these systems at the speed they deserve is still open. But that takes the edge off this concern a bit for me. I am curious to see how it plays out in practice.
That brings us to the end of the series. I hope you have enjoyed reading it. Finally, I would like to share why I felt the need to write this.
There is a large volume of content about how to use AI. Tutorials, feature walkthroughs, prompt guides, benchmark comparisons, framework reviews. Some of it is very good, and I have definitely learned from it. What I have found much harder to locate is honest writing about the actual human experience of adopting AI as a senior professional: the hesitation, the friction, the moments where your professional identity runs directly into a thing that can do parts of your job, and you have to decide what to do with that. The internal negotiation between skepticism and genuine curiosity. The specific discomfort of being wrong about something you were confident in. The adjustment in how you think about your own expertise when a tool starts covering ground you used to cover yourself, and be recognized for. This is the experience I wanted to describe, because it is the experience nobody talks about, and it is the experience that most directly determines whether someone actually adopts AI or just watches from the sidelines and waits for it to go away. And people in the last category usually come in pairs, feeding off each other’s skepticism from a safe distance like Waldorf and Statler in the balcony. And yes, the fact that I reach for a reference to The Muppet Show tells you everything you need to know about my age and the title of the series.
If you made it this far in the series, you are probably not a bystander. You are somewhere on this path. Maybe you are just getting started, maybe you are halfway in, or maybe you are already invested and comparing notes. Maybe you are still in the uncertain part and have lots of doubts. You ask yourself things like: AI might be real, but how do I commit? Or you feel like the productivity stories belong to someone else. Or you have this nagging sense that maybe waiting a little longer is the right call. If that is you, I want to say to you directly: I have been exactly there. The skepticism was not unreasonable. The hesitation was not irrational. The standards were not set too high. The bar was exactly where it should have been, and AI has cleared it. So the verdict came back positive for me. Not because AI is impressive in demonstrations, but because it is genuinely useful in real work, on real problems, with real stakes. That distinction, impressive versus useful, is the one a senior professional actually cares about. It took me quite some time to form that judgment honestly. I am offering it here in case it shortens the journey for someone else.
“I hope you’ve enjoyed reading this series. Even if my words help just one person in his or her journey of adopting AI and becoming more productive, it was definitely worth my time and effort to write them.”
− Edwin