
Context is the craft!
the dainosaur – chapter 3
Before I started using AI seriously myself, I watched. A colleague walking through an agentic workflow, a few YouTube videos of people doing impressive things with AI. My reaction was not immediately “I need that!”. It was closer to mild panic. Not the dramatic kind. Just the quiet, competence-threatening kind. The kind you feel when you walk into a room and realize everyone else seems to understand something you do not. You are on the verge of learning something new.
The surface area looked enormous. There were models everywhere: different providers, different sizes, different trade-offs, new releases every few weeks. There were tools, LLMs, agents, sub-agents, skills, frameworks, orchestration layers, prompt engineering guides, RAG architectures, system prompts, rules, hooks, MCP servers, agent teams, and more. By the time you read this, probably more will have been added. People on the internet had strong opinions about all of it. YouTube had a tutorial for each piece and a follow-up video explaining why the first video was already outdated. It looked like a second career. I already have one of those.
The implicit message was that you needed to know all of this before you could be useful. That productive use of AI required mastering the ecosystem first. For someone who came up in an era when adopting a new tool meant reading a manual once and then just using it, this felt disproportionate. The overhead seemed to exceed the benefit before you had even started. I have seen this pattern before in the industry. It usually means the ecosystem is immature. And I filed it as such and waited.
What I eventually discovered, through actually working with it rather than watching others work with it, is that the entry tax is smaller than advertised. Not zero, but the YouTube rabbit hole is a poor guide to what actually matters. The models, the tools, the frameworks: yes, they exist, and yes, they are different from each other. But they are not the constraint. The constraint is something much more familiar.
Context, instructions, clarity about what you want, what you do not want, what the rules are, and what good looks like. The AI does not know your problem. It does not know your constraints. It does not know which decisions are already made and which are still open. It does not know what “done” means to you. If you do not tell it specifically, completely, with enough structure to be unambiguous, you will get a generic answer to a generic version of your problem. Which is fine if you have a generic problem. Most of the time, we do not.
This is not a revelation about AI. This is a revelation about communication. The people who get the most out of AI are not necessarily the ones who know the most about LLMs or have the best tool stack. They are the ones who can articulate what they need precisely enough that an intelligent system can act on it. That is a different skill. And, as it turns out, I already had most of it through experience.
I am, by habit and by conviction, an organized person. I document decisions. I write down not just what was chosen but why, what was considered and rejected, what assumptions are in play. When I work on a system, I leave a trail, not because anyone asked me to, but because I have cleaned up enough undocumented messes to know what the alternative costs. Arc42 is my preferred template for software architecture documentation: structured, complete, explicit about context and constraints and quality goals. Some people find it heavy. I find it clarifying. The act of filling in the template forces you to confront what you do not actually know yet.
Working with AI has reinforced every one of those habits and made me sharper about why they matter. A well-written context (what I would now call a system prompt or a set of instructions for an agent) is essentially the same discipline. Here is the domain. Here is the goal. Here are the constraints. Here is what I care about and here is what I do not. Here is what a good result looks like and here is what disqualifies a result. The better you write that, the better the output. Not marginally better. Dramatically better. The difference between a useful answer and an answer that wastes your time (and tokens) is almost always traceable to what you put in for the AI to work with.
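Those elements can be sketched in code. The following is a minimal illustration, not any framework's API: the section names and the `build_context` function are my own inventions, and the example content is hypothetical. The point is the shape of the discipline, including the part where leaving a section blank is an error rather than an invitation to guess.

```python
# A minimal sketch of assembling structured context for an AI assistant.
# The section names and build_context() are illustrative inventions,
# not any particular framework's API.

SECTIONS = ("domain", "goal", "constraints", "priorities",
            "good_result", "disqualifiers")

def build_context(**parts: str) -> str:
    """Join the named sections into one explicit system prompt."""
    missing = [s for s in SECTIONS if s not in parts]
    if missing:
        # Like filling in an Arc42 template: being forced to write each
        # section confronts you with what you have not specified yet.
        raise ValueError(f"unspecified sections: {missing}")
    return "\n\n".join(
        f"## {name.replace('_', ' ').title()}\n{parts[name].strip()}"
        for name in SECTIONS
    )

# Hypothetical example content, for illustration only.
prompt = build_context(
    domain="Billing service for a B2B SaaS product, Java 17, PostgreSQL.",
    goal="Propose a migration plan from monthly to usage-based invoicing.",
    constraints="No schema changes to the ledger tables; zero downtime.",
    priorities="Auditability over elegance; keep each diff reviewable.",
    good_result="A step-by-step plan with rollback notes per step.",
    disqualifiers="Any plan that requires a big-bang cutover.",
)
```

The design choice worth noting is the hard failure on missing sections: the template is doing for the agent instruction what Arc42 does for architecture documentation, making the gaps in your own thinking visible before the AI papers over them with a plausible guess.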
I have adopted the term “context engineering”, because it is actual engineering. It is not prompting in the casual sense: ask a question, see what comes back. It is the deliberate design of the information environment the AI works in. Skills, instructions, constraints, examples, formats: these are the architecture. The model is the runtime. Most people spend their energy choosing the runtime. The people getting serious results are designing the architecture.
There is a direct line from spending years writing clear architecture documents to being good at this. The same instinct that makes you write down why you rejected the event-driven approach, because someone will ask in six months and you will not remember, also makes you write an agent instruction set that actually tells the agent what it needs to know rather than leaving it to infer (unless you are experimenting). The same discipline that makes you keep your Arc42 documentation up to date also makes you maintain your context files as the project evolves. These are not new skills dressed up in new vocabulary. They are the same skills, applied to a new surface.
But controlling the input is not the whole job. You also have to control the output the AI generates, just as you peer-review the code your teammates write. If you do not give this enough attention, you will pay a cost that is easy to miss until you feel it. When the output is good and the pace is fast, the temptation is to accept and move on. The review becomes a skim. The skim becomes a quick glance. And eventually you are shipping things to production that you have not actually reviewed and understood. This is not a problem unique to AI: it also happens with copy-pasting from Stack Overflow, with code borrowed from a colleague, or with open-source libraries pulled in without vetting them. But AI makes it harder to stay critical, because the output looks polished and is usually correct enough to pass a casual inspection.
The risk is not that the AI is wrong. The risk is that you stop being the person who can tell. Using AI heavily without engaging critically with what it produces can quietly erode the reasoning skills that make a senior professional worth listening to in the first place. That is the kind of atrophy you do not notice until you need those muscles and find out they have weakened. So keep training. Review thoroughly, not only as a quality gate but as a discipline. Reading a solution critically, challenging its assumptions, spotting what is missing or suboptimal: that is still learning. The pace may be different, but the obligation has not changed.
The people who are struggling most with AI (from what I can see) are not struggling because the technology is hard. They are struggling because they have never had to be explicit about things they usually leave implicit. Experienced people often have the most implicit knowledge and therefore the most to gain from learning to surface it. That is the work. Not learning which model beats which benchmark this week, but learning to say, clearly and completely, what you actually need.
The entry tax, it turns out, is mostly paid in clarity. And that is a currency I have been accumulating for years.
Next: chapter 4 – The noise floor
“The noise floor in this space is high. Getting above it takes the same thing it always has: knowing what a good source looks like, being willing to spend time with it, and being comfortable saying ‘not useful’ and moving on.”