the dainosaur – chapter 4

The noise floor

Every inflection point in this industry has produced a learning problem. A new paradigm arrives, and suddenly your existing knowledge covers maybe sixty percent of what you need, and the remaining forty is scattered across documentation, conference talks, blog posts, and the hard-earned experience of people who got there before you. I have lived through enough of these transitions to know the pattern. The tools change, the problem of learning them does not. So when AI became something I was taking seriously, I knew I had to learn. The only question was where to start.

The answer, which I landed on by instinct and confirmed by experience, is the same answer it has always been: RTFM. Read the documentation. Official documentation from the tool makers themselves. The people who actually built the thing, who maintain it, who know what it does and does not do. This sounds obvious and yet it is routinely skipped in favor of faster, more entertaining alternatives. I understand why, though. Docs can be dry. They assume you want to know how something works rather than just watching someone use it. But they are also accurate. They do not have an agenda beyond explaining the product. They are updated when the product changes. And they do not have someone else’s interpretation baked in, which means you are forming your own understanding rather than inheriting someone else’s. That matters enormously when the technology is still evolving fast enough that inherited understanding has a short shelf life.

I spent real time in the official documentation of the tools I adopted. More time than felt comfortable, which is usually a sign you are doing the right thing. What I found was that the documentation was genuinely good: well-structured, complete enough to work from, and honest about what the tool was not intended for. The investment paid back quickly because I was not building on guesswork. I had a foundation. Everything I learned afterward landed on something solid rather than on assumptions I had picked up at second hand.

Then I opened YouTube.

The appeal is obvious. Someone shows you a working example in twelve minutes. You see the tool in action rather than reading about it in the abstract. You get a sense of the workflow, the pace, what it feels like to use. For a technology that is fundamentally interactive, that is genuinely useful. I am not dismissing it. I watched a lot of videos, and some of them were very good. The best ones actually shift your mental model: you see something you would not have thought to look for in the docs. Those exist, and they are worth finding.

But they are a minority. What YouTube mostly produces, when a technology gets hot, is volume. The platform rewards publishing frequently and early. The barrier to recording a screen and narrating over it is essentially zero. So what you get is an enormous number of videos that are, at their core, the same video. Getting started with the same tool. The same five tips. The same workflow, slightly reordered. The same hook: “I built something incredible with AI in one afternoon”, stretched across thumbnails that are interchangeable if you squint. It looks like a curriculum. It is not. It is an echo chamber with a subscribe button.

The problem is not just redundancy. Redundancy you can filter by skipping after thirty seconds. The deeper problem is that a lot of this content is derivative all the way down, made by people who watched other videos, absorbed someone else’s understanding, and reproduced it with enough confidence to look authoritative. The first person in the chain might have had something real to say. By the fourth reproduction, the signal has degraded. And because the ecosystem moves fast, some of those derivative videos are also pointing you at things that were superseded months ago: a workaround that is no longer needed, a pattern that turned out to be a dead end, an approach that looked good in a demo but does not survive contact with a real workload. The video keeps getting recommended. The tool has moved on.

When I found a creator who actually understood what was happening underneath the surface and could explain why something behaved a certain way (not just that it did), I bookmarked them and kept coming back. When I found a video that taught me something I could not have derived from the docs alone, I noted it. Everything else I let go, without guilt. The goal is not to have watched everything. The goal is to have learned something useful. Those are not the same goal, even though the platform would prefer you to think otherwise.

My honest recommendation: start with the official documentation, spend more time there than feels comfortable, and then use video selectively. Be skeptical of anything that is essentially “look what I built with AI” without an explanation of why it works. Watch for the creators who push back on their own examples, who mention the limitations, who tell you what the tool cannot do. Those are the people who have gone past the surface. The ones who make everything look effortless and inevitable have usually stopped one layer short of understanding.

The noise floor in this space is high. That is not unique to AI: it is true of any technology that gets hot fast and attracts people who are more interested in publishing than in learning. Filtering the noise takes the same thing it always has: knowing what a good source looks like, being willing to spend time with it, and being comfortable saying “not useful” and moving on. You already know how to do all of that. You have just been doing it in different contexts.


Next: chapter 5 – All in, with eyes open
“The distinction — impressive versus useful — is the one a senior professional actually cares about.”