# The Vampiric Effect

I shipped an 847-line pull request at 11:43 PM on a Wednesday in January. Claude Code had done the heavy lifting. I'd prompted, reviewed, course-corrected, prompted again. The diff was clean. The tests passed. The PR description was thorough. I merged it and closed my laptop.
The next morning I opened the codebase and couldn't recall a single architectural decision from the night before. Not the data model choices. Not why I'd split the service boundary where I did. Not the tradeoff that led to denormalizing one of the tables. Eight hundred and forty-seven lines of code with my name on the commit, and I was reading it like someone else's work.
I'd been the hands. The AI had been the brain. My understanding had left the room — and I couldn't tell you exactly when.
There's a vampire in What We Do in the Shadows named Colin Robinson. He doesn't bite. He drains your life force by explaining the difference between USB-A and USB-C, by being in the room while you slowly lose the will to think. His victims don't notice because there's no wound — just a creeping exhaustion, plausibly deniable, untraceable to any single moment. That is the most accurate description I have for what AI-assisted development does to a certain kind of programmer's understanding. Not one dramatic bite. An ambient drain that compounds while the output looks fine.
## The Three-Minute Pause
A few weeks after that merge, I was in a design review for the service I'd built largely with Claude's help over the previous month. A teammate asked why I'd denormalized a specific table instead of using a join. Reasonable question. I knew the denormalization was the right call — I felt that much through the review — but the reasoning had lived in a conversation between me and an AI that I'd long since closed.
It took me three minutes to reconstruct the rationale. Three minutes of scrolling through migration files and reading my own comments like archaeological artifacts, piecing together logic that had seemed obvious at the time because Claude had explained it and I'd nodded along.
Three minutes doesn't sound like a lot. But I've been a senior engineer long enough to know what three minutes of silence in a design review signals. It signals the system model in your head has holes. That you merged something you reviewed but didn't fully absorb. That the understanding which should have been a mandatory byproduct of building — the kind that compounds into architectural intuition — had not transferred.
The energy vampire doesn't take your code. It takes your ownership of the code. And you don't notice until someone asks a question you should be able to answer.
## Measuring the Gap
I'd been telling myself I understood my AI-assisted code at about 60%. Not complete comprehension — but enough to own it. I wanted to know if that figure was real or a number I'd invented to make a vague unease specific.
I ran a retrospective audit on 12 PRs I merged in January, ranging from light to heavy Claude assistance. The method is blunt: for each PR, I block 25 minutes and answer five questions without consulting the conversation history or commit messages.
- Why was this architecture boundary drawn here?
- What are the failure modes of this approach?
- What would break if you changed this one line?
- What alternatives were considered?
- What does this code do that isn't immediately obvious from reading it?
I score myself 0–2 on each: 0 = cannot answer, 1 = partial, 2 = complete. Ten points maximum per PR.

| PR | Lines | AI-assist level | Score /10 | Notes |
|---|---|---|---|---|
| SQLite schema refactor | 384 | Heavy | 4 | Could not explain the migration order decision |
| Rate limiter rewrite | 219 | Medium | 7 | Understood backoff math; not the semaphore choice |
| Checkpoint resume | 447 | Heavy | 3 | Recognized the code, couldn't defend any of it |
| CLI flag parsing | 88 | Light | 9 | Mine throughout |
| Embedding batch job | 156 | Heavy | 5 | Understood what, not why |
| FTS5 trigger refactor | 203 | Medium | 6 | — |
| Error translation layer | 121 | Light | 10 | — |
| Agent spawner refactor | 612 | Heavy | 2 | Three weeks old; read it like a stranger's code |
| Router classification | 244 | Medium | 7 | — |
| Teardown manager | 189 | Heavy | 5 | Understood the pattern, not the ordering decisions |
| Skill materialization | 334 | Heavy | 4 | — |
| Config validation | 67 | Light | 9 | — |

Heavy AI-assist averaged 3.8/10. Light AI-assist, where I wrote and Claude reviewed, averaged 9.3/10. Medium landed at 6.7/10.
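The grouped averages can be recomputed straight from the table. A minimal sketch, with PR names and scores transcribed from the table above:

```python
# Recompute the per-assist-level averages from the audit table.
# Each entry: (PR name, AI-assist level, comprehension score out of 10).
from statistics import mean

audit = [
    ("SQLite schema refactor", "heavy", 4),
    ("Rate limiter rewrite", "medium", 7),
    ("Checkpoint resume", "heavy", 3),
    ("CLI flag parsing", "light", 9),
    ("Embedding batch job", "heavy", 5),
    ("FTS5 trigger refactor", "medium", 6),
    ("Error translation layer", "light", 10),
    ("Agent spawner refactor", "heavy", 2),
    ("Router classification", "medium", 7),
    ("Teardown manager", "heavy", 5),
    ("Skill materialization", "heavy", 4),
    ("Config validation", "light", 9),
]

for level in ("heavy", "medium", "light"):
    scores = [s for _, lvl, s in audit if lvl == level]
    print(f"{level}: {mean(scores):.1f}/10 across {len(scores)} PRs")
# → heavy: 3.8/10 across 6 PRs
#   medium: 6.7/10 across 3 PRs
#   light: 9.3/10 across 3 PRs
```

Nothing sophisticated, but running the numbers instead of eyeballing them is what caught my own rounding optimism.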
The 60% figure I'd been telling myself was too generous. For the PRs where Claude wrote the architecture and I directed and reviewed, it's closer to 40%. The gap between "I reviewed and merged it" and "I understand and own it" is wider than I'd wanted to admit.
One confound worth naming: the heavy-assist PRs are also the most complex. I route hard problems to Claude partly because they're hard. The agent spawner refactor (2/10) is genuinely one of the most complex pieces in the codebase — I might have scored 3 or 4 on it even if I'd written it myself, exhausted, at 1 AM. But the checkpoint resume code (3/10) is not particularly complex. It's just code I didn't write.
The design review data is harder to quantify. I've had three moments in the last month where I paused for more than 60 seconds trying to reconstruct rationale from code I'd merged. Two were on heavy-assist PRs. One was on a medium-assist PR where I'd written the structure but Claude had written the implementation. Three incidents in one month is a rate I wouldn't have accepted two years ago.
## The 10x Trap

Steve Yegge told The Pragmatic Engineer in February 2026 that companies should expect roughly three productive hours from an engineer vibe coding at full speed. "The answer is yes," he said when asked if that means a three-hour workday, "or your company's going to break."
I used to think AI would extend the productive window. If the model handles the boilerplate and the syntax and the lookup work, surely I'd stay in the creative zone longer. What I found after a year of intensive AI-assisted development is the opposite. The three-hour ceiling isn't about typing speed or syntax recall — it's about how long a human brain can sustain the kind of focused decision-making that produces good software. AI doesn't change the ceiling. It changes what you produce before you hit it.
Before AI, those three hours yielded maybe 200 lines I understood completely — every branch, every edge case, every reason I chose this abstraction over that one. Now they yield 800 lines I understand at 40%. The output quadrupled. The cognitive budget didn't. And when I push past the ceiling — which the velocity makes tempting, because the next feature is always one prompt away — the comprehension drops further. By hour six of an AI-assisted session, I'm approving diffs I'm barely reading. I'm Colin Robinson's victim at the end of a long meeting: nodding along, life force gone, still technically present.
Here's the structural problem underneath all of it. When a developer writes code by hand, the output and the understanding are bundled together. You can't ship the feature without also building the mental model. That mental model is what makes you a senior engineer. It's what makes you hard to replace. It's your career equity.
AI unbundles the output from the understanding. You can now ship the feature without building the mental model. Which means the thing that made you irreplaceable — deep system knowledge accumulated through the friction of building — is no longer a mandatory byproduct of doing your job. You have to cultivate it deliberately, on your own time, while your employer captures the 10x output and asks why you're not shipping even faster.
TinyZoro, in the same thread that made me write the previous piece, put it plainly: "Who is working for who?" I shipped more code in 2025 than in any previous year of my career. By every metric my organization tracks — velocity, throughput, cycle time — it was my best professional year. I am not more senior for it. I am not more architecturally fluent. The company captured the entire delta.
## What I Changed
What works for me — and I offer this as personal practice, not prescription — is splitting the day. Mornings are for AI-assisted velocity: prompting, reviewing, shipping. Afternoons are for what the vampire takes: reading the code I shipped that morning, rebuilding the mental model, drawing the architecture diagrams by hand, having the design conversations that force me to articulate decisions Claude made and I ratified.
Reclaiming the understanding, deliberately.
It's inefficient. I produce less than I could. But I own what I produce, and that ownership is the thing that compounds into a career.
## Honest Limitations
I'm still using Claude Code every day. Every morning I open the terminal and the drain begins. The code ships and the tests pass and the PRs merge, and I am, by every external measure, thriving. The drain is real and the output is real and I don't know how to hold both without sounding like I'm complaining about a gift.
I can't separate the vampiric drain from ordinary overwork. Some of Wednesday's exhaustion might be a senior developer with too many projects, not a senior developer with too much AI. The Colin Robinson framing is compelling, but it's also unfalsifiable — any drain can be attributed to it, which means it might explain everything and therefore nothing.
This is specifically a senior-developer problem. The comprehension gap matters because I'm expected to own systems and defend decisions. A junior developer using AI to learn faster is in a different game. The people I know who are having the time of their lives with AI are overwhelmingly folks for whom the tool opened doors rather than changed rooms. I don't want to project my specific loss onto a universal narrative.
Colin Robinson's victims never confront him. The drain is so ambient, so low-grade, so plausibly deniable that naming it feels like overreacting. You'd sound crazy telling your engineering manager that being 10x more productive is making you worse at your job.
But the audit doesn't lie. 3.8 out of 10, on code with my name on the commit.