
6 min read

What Claude Code's /insights Told Me About How I Build

I ran Claude Code's /insights report on a week of heavy usage — 280 messages, 19 sessions, 8,822 lines added. The tool distribution, response times, and friction patterns reveal a workflow that's closer to async delegation than pair programming.

ai · claude-code · workflow · insights · productivity

TL;DR

Claude Code's /insights command generated a detailed report on my last 7 days of usage — 280 messages across 19 sessions, +8,822/-482 lines changed across 101 files. The tool distribution (525 Bash calls vs 109 Edits) reveals I'm using Claude as an orchestration engine, not a code editor. The median response time of 198 seconds confirms the workflow: I delegate, walk away, and come back to results.


The Report Card

I ran /insights on a Tuesday night, expecting a usage summary. What I got was closer to a performance review, written by the same tool the review was about.

The report covers February 6–12, 2026. Seven days. During that week I was building Via's orchestrator, overhauling this website, cleaning up 6 repositories, and running research missions. Not a light week. The data reflects that:

Metric | Value
--- | ---
Messages | 280 across 19 sessions
Lines changed | +8,822 / -482
Files touched | 101
Lifetime messages | 2,585
Daily average | 40 messages/day

The lifetime number — 2,585 messages — put things in perspective. I've been in this tool long enough for patterns to emerge.

525 Bash Calls Tell the Real Story

The tool usage breakdown is where the actual workflow becomes visible:

Tool | Calls
--- | ---
Bash | 525
Read | 345
Edit | 109
Write | 56
Grep | 49
TaskOutput | 45

Bash at 525 calls is nearly five times the Edit count. That ratio means most of my sessions involve Claude running shell commands (builds, git operations, tests, orchestrating processes) rather than writing code character by character. Read at 345 confirms the other half: Claude spends an enormous amount of time understanding code before touching it.
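
The ratio is easy to sanity-check. Here's a quick sketch over the counts in the table above; it only covers the tools the report lists, so the share figure is of those listed calls, not every call Claude made:

```ts
// Tool counts copied from the /insights table above (listed tools only).
const toolCalls: Record<string, number> = {
  Bash: 525,
  Read: 345,
  Edit: 109,
  Write: 56,
  Grep: 49,
  TaskOutput: 45,
};

const listedTotal = Object.values(toolCalls).reduce((sum, n) => sum + n, 0); // 1,129
const bashToEdit = toolCalls.Bash / toolCalls.Edit;                          // ~4.8x
const bashPlusReadShare = (toolCalls.Bash + toolCalls.Read) / listedTotal;   // ~77%

console.log(`Bash:Edit ratio   ${bashToEdit.toFixed(1)}x`);
console.log(`Bash+Read share   ${(bashPlusReadShare * 100).toFixed(0)}% of ${listedTotal} listed calls`);
```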

The 45 TaskOutput calls point to something specific. That's the sub-agent pattern — Claude spawning background tasks and collecting results. I've been leaning heavily on multi-agent orchestration, and the tool data reflects it.

The language distribution reinforces the pattern. Markdown topped the list at 128 files, followed by TypeScript at 102 and Go at 42. Heavy Markdown means documentation, configuration, and agent skill files. The week was about building infrastructure, not just shipping features.

The Response Times Reveal Delegation, Not Pairing

This is the number that reframed how I think about my own workflow:

Response Time | Count
--- | ---
2–10s | 15
10–30s | 16
30s–1m | 10
1–2m | 19
2–5m | 28
5–15m | 33
>15m | 32

Median: 198.4 seconds. Average: 538.8 seconds.

33 responses took 5–15 minutes. 32 took longer than 15 minutes. That's 65 out of 153 tracked responses where I was not sitting there watching Claude type. I was making coffee, reviewing another PR, or working in a parallel Claude session.
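
That split is the arithmetic behind the delegation claim. A small sketch over the bucket counts above, treating everything past five minutes as walk-away territory:

```ts
// Response-time buckets copied from the /insights report above.
const buckets: Array<[label: string, count: number]> = [
  ["2-10s", 15],
  ["10-30s", 16],
  ["30s-1m", 10],
  ["1-2m", 19],
  ["2-5m", 28],
  ["5-15m", 33],
  [">15m", 32],
];

const tracked = buckets.reduce((sum, [, count]) => sum + count, 0); // 153
const walkAway = buckets
  .filter(([label]) => label === "5-15m" || label === ">15m")
  .reduce((sum, [, count]) => sum + count, 0); // 65

console.log(`${walkAway}/${tracked} responses (~${Math.round((100 * walkAway) / tracked)}%) ran longer than five minutes`);
```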

This is not pair programming. It's async delegation with occasional check-ins. The report's own summary nailed it: "You operate as a strategic architect who delegates ambitious multi-phase missions to Claude's autonomous execution capabilities, then intervenes decisively when the approach drifts from your vision."

Multi-Clauding Is a Real Workflow

The report surfaced something I hadn't explicitly tracked: parallel Claude sessions.

  • 4 overlap events
  • 6 sessions involved
  • 9% of messages sent during parallel sessions

I knew I ran multiple Claude instances. I didn't know it was 9% of my traffic. The pattern usually looks like: Claude is running a long orchestration task in one terminal, so I open a second session to handle a quick fix or research question. The insight data says this isn't an edge case — it's a measurable part of how I work.

Where Things Break

The friction data is more useful than the wins.

Friction Type | Count
--- | ---
Wrong Approach | 4
Prompt Too Long | 4
Misunderstood Request | 3
Timeout/Resource Limit | 1

"Wrong Approach" and "Prompt Too Long" both at 4 occurrences, and they're connected. Ambitious sessions fill context windows. In one case, Claude designed my agent skills as YAML metadata blobs instead of actual SKILL.md files on disk — I had to interrupt, correct, and trigger a full plan rewrite. In another, Claude started execution without the correct project path, forcing me to re-issue the initial request.

The "Prompt Too Long" errors are more insidious. My content philosophy overhaul across 14+ files hit context limits near the end — meaning the work was done, but I lost the ability to review or iterate in-session. The report called this "burning context you can't get back."

Tool errors tell a similar story:

Error Type | Count
--- | ---
Command Failed | 37
User Rejected | 11
File Too Large | 2
File Not Found | 2

37 command failures in a week. Most of these are benign — a build that needed a dependency, a test that caught a real bug. The 11 "User Rejected" calls are more interesting. Those are moments I said no. Claude proposed an action I didn't want — and the system recorded that boundary.

What I Changed

The insights led to three concrete adjustments in my workflow:

Front-load constraints. The "Wrong Approach" pattern taught me to be more specific in initial prompts. Instead of "build the agent skill system," I now scope it: "create SKILL.md files on disk in .claude/skills/, one per skill, with markdown content." The more I define upfront, the fewer corrective interventions mid-session.
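
For reference, here's a minimal sketch of what that scoped prompt asks for: real SKILL.md files on disk under .claude/skills/, one directory per skill. The skill names and file bodies below are placeholders, not the actual skills from that session:

```ts
// Hypothetical scaffold: one .claude/skills/<name>/SKILL.md per skill,
// each containing plain markdown. Skill names and descriptions are made up.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const skills = [
  { name: "research-brief", description: "Condense a research mission into a short brief." },
  { name: "repo-cleanup", description: "Audit a repository for stale branches and dead code." },
];

for (const skill of skills) {
  const dir = join(".claude", "skills", skill.name);
  mkdirSync(dir, { recursive: true });
  writeFileSync(join(dir, "SKILL.md"), `# ${skill.name}\n\n${skill.description}\n`);
}
```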

Break mega-sessions earlier. Context exhaustion happened in my longest sessions. I've started treating the "Prompt Too Long" warning as a session boundary — save state, start a new conversation, and resume with a fresh context window.

Trust the sub-agent pattern. The 45 TaskOutput calls confirm that delegation works. The multi-agent approach produces better results than trying to do everything in a single thread. I've been scoping more work to parallel agents and letting the orchestrator coordinate.
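
The pattern itself is nothing exotic. This isn't Claude's internal Task/TaskOutput API, just the shape of the delegation: fan out independent sub-tasks, collect whatever each one produced, and let the orchestrator decide what to do with gaps. runSubAgent is a hypothetical stand-in for dispatching work to an agent:

```ts
type SubTask = { name: string; prompt: string };

// Hypothetical stand-in for "hand this prompt to a background agent".
async function runSubAgent(task: SubTask): Promise<string> {
  return `result of ${task.name}`;
}

// Fan out, then collect: a failed sub-task is skipped rather than
// taking down the whole mission.
async function orchestrate(tasks: SubTask[]): Promise<Map<string, string>> {
  const settled = await Promise.allSettled(tasks.map(runSubAgent));
  const results = new Map<string, string>();
  settled.forEach((outcome, i) => {
    if (outcome.status === "fulfilled") results.set(tasks[i].name, outcome.value);
  });
  return results;
}
```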

The AI Reviewed Itself and Got It Wrong

One detail from the report's analysis deserves its own section.

During a mission where Claude's orchestrator was coordinating parallel writing agents across 14+ files, the orchestrator's own reviewer agent reported that implementation files hadn't been written — when they actually had been. The AI QA'd itself into unnecessary confusion, flagging work as missing that was sitting on disk.

This is the recursive comedy of AI-assisted development. The agent that checks the other agent's work can also be wrong. The report framed it plainly: Claude's reviewer gaslit itself.
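
One cheap guardrail, assuming the orchestrator knows which files a mission was supposed to produce: check the disk before accepting a reviewer agent's claim that something is missing. The paths below are hypothetical:

```ts
import { existsSync } from "node:fs";

// Keep only the files the reviewer flagged that are genuinely absent on disk.
function confirmMissing(reportedMissing: string[]): string[] {
  return reportedMissing.filter((path) => !existsSync(path));
}

const flagged = ["content/philosophy/tone.md", "content/philosophy/structure.md"];
console.log(confirmMissing(flagged)); // [] when the reviewer was wrong
```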

What the Data Doesn't Show

Seven days is not a pattern. This is a snapshot of one heavy week during active Via development. My workflow during lighter weeks — bug fixes, small features — probably looks different. The 198-second median response time reflects orchestration-heavy work, not all work.

Satisfaction is model-estimated. The report tagged 19 of 27 sessions as "Likely Satisfied" and 3 as "Dissatisfied." These are Claude's guesses about my satisfaction, not my actual assessment. I'd call the accuracy decent but imperfect: one "Dissatisfied" session was one where I was frustrated with the task, not with Claude.

No cost data. The report tracks messages and tool calls but not token usage or dollar cost. For a power user running 40 messages/day with multi-minute responses, the cost dimension matters. I'd want to see cost-per-session alongside the other metrics.

Despite these gaps, the data is useful. Seeing 525 Bash calls next to 109 Edits changed how I describe what I do with Claude. I don't use it to write code. I use it to run operations, and the operations sometimes include writing code.

