

Eight Months Old and Already Load-Bearing

I built my thesis on Docker 0.8. The tool barely existed. The instinct I built using it has never stopped.

docker · early-adoption · infrastructure · engineering

[Illustration: An octopus studying a distributed system blueprint with a coffee cup and a docker run terminal, construction cranes in the background]

Midway through my thesis proposal, I hit a box I couldn't fill. The deployment diagram needed "on-demand isolated environments" — one per user, created in seconds, destroyed when done. In 2014, nothing inside that box existed.

Virtual machines were the standard answer. They also took four to eight minutes to provision per instance and consumed gigabytes each. My thesis was a mobile code editor. If every user had to wait eight minutes for their environment to spin up, the loading screen wasn't a UX problem — it was the product. The product was a loading screen.

Then I found Docker. It was eight months old.

The Documentation Said Almost Nothing

Docker 0.8 had no compose file, no networking primitives worth the name, and documentation that described the API without explaining the mental model. What it had was the correct abstraction: a container that started in three to four seconds, isolated by process and filesystem, and destroyed cleanly when done. The idea was sound. The implementation was a construction site with one entrance and no exits marked.

I needed four things to make the thesis work: a mobile Angular editor, a custom PHP micro-framework to route requests into containers, an orchestration layer that could provision and destroy environments on demand, and DigitalOcean integration for the underlying compute. None of that had a tutorial. Docker's ecosystem in 2014 was a GitHub repository, an IRC channel, and a small group of people solving the same problems in parallel without knowing it.

I became one of those people.

The Part Where I Stopped Trusting the Docs

Container networking in Docker 0.8 was primitive in a way that wasn't obvious until you needed it to do something. Exposing a port meant writing custom proxying logic by hand. My orchestration layer had to track which containers were running, manage port assignment, and route incoming requests from mobile browsers to the right isolated environment. When something broke — and it broke often — the error output wasn't useful. One evening I spent three hours in IRC trying to debug a networking issue before someone told me to look at the Go source directly. The behavior I was seeing was documented, but not in any documentation. It was in a comment inside the implementation.
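The original routing layer was PHP and is long gone, but the bookkeeping it did is easy to sketch. Here is a minimal illustration in Python of the shape of the problem: assign a free host port per container, route incoming requests by user, and release the port on teardown. All names here are hypothetical, not the original implementation.

```python
# Minimal sketch of the orchestration layer's bookkeeping: one isolated
# container per user, each exposed on a dynamically assigned host port.
# Names are illustrative; the real thing also shelled out to `docker run`.

class ContainerRegistry:
    def __init__(self, port_range=range(49000, 49100)):
        self.free_ports = list(port_range)
        self.by_user = {}  # user_id -> (container_id, host_port)

    def provision(self, user_id, container_id):
        """Record a newly started container and assign it a host port."""
        if user_id in self.by_user:
            return self.by_user[user_id]  # idempotent per user
        port = self.free_ports.pop(0)
        self.by_user[user_id] = (container_id, port)
        return container_id, port

    def route(self, user_id):
        """Return the host port an incoming request should proxy to."""
        _, port = self.by_user[user_id]
        return port

    def destroy(self, user_id):
        """Release the port when the environment is torn down."""
        _, port = self.by_user.pop(user_id)
        self.free_ports.append(port)
```

The registry is the easy half; the painful half in 0.8 was that nothing mapped container ports to host ports for you, so the proxying in front of this table was also hand-written.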

That was the first thing Docker 0.8 actually taught me: when the docs and the behavior disagree, the source is always correct. The docs are someone's interpretation of an implementation that may have already moved. After that evening, I stopped filing the behavior gap as a Docker failure and started treating it as a reading assignment.

The Hack That Worked Too Well

Three-to-four-second container startup was fast relative to a VM. It was still perceptible to a user staring at a mobile browser waiting to start coding. My solution was to pre-provision the container the moment a user created a project — not when they clicked "run," before they'd typed a single line. The Angular editor showed a loading state, the container was usually ready before the user reached the editor pane, and nobody waited.
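The trick reduces to overlapping the slow start with user navigation. A rough sketch of the pattern, in Python rather than the original PHP, with `start_container` standing in for the real `docker run` call:

```python
# Sketch of the pre-provisioning trick: kick off the slow container start
# in the background the moment a project is created, so the 3-4 second
# startup overlaps with the user navigating to the editor.
# `start_container` is a stand-in for the actual `docker run` invocation.

from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)
_pending = {}  # project_id -> Future for the starting container

def on_project_created(project_id, start_container):
    # Don't block the "create project" response; begin provisioning now.
    _pending[project_id] = _pool.submit(start_container, project_id)

def on_run_clicked(project_id):
    # Usually finished by the time the user clicks run; .result() only
    # blocks for whatever startup time is left, often zero.
    return _pending[project_id].result()
```

The latency is unchanged; only its position in the user's timeline moves — which is exactly the point the next paragraph makes.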

I was proud of that. For a few months, I framed it as smart UX design. What it actually was: a workaround for the tool's immaturity that I'd dressed up as a feature. The latency wasn't gone. I'd moved it to a place where it mattered less. That's not the same thing. I should have written it down that way at the time.

The thesis shipped anyway. Users could open a mobile browser, write PHP, and see it run in an isolated environment provisioned specifically for them, destroyed when they closed the tab. In 2014, that was genuinely novel. The orchestration layer I built to manage it — container lifecycle scheduling, resource state tracking, failure handling — was Kubernetes before I'd heard the word.

[Illustration: An octopus pointing from a tangled architecture diagram to a clean docker-compose up command — before and after containerization]

A Year Later, Melbourne

In 2015, I joined ECAL — a calendar widget startup that had just closed a $2M Series A from Oxygen Ventures. The team was small and building fast. Within my first month, I suggested we adopt Docker for local development and deployment.

Nobody on the team was using it. In 2015 Melbourne, Docker was the thing you'd heard a conference talk about once and hadn't looked at since. The reaction I got was reasonable skepticism: our current setup works, why would we add this overhead?

I had an answer I couldn't have given a year earlier. Not "Docker is the future" — I'd spent too much time in Docker's networking source code to make that pitch convincingly. The argument I made was narrower: onboarding a new developer on our current setup takes three hours because the environment is fragile and undocumented. With Docker, it's a single command and the environment is the same for every developer who runs it. The three-hour problem disappears.

That argument landed because it was specific. It also landed because I knew where Docker was difficult — networking configuration, startup time, the gap between what the docs said and what the code did — and I could say "here's how we handle it" instead of discovering it mid-demo in front of the team. I'd already paid the tuition on those failure modes. What I brought to ECAL wasn't Docker. It was a year of Docker's rough edges already absorbed.

The team adopted it. New developer onboarding became one command.
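ECAL's actual setup is theirs and I won't reproduce it, but the "one command" shape is generic. A compose file along these lines (service names and versions purely illustrative) is all a new developer needs before running `docker-compose up`:

```yaml
# Hypothetical docker-compose.yml — services and versions are illustrative.
# A new developer clones the repo and runs: docker-compose up
services:
  app:
    build: .          # same image for every developer
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only  # local development only
```

The fragility the three-hour onboarding came from — undocumented dependencies, drifting local versions — lives in this file instead of in each developer's machine.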

The Pattern I Kept Not Naming

That sequence has repeated: find a tool early, work through its rough edges without a safety net, understand its actual shape from the inside, then introduce it to a team from a position of earned familiarity rather than conference-talk enthusiasm. Docker in 2014 was the first time I can clearly trace it. It wasn't the last.

What I kept getting wrong was the framing. I'd tell the story as: "I was an early adopter." That sounds like the insight. It isn't. Early adoption is just when the learning started. The insight is that rough edges are curriculum. Every workaround I wrote for Docker's networking built a mental model that no tutorial would have given me — because no tutorial exists for a tool that's eight months old and still figuring out what it is. By the time I brought Docker to ECAL, I wasn't recommending a tool. I was recommending a tool I'd already broken and fixed twice.

Polished tools teach you how to use the tool. Unfinished tools teach you how the problem works.

[Illustration: An octopus holding a blank question mark card over blueprints with a docker run terminal — building without knowing where it leads]

Honest Limitations

The instinct to bet on problems over solutions has a failure mode I didn't understand in 2014 and am still working out now. The Docker bet was right. There are other bets I made with identical reasoning — "the problem is real, the roughness is temporary" — where the roughness turned out to be permanent, or where the tool matured in a direction that didn't fit the problem I was solving. Those consumed significant project time before I admitted what I was looking at.

In retrospect, the Docker bet worked because the foundation was sound before the interface was. Linux namespaces and cgroups existed. The isolation model was real. Docker was a wrapper around proven primitives, not an experiment in new ones. The roughness was in the tooling layer — documentation, networking abstractions, the CLI. That kind of roughness resolves. When the roughness is in the foundation, it compounds.

The version of the heuristic I wish I'd had in 2014: is the rough edge in the interface, or in the underlying model? Interface problems get fixed in point releases. Model problems require rewrites, and rewrites usually don't happen.

The technology from my thesis is gone. The mobile Angular editor, the custom PHP framework, the DigitalOcean provisioning scripts — none of it runs anywhere. The instinct is still making calls. Whether the Docker story is a repeatable principle or a success I've been over-indexing on for nine years, I genuinely don't know.
