
Starting Line: The Case for Personal AI

Two years of building AI systems — EA, LifeOS, Claude Swarm, Via — and the pattern that connects them all. Why the future of AI is personal orchestration, not better chatbots.

ai · vision · personal-assistant · future

TL;DR

Two years of building AI systems (EA → LifeOS → Claude Swarm → Via) taught me one thing: the future of personal AI isn't a better chatbot. It's an orchestrated ecosystem that remembers, learns, and spans every domain of your life. Via is my starting line — imperfect, 16 days old, and already documenting its own shortcomings.


The Gap Between Vision and Reality

I've been building toward this for two years. The progression looks clean in retrospect:

2024: EA. A production AI assistant for executives. I learned about streaming architectures, confidence scoring, and multi-tenant context management. EA taught me how to build AI products for others — but it was a product for others, not a system for me. Every lesson about real-time AI, hybrid search, and token optimization fed into what came later.

Early 2025: LifeOS. Personal knowledge management with Obsidian and Claude Code. Nine specialized AI skills composing workflows — family-manager, finance-manager, vault-manager, gratitude-entry. Vector search via Qdrant. Slash commands for daily capture. LifeOS taught me that personal AI needs to span domains — finances, family, knowledge, time — not just manage one. But each skill was isolated. The finance manager didn't know what the vault manager knew.

Late 2025: Claude Swarm. Multi-LLM orchestration. Multiple models working together — Gemini for research, Claude for implementation, Opus for coordination. It proved that routing tasks to the cheapest capable model cuts costs by 92% without sacrificing quality. But Claude Swarm couldn't learn. Every mission started fresh. The orchestrator that ran 50 missions had no more wisdom than the one that ran its first.

Each system solved a piece of the puzzle. None solved all of it.

The progression wasn't clean, either. Each system involved failed experiments, painful pivots, and expensive lessons. EA's voice synthesis costs nearly killed the business model. LifeOS's initial vector search was so slow it was unusable. Claude Swarm's context handoff between models lost information in ways that took weeks to debug.

What Via Integrates

Via is my attempt to integrate everything I've learned into a coherent system. Here's what each predecessor contributed:

| From | Via Inherited |
| --- | --- |
| EA | Streaming patterns, confidence scoring, the importance of cost-per-token awareness |
| LifeOS | Domain-spanning architecture, Obsidian integration, personal knowledge as first-class data |
| Claude Swarm | Multi-LLM routing, parallel execution, mission decomposition |
| New in Via | Learnings system, persona selection, plugin architecture |
The key additions in Via — the parts that didn't exist in any predecessor — are the feedback loops. The learnings system that captures 1,604 insights and injects them into future agents. The meta-learning system that watches the orchestrator and documents its own failures. The deduplication mechanism that turns repetition into a quality signal.

These aren't features. They're the mechanism that makes intelligence compound instead of evaporate. Without them, Via would be Claude Swarm with a fancier directory structure. With them, it's a system that genuinely improves over time.
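The deduplication idea is simple enough to sketch. This is an illustrative in-memory version, not Via's actual implementation (which stores learnings in SQLite); the `Store` and `Learning` names are hypothetical. The point is the mechanism: a repeated insight doesn't create a duplicate row, it bumps a counter that later ranks the learning higher.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Learning is a captured insight. Count rises each time the same
// insight is re-submitted, turning repetition into a quality signal.
type Learning struct {
	Text  string
	Count int
}

// Store deduplicates learnings by a hash of their text.
type Store struct {
	byHash map[string]*Learning
}

func NewStore() *Store { return &Store{byHash: map[string]*Learning{}} }

func hashText(text string) string {
	h := sha256.Sum256([]byte(text))
	return hex.EncodeToString(h[:])
}

// Add inserts a new learning, or bumps the count of a duplicate.
func (s *Store) Add(text string) *Learning {
	k := hashText(text)
	if l, ok := s.byHash[k]; ok {
		l.Count++
		return l
	}
	l := &Learning{Text: text, Count: 1}
	s.byHash[k] = l
	return l
}

func main() {
	s := NewStore()
	s.Add("prefer batch embedding calls")
	l := s.Add("prefer batch embedding calls")
	fmt.Println(l.Count) // repeated insight -> higher count
}
```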

The Honest Assessment

Via is 16 days old. It has obvious gaps:

The persona selector uses keyword matching. It should use semantic similarity. The system's own meta-learnings have flagged 9 mismatch cases — it knows it's getting persona assignment wrong sometimes.

The learnings system has no decay. Old workarounds for patched bugs still get injected into agent prompts. There's no mechanism to age out stale knowledge.

No quality feedback on learnings. The system tracks which learnings are seen but not which ones are helpful. A learning might be served 50 times and ignored every time — there's no signal for this.

Cross-domain intelligence is limited. Learnings are tagged by domain, but there's no mechanism to surface patterns that span domains. A financial modeling insight that applies to data pipeline design won't appear in a development context.

But here's what makes me optimistic: the system is documenting its own shortcomings. Fifteen meta-learning entries about persona gaps tell me exactly which specialists to add. Nine mismatch entries tell me exactly where routing fails. Eleven observations tell me exactly which workflow optimizations to implement.

Via isn't just telling me what to build. It's telling me where it's broken. That's a fundamentally different relationship with your tools.

The Case for Personal AI

I don't think this vision is unique. Anyone who uses AI seriously across multiple domains will eventually reach the same conclusions:

  1. One model, one conversation, no memory isn't enough. The blank-slate problem is real, and it gets worse the more you use AI.

  2. Cost matters at scale. When you're running dozens of AI tasks daily, the difference between $15/M tokens and $0.10/M tokens determines whether your system is a hobby or a tool.

  3. Intelligence should compound. Every interaction should make the next one slightly better. If your AI setup forgets everything between sessions, you're paying for the same lessons repeatedly.

  4. Domains shouldn't be silos. Your budget, your tasks, your knowledge, your code — they're all connected in your life. Your AI should connect them too.
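The cost argument reduces to a few lines of routing logic. A sketch of the idea, with made-up model names, prices, and capability tiers (none of these are real provider figures): pick the cheapest model whose capability clears the task's bar.

```go
package main

import "fmt"

// model holds a cost per million tokens and a rough capability tier.
// Names and prices here are illustrative, not real provider pricing.
type model struct {
	name    string
	usdPerM float64
	tier    int // higher = more capable
}

// cheapestCapable returns the lowest-cost model that meets the
// required tier -- the routing idea behind the cost savings.
func cheapestCapable(models []model, needTier int) (model, bool) {
	var best model
	found := false
	for _, m := range models {
		if m.tier < needTier {
			continue
		}
		if !found || m.usdPerM < best.usdPerM {
			best, found = m, true
		}
	}
	return best, found
}

func main() {
	fleet := []model{
		{"frontier", 15.00, 3},
		{"mid", 3.00, 2},
		{"small", 0.10, 1},
	}
	m, _ := cheapestCapable(fleet, 1) // a routine task
	fmt.Println(m.name)               // routed to the cheap model
}
```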

The tools to build this already exist. Claude Code's plugin system provides the extension point. Go provides the CLI infrastructure. SQLite provides the storage. Gemini provides cheap embeddings and research. The hard part isn't any individual component — it's wiring them into a system that's more than the sum of its parts.
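The wiring itself can be as plain as a registry that dispatches tasks to named extensions. This is a hypothetical sketch of the plugin idea, not Claude Code's actual plugin API or Via's real interface; the `Plugin` and `registry` names are mine.

```go
package main

import "fmt"

// Plugin is a hypothetical extension point: each domain skill
// (finance, vault, research) registers under a name and handles tasks.
type Plugin interface {
	Name() string
	Handle(task string) (string, error)
}

type registry struct{ plugins map[string]Plugin }

func newRegistry() *registry { return &registry{plugins: map[string]Plugin{}} }

func (r *registry) register(p Plugin) { r.plugins[p.Name()] = p }

// dispatch routes a task to the named plugin, if one is registered.
func (r *registry) dispatch(name, task string) (string, error) {
	p, ok := r.plugins[name]
	if !ok {
		return "", fmt.Errorf("no plugin %q", name)
	}
	return p.Handle(task)
}

// echoPlugin is a stand-in for a real domain skill.
type echoPlugin struct{ name string }

func (e echoPlugin) Name() string { return e.name }
func (e echoPlugin) Handle(task string) (string, error) {
	return e.name + ": " + task, nil
}

func main() {
	r := newRegistry()
	r.register(echoPlugin{"finance"})
	out, _ := r.dispatch("finance", "summarize budget")
	fmt.Println(out)
}
```

The interesting engineering is everything around this skeleton: shared memory between plugins, routing, and the feedback loops.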

The Starting Line

Via is my starting line. Not my finish line.

The architecture is sound — plugins, learnings, personas, routing — but the implementation is rough. The persona selector needs semantic matching. The learnings system needs quality feedback and decay. The cross-domain intelligence needs actual implementation.

But the foundation works. Missions run. Learnings accumulate. Agents get smarter. Research runs on Gemini's free tier, so Claude's rate limits stay available for implementation. And every time a meta_gap or meta_mismatch entry appears in the database, I know exactly what to build next.

If you're building something similar, you're at your own starting line. The progression from single-model conversations to multi-LLM routing to persistent learning to cross-domain orchestration is one the tools now support. Two years ago, this would have been a research project. Today, it's a weekend project that grows into a personal operating system.

The future of AI isn't one smart assistant. It's an orchestrated ecosystem that spans your entire life, learns from every interaction, and gets better over time. Via is my version of that future, 16 days in.

