Tech Designs in the Agentic Era

Posted on: March 18, 2026 at 01:06 AM

In previous teams, tech designs served a specific purpose: a human would spend days writing a detailed plan, other humans would review it, and then humans would spend weeks implementing it. The document was the bottleneck—you couldn’t start building until it existed.

That world is gone. An agent can draft a plan in minutes. The artifact that once took days to produce is now trivially cheap.

So what’s left?

What Tech Designs Actually Did

Looking back, tech designs were never really about the document. They were context-sharing mechanisms. They built institutional knowledge in people. They created shared mental models across the team. They trained junior devs to think about systems.

The artifact was secondary. The process built understanding.

When someone wrote a tech design, they had to think through edge cases, anticipate objections, and encode production constraints they’d learned over years. When others reviewed it, they absorbed that knowledge. The document was a byproduct of humans exchanging context with each other.

Now agents can create the artifact. So what remains valuable? The human parts: discussion, context, judgment. The things the document was always just a vehicle for.

The “Agents Do Everything” Fallacy

There’s a seductive argument floating around: if agents handle implementation, humans don’t need deep system knowledge anymore. Just describe what you want, and the agent figures it out.

Research suggests otherwise. ETH Zurich found that context files like CLAUDE.md only help when they contain knowledge agents can’t derive from the code itself—specific tooling, custom commands, domain constraints. Architectural overviews? Agents figure those out on their own. LLM-generated context files actually hurt performance.
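To make that finding concrete, here is a sketch of what such a context file might contain. Everything in it is hypothetical (the script name, the limits, the constraints are invented), but it illustrates the distinction: record what the agent cannot derive from the code, and skip what it can.

```markdown
# CLAUDE.md (illustrative sketch — all specifics invented)

## Custom tooling the agent can't guess
- Run tests with `./scripts/test-all.sh`, not `pytest` directly —
  the script provisions the ephemeral database first.

## Domain constraints learned in production
- Order IDs are never reused, even after deletion (auditing requirement).
- The payments sandbox rate-limits at 10 req/s; batch accordingly.

## Deliberately omitted
- Architecture overviews — the agent derives those from the code,
  and generated summaries tend to hurt performance.
```

Notice that every line encodes knowledge someone had to possess first; the file is a container for human context, not a substitute for it.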

The implication is uncomfortable: someone still needs to have this knowledge to encode it.

Even with perfect context files, someone needs to verify the agent's output against production reality, judge the tradeoffs it chose, and notice when the plan drifts from what the customer actually meant.

The agent executes. The human still needs to understand.

From “In the Loop” to “On the Loop”

Kief Morris and Martin Fowler describe three models for human-agent collaboration: out of the loop, in the loop, and on the loop.

“In the loop” is what many teams do now—review every line of agent-generated code. It works, but creates a bottleneck. You’ve just shifted from writing code slowly to reviewing code slowly.

“On the loop” is different. Humans build and maintain the harness: specs, quality checks, constraints, workflow guidance. The agent operates within that harness. Humans intervene when the harness fails, then improve the harness so it fails less.

This is where tech design skills translate directly. The harness is the design—encoded in context files, test suites, and workflow definitions rather than a static document. You’re still doing the same work: thinking through constraints, anticipating edge cases, encoding production knowledge. You’re just encoding it for a different audience.
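As a minimal sketch of what "the harness is the design" can mean in practice: a production constraint encoded as an executable check rather than a paragraph in a document. The constraint, limit, and function names here are hypothetical, invented purely for illustration.

```python
# Sketch: one piece of "harness" — a hard-won production constraint
# (say, a downstream payload cap discovered during an outage) encoded
# as code and a test, so any agent that violates it fails immediately.
# All names and numbers are hypothetical.

MAX_BATCH_SIZE = 500  # invented limit standing in for real production knowledge


def chunk_requests(requests: list, batch_size: int = MAX_BATCH_SIZE) -> list:
    """Split requests into batches that respect the downstream limit."""
    if batch_size > MAX_BATCH_SIZE:
        raise ValueError(
            f"batch_size {batch_size} exceeds safe limit {MAX_BATCH_SIZE}"
        )
    return [requests[i:i + batch_size] for i in range(0, len(requests), batch_size)]


def test_batches_never_exceed_limit():
    # The test carries the design knowledge: violating the constraint
    # produces a self-explanatory failure instead of a production incident.
    batches = chunk_requests(list(range(1200)))
    assert all(len(b) <= MAX_BATCH_SIZE for b in batches)
    assert sum(len(b) for b in batches) == 1200
```

The point is not this particular check but the pattern: the edge case you would once have written into a design document becomes a guardrail the agent cannot silently bypass.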

The Context Problem

Here’s what worries me. PlayerZero calls it “the two clocks problem”: we track current state obsessively, but not why decisions were made.

Production knowledge is fragmented. Some lives in code. Some in dashboards. Some in old tickets. Most of it lives in people’s heads—the engineer who was on call during that outage, the architect who remembers why we don’t use that API, the PM who knows what the customer actually meant.

Experienced employees have superior “world models” from observing many situations over time. When they leave, that tacit knowledge disappears. Context graphs are an emerging approach—capturing decisions with evidence, constraints, and outcomes—but it’s far from solved.

Until it is, humans remain the keepers of institutional knowledge. Agents can implement. They can’t remember why we tried that approach three years ago and it failed.

The New Flow

The bottleneck has shifted. Creating a plan is fast; the discussion is what matters.

Old flow:

  1. Human spends days writing tech design
  2. Humans review for a week
  3. Humans implement for weeks
  4. Reality diverges from plan

New flow:

  1. Agent drafts a plan in minutes
  2. Humans discuss, challenge, add context
  3. Domain experts weigh in (they didn’t need to write anything)
  4. Agent implements while humans watch
  5. Learnings go back into the harness

The value moved from “writing the doc” to “having the discussion.” New devs learn by participating in plan discussions, not reading stale documentation. Stakeholders can engage earlier because plans are cheap to produce.

The artifact—the plan—goes into CLAUDE.md or wherever the agent will read it. The understanding stays in people.

What Changes, What Stays

Changes:

  - The artifact is cheap: an agent drafts a plan in minutes, so writing is no longer the bottleneck.
  - Review shifts from documents to discussions, and stakeholders engage earlier.
  - Design knowledge is encoded for agents, in context files, test suites, and workflow definitions, rather than in static documents.

Stays:

  - Thinking through constraints and edge cases before building.
  - Institutional knowledge and judgment living in people.
  - Understanding why, not just what.

Where This Leaves Us

Agents amplify human judgment. They don’t replace it.

The question isn’t “do we need humans who understand the system?” The answer is obviously yes. The question is: what should humans focus on?

My answer: building and maintaining the harness. Understanding production reality. Transferring knowledge so it doesn’t disappear when people leave. Having the discussions that plans are meant to provoke.

Tech designs evolved into plans and context engineering. The artifact changed; the purpose didn’t. We still need to think before building. We still need people who understand why, not just agents who execute what.