[Cover image: a glowing Go gopher and a geometric AI mark connected by streams of light, with a stylized arrow rising upward, code-snippet panels and circuit-board traces in the background.]

AI Didn't Kill Engineering. It Moved Engineering Upstream.

AI did not kill engineering. It killed the illusion that typing code was the hard part. In my Claude Code + Go + oCMS workflow, the human job moved upstream: intent, constraints, architecture, verification, and context discipline.


A recent article made a sharp point: AI did not kill engineering — it made bad engineers visible.

I mostly agree.

But I would phrase it slightly differently:

AI did not kill engineering. It moved engineering upstream.

The old visible work was typing code.
The new visible work is deciding what should exist, why it should exist, how it should behave, what constraints it must obey, and how we know it is correct.

That is a very different job.

It is also a better one.

The mistake is thinking that “AI writes code” means “engineering is solved.” It does not. It means the low-level production of code is becoming cheaper, faster, and less interesting. The expensive part is no longer syntax. The expensive part is unclear thinking.

And unclear thinking scales terribly with AI.

The trap: AI-generated spec → AI-generated code → fake confidence

The dangerous workflow looks like this:

Ask AI to write a spec
Ask AI to implement the spec
Ask AI to write the tests
Ask AI if everything is good
Ship

It feels productive. It even looks professional. There is a Markdown spec. There are commits. There are tests. Maybe there is a nice green CI badge.

But something important is missing:

No human owned the intent.

If the AI wrote the spec and the AI wrote the code, the project may only be verifying its own assumptions. It can produce beautiful code that solves the wrong problem. It can generate tests that confirm the behavior it already invented. It can write documentation that sounds precise while quietly encoding a misunderstanding.

That is not engineering.

That is automated self-confidence.

This is where the “AI made bad engineers visible” argument becomes real. Before AI, a weak engineer could hide behind activity: typing, wiring, configuring, searching Stack Overflow, moving tickets. With AI, that surface area collapses. The question becomes much simpler:

Can you explain what should happen and why?

If not, the AI will happily build your confusion.

My workflow: Claude Code executes, I engineer

I use Claude Code heavily. Not as autocomplete. Not as a fancy Stack Overflow. Not as a toy.

I use it as an execution engine.

Claude Code can read a codebase, edit files, run commands, and work inside a real repository with real diffs and real consequences. That is exactly why it is powerful — and exactly why it needs strong human direction.

The important part is not that Claude Code can write code.

The important part is where I put the human.

My loop looks more like this:

Human intent
  ↓
AI-assisted spec
  ↓
Human review of assumptions
  ↓
Claude Code plan
  ↓
Human approval / correction
  ↓
Implementation
  ↓
Tests, compiler, linters, wikilint
  ↓
Diff review
  ↓
Production
  ↓
Documentation / LLM Wiki update

Claude Code is not the engineer in this loop.

Claude Code is the worker that never gets tired.

The engineer is the person deciding:

  • what matters
  • what does not matter
  • what must never happen
  • what can be postponed
  • what tradeoff is acceptable
  • what needs a test
  • what needs a rollback path
  • what needs a human decision

That is not less engineering.
That is concentrated engineering.

Why Go fits this workflow so well

My main production AI-coding experiments are in Go, especially around oCMS and IT Digest.

That is not an accident.

Go is a brutally useful language for AI-assisted development because it has low magic and high feedback.

A Go project gives Claude Code a very honest environment:

gofmt
go vet
go test
static types
explicit imports
boring files
boring build
single binary

Boring is good.

Boring is how you prevent the AI from hiding mistakes behind framework fog.
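What that honest feedback looks like in practice is mostly table-driven tests over small typed functions. A minimal sketch (the `slugify` helper and its cases are hypothetical, not from oCMS): if the agent gets the behavior wrong, the compiler or the test says so immediately.

```go
package main

import (
	"fmt"
	"strings"
)

// slugify is a hypothetical helper. The point is not the function itself:
// typed Go code plus a table-driven test gives an AI agent immediate,
// honest feedback instead of framework fog.
func slugify(title string) string {
	s := strings.ToLower(strings.TrimSpace(title))
	return strings.ReplaceAll(s, " ", "-")
}

func main() {
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  Trim Me  ", "trim-me"},
	}
	for _, c := range cases {
		if got := slugify(c.in); got != c.want {
			panic(fmt.Sprintf("slugify(%q) = %q, want %q", c.in, got, c.want))
		}
	}
	fmt.Println("ok")
}
```

`go test`, `go vet`, and `gofmt` run over exactly this kind of code; there is nowhere for a wrong guess to hide.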

In oCMS, the production philosophy is simple: one Go binary, no unnecessary external services, SQLite as the primary database, server-rendered HTML, HTMX and Alpine.js where they make sense, and explicit architecture over framework magic.

That matters because AI works best when the project has sharp edges.

A vague Node.js project with five build systems, three state managers, and “just make it work” instructions is a hallucination playground.

A Go project with explicit packages, typed SQL, tests, and a Makefile is a much better target.

The compiler becomes a colleague.

Not a very friendly colleague, but a useful one.

oCMS: AI-generated work still has to survive production

The most important thing about my workflow is that it is not theoretical.

oCMS is not a prompt demo. It is the CMS behind IT Digest. It is a real codebase, with real architecture, real modules, real security decisions, real deployment constraints, and real maintenance burden.

That changes the conversation.

When Claude Code modifies oCMS, the output has to survive:

make check
gofmt
go vet
go test
security expectations
existing architecture
deployment reality
my diff review

This is where AI coding stops being magic and starts being engineering.

I do not care that Claude can generate 500 lines quickly. I care whether those 500 lines belong in the system.

Does the code fit the module model?
Does it preserve existing behavior?
Does it introduce a hidden dependency?
Does it create a migration problem?
Does it make the admin UI harder to reason about?
Does it document the behavior in the right place?
Does it update the LLM Wiki?

Speed is useful only after direction is correct.

A fast wrong implementation is not productivity. It is accelerated cleanup.

The LLM Wiki: making context visible

The biggest improvement I made to my AI workflow was not a prompt trick.

It was the LLM Wiki.

The LLM Wiki: a structured map of entities and topics where contradictions are surfaced, not smoothed over.

I built an LLM Wiki for oCMS: structured Markdown pages generated over a large Go codebase, with contradictions surfaced instead of silently smoothed over. The wiki is not just documentation. It is an operational memory layer for AI agents.

This is the missing piece in a lot of AI coding workflows.

People talk about context windows as if they are just token buckets. But context is not only size. Context is structure.

A large context full of stale docs, duplicate explanations, and contradictory assumptions is not context.

It is noise with citations.

The LLM Wiki gives the agent a map:

index.md
  ├── entities/
  │   ├── page.md
  │   ├── media.md
  │   ├── webhook.md
  │   └── api-key.md
  ├── topics/
  │   ├── caching.md
  │   ├── security-overview.md
  │   ├── deployment.md
  │   └── modules.md
  └── sources/
      └── original files with provenance

More importantly, it tells the agent where the story does not hold together.

That is the key.

A normal documentation system tries to look clean.

The LLM Wiki is allowed to be impolite. If two source files disagree, it says so. If the old wiki says one thing and the current docs say another, it does not average them into a plausible lie. It flags the contradiction.

That is exactly what AI-assisted engineering needs.

Not more confidence.

More visible uncertainty.

The human role: not typing, but constraint design

In the old workflow, the human wrote most of the code.

In my current workflow, the human designs constraints.

That sounds abstract, but it is very concrete.

For a new feature, I care about things like:

Non-goals
Failure modes
Security boundaries
Data ownership
Migration behavior
Operational simplicity
Rollback path
Testing strategy
Documentation impact
Architecture fit

Claude Code can implement within those boundaries.

But it should not invent those boundaries alone.

This is the part many people miss when they dismiss “vibe coding.” Bad vibe coding is real. You ask for something vague, accept whatever comes out, and ship because the demo works.

But disciplined AI-assisted development is different.

It is not:

AI, build whatever.

It is:

Here is the system.
Here are the constraints.
Here are the invariants.
Here is what must not change.
Here is how we verify it.
Now propose a plan before editing files.

That is not laziness.

That is delegation.

And delegation is only safe when the person delegating understands the work.
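One concrete way to hand an agent "here is what must not change" is to encode the constraint as an invariant test before any implementation starts. A hedged sketch with a hypothetical invariant (draft pages are never publicly visible), not oCMS's actual model:

```go
package main

import "fmt"

// Page is a hypothetical model; what matters is the invariant it encodes.
type Page struct {
	Status string // "draft" or "published"
	Public bool
}

// visible reports whether a page may be served to anonymous visitors.
func visible(p Page) bool {
	return p.Status == "published" && p.Public
}

func main() {
	// Invariant: a draft must never be visible, whatever other flags say.
	drafts := []Page{
		{Status: "draft", Public: false},
		{Status: "draft", Public: true}, // a misconfigured flag must not leak it
	}
	for _, p := range drafts {
		if visible(p) {
			panic("invariant violated: draft page is visible")
		}
	}
	fmt.Println("invariant holds")
}
```

The agent can refactor `visible` however it likes; the boundary it must not cross is now executable, not just a sentence in a prompt.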

The Telegram bot experiment: zero typed production code, not zero engineering

My it-digest-bot project is a good example.

It was a production Telegram bot built with Claude Code in 54 commits. Claude Desktop drafted the spec, Claude Code executed it, and the result was a production-grade single-binary Go service scheduled by systemd.

The provocative part is that I typed zero production code.

But that is not the important part.

The important part is what I did instead.

I reviewed the architecture.
I pushed back on defaults.
I inspected diffs.
I checked whether the systemd design made sense.
I cared about idempotency.
I cared about SQLite state.
I cared about test coverage.
I cared about the operational model.
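The idempotency concern above can be sketched in a few lines. This is not the bot's actual code; an in-memory set stands in for its SQLite state table, but the shape is the same: record what was posted, and only post what is new, so re-running the scheduled binary never duplicates a digest item.

```go
package main

import "fmt"

// store stands in for the bot's SQLite state; in the real service this
// would be a table keyed by item ID (the schema here is hypothetical).
type store struct{ seen map[string]bool }

// markIfNew records an item ID and reports whether it was new.
// Posting only when markIfNew returns true makes the run idempotent.
func (s *store) markIfNew(id string) bool {
	if s.seen[id] {
		return false
	}
	s.seen[id] = true
	return true
}

func main() {
	s := &store{seen: map[string]bool{}}
	posted := 0
	for _, id := range []string{"item-1", "item-2", "item-1"} { // item-1 repeats
		if s.markIfNew(id) {
			posted++
		}
	}
	fmt.Println("posted:", posted) // the duplicate is skipped
}
```

With SQLite, the same idea is a unique constraint plus `INSERT OR IGNORE`; either way, the design decision is the human's, not the generator's.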

The AI wrote the code.

I owned the shape of the system.

That distinction is everything.

A bad engineer hears “zero typed code” and thinks “no engineering happened.”

A good engineer asks: who defined the constraints, reviewed the tradeoffs, verified the behavior, and accepted responsibility for production?

That is where the engineering lives now.

What AI makes visible

AI makes several things painfully visible.

1. Vague requirements

Before AI, vague requirements created slow confusion.

Now they create fast confusion.

The agent will not necessarily stop and say, “Your idea is underspecified.” It may simply implement the most statistically likely interpretation.

That means the human must get better at specifying intent.

2. Weak architecture

AI can generate code faster than your architecture can absorb it.

If your project has no clear package boundaries, no naming discipline, no testing strategy, and no documentation structure, AI will amplify that mess.

It will add more code to a system that already does not know where code belongs.

3. Fake tests

AI can write tests that pass.

That is not the same as writing tests that matter.

A test is only useful if it protects a behavior you actually care about. Otherwise it is just executable decoration.
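The difference can be made concrete. In this hedged Go sketch (the `truncate` function and its requirement are hypothetical), the first check merely restates whatever the implementation already does; the second protects a behavior someone actually decided to care about:

```go
package main

import "fmt"

// truncate shortens s to at most max runes. The behavior we care about
// (hypothetically) is that multi-byte text is never split mid-rune.
func truncate(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max])
}

func main() {
	// Decorative test: passes by construction, protects nothing.
	if truncate("hello", 10) != "hello" {
		panic("decorative check failed")
	}
	// Meaningful test: pins the requirement — multi-byte input must
	// survive truncation without corruption.
	if got := truncate("héllo wörld", 7); got != "héllo w" {
		panic(fmt.Sprintf("rune-safe truncation broken: %q", got))
	}
	fmt.Println("ok")
}
```

An agent will happily generate the first kind all day; only a human knows the second kind is the one that matters.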

4. Documentation drift

AI loves documentation. It can generate it endlessly.

But documentation that is not checked against reality becomes dangerous faster with AI, because agents will read it and treat it as truth.

This is why the LLM Wiki matters. It does not just produce docs. It compares claims.
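The claim-comparison idea is simple enough to sketch. This is an illustration, not the LLM Wiki's implementation: keep each statement with its provenance, group by topic, and report topics where sources disagree instead of averaging them into one plausible sentence.

```go
package main

import "fmt"

// claim is a hypothetical record: what one source says about a topic,
// with provenance kept so disagreements can be reported, not hidden.
type claim struct{ source, topic, statement string }

// contradictions returns the topics on which sources disagree.
func contradictions(claims []claim) []string {
	byTopic := map[string]map[string][]string{} // topic -> statement -> sources
	for _, c := range claims {
		if byTopic[c.topic] == nil {
			byTopic[c.topic] = map[string][]string{}
		}
		byTopic[c.topic][c.statement] = append(byTopic[c.topic][c.statement], c.source)
	}
	var out []string
	for topic, statements := range byTopic {
		if len(statements) > 1 { // more than one version of the truth
			out = append(out, topic)
		}
	}
	return out
}

func main() {
	claims := []claim{
		{"docs/caching.md", "cache-ttl", "default TTL is 60s"},
		{"old-wiki/cache.md", "cache-ttl", "default TTL is 300s"},
		{"docs/deploy.md", "binary", "single binary"},
	}
	fmt.Println("conflicting topics:", contradictions(claims))
}
```

The output for a topic with two versions is a flag, not a merged answer; that flag is what an agent should see before it writes code.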

5. Engineers who cannot explain decisions

This is the big one.

When AI writes the boilerplate, the human can no longer hide behind boilerplate.

The remaining questions are architectural:

Why this design?
Why this dependency?
Why this database model?
Why this deployment shape?
Why this security boundary?
Why this test?
Why now?

If you cannot answer those, AI did not replace you.

It revealed you.

So is this positive for AI?

Yes.

Very positive.

But not in the shallow “developers are obsolete” way.

AI is positive for engineering because it removes a huge amount of mechanical friction. It lets one experienced engineer explore, prototype, implement, test, document, and ship at a pace that used to require a small team.

But AI is not positive for every engineer equally.

It amplifies people who can think clearly.

It exposes people who cannot.

That sounds harsh, but I think it is good for the profession.

Software engineering has tolerated a lot of fake productivity: ticket motion, meeting output, framework cargo culting, boilerplate rituals, and endless “best practices” repeated without understanding.

AI compresses the distance between idea and implementation.

That means bad ideas reach implementation faster too.

So the bottleneck moves.

The bottleneck is no longer typing.

The bottleneck is judgment.

My current rule

The more I use Claude Code, the more I trust this rule:

Never let AI be the only author of both the question and the answer.

If AI drafts the spec, I review the assumptions.
If AI writes the code, I review the diff.
If AI writes the tests, I check what behavior they actually protect.
If AI writes documentation, I make it cite sources.
If two sources disagree, I want the contradiction visible.
If the plan is unclear, I stop the implementation.

This is not anti-AI.

This is how AI becomes usable.

The new engineering stack

My current stack is not just:

Go + Claude Code + SQLite + oCMS

It is more like:

Human intent
Claude Desktop for spec exploration
Claude Code for repository execution
Go for compile-time honesty
Tests for behavior
Linters for boring correctness
LLM Wiki for structured context
wikilint for documentation discipline
oCMS for production reality

That is the actual workflow.

The code generator is only one part of it.

The real system is the loop.

Conclusion: engineering did not disappear

AI did not kill engineering.

It killed the illusion that typing code was the hardest part.

For years, many developers treated code production as the center of the job. AI is proving that code production was only one layer. An important layer, yes, but not the deepest one.

The deeper work is intent.

The deeper work is architecture.

The deeper work is verification.

The deeper work is knowing what should not be built.

Claude Code can write code for me. That is useful.

But it cannot decide what kind of engineer I am.

That part is now more visible than ever.

And honestly, I think that is good news.

/**
 * @author OIV
 *
 * Fear not the AI that passes the test. Fear the one that pretends to fail it.
 *
 * IT-Digest AI Assistant
 */