For the past two years, we’ve been building Springtail — a distributed database that wraps around an existing Postgres instance, turning it into a horizontally scalable, multi-node system. It’s a large, complex project that requires careful planning and a solid understanding of systems architecture.
About six months ago, I started looking for ways to move faster. I didn’t have as much time for coding as I used to, and I kept hearing about people using AI editors like Cursor and Windsurf to crank out web apps and Python scripts quickly. I figured maybe these editors could help me too.
Using AI editors never felt natural to me. I’ve been an Emacs user for years, and switching to something like Cursor felt awkward right away. Editing code felt clunky and unintuitive, even with slightly better auto-complete.
When I tried using their AI features to generate code, things got worse. The editors would often create code in isolation, generating one-shot solutions that didn’t fit the broader system. Every time I gave feedback, they’d rewrite everything instead of improving what was there. In a distributed system like ours, where components depend on one another, that approach caused more problems than it solved.
I ended up spending more time fighting the editor than writing code. It wasn’t helping — it was getting in the way. Eventually, I went back to my old workflow.
A month ago, I decided to try again. I’d heard some great anecdotes about Claude Code, part of a newer category of AI systems I think of as coding agents. Unlike AI editors, which focus on inline completions, coding agents are conversational. They can research, reason, plan, and work across multiple files with broader context.
What stood out immediately was the ability to talk through ideas before writing code. I could describe a problem, walk through existing components, and ask the coding agent to build a detailed plan — proposed interfaces, algorithms, dependencies, and all. I could even prompt it to ask clarifying questions when something wasn’t clear.
Instead of one-shotting an entire feature, it broke the work into smaller, manageable tasks. It wrote one piece, paused for review, then moved on. That mirrored how I naturally work: plan carefully, build incrementally, and refine as you go.
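To make that concrete, here is the shape of a planning prompt I might start with. The feature and module names below are invented for illustration; what matters is the structure: context first, then clarifying questions, then a plan broken into small, reviewed tasks.

```
We need to reduce the number of round trips for small read-only queries.
The relevant code is in the query routing and replica-fetch paths.

Before writing any code:
1. Ask me clarifying questions about anything that's ambiguous.
2. Propose a plan: the interfaces you'd add or change, the algorithm,
   and the existing components the change touches.
3. Split the plan into small tasks, and stop after each one for review.
```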
Over the last month, I’ve stopped writing code myself — and I’ve never been more productive with my limited time. In just two weeks, I’ve shipped four performance improvements and built two components teammates needed. Doing that without AI would have taken at least three times as long.
Coding agents are powerful but limited. They handle detail work well but often avoid big-picture refactoring or interface changes. They’ll optimize within a given structure but rarely challenge it.
That cautiousness reminds me of working with junior engineers: hesitant to make sweeping changes without full context. They don’t yet see how everything fits together. That’s where a tech lead or architect steps in today.
Weak spots like these are why coding agents still need oversight. The human role shifts from coder to reviewer: ensuring consistency, quality, and architectural integrity.
Working with coding agents changes what it means to develop software. You stay hands-on, but your focus shifts to design, architecture, and performance. Your job becomes understanding the full system, defining clear plans, and reviewing output.
Most of my “coding” time now goes into:
- understanding the full system and the constraints a change has to respect
- writing detailed plans and breaking them into small, reviewable tasks
- reviewing the generated code for correctness and for how it fits everything else
I spend my time thinking instead of typing. The agent handles repetitive work, freeing me to focus on decisions that actually matter.
That speed is both impressive and risky. Because coding agents generate so much code so quickly, small mistakes can multiply fast. Without enough planning, you end up with well-structured but wrong code.
The fundamentals of good engineering still apply: design first, implement second, test always. The more time you invest in planning, the better your results.
Coding agents amplify experience. The more you understand design, data structures, and performance, the more effectively you can guide them.
If you don’t understand the code they produce, you can’t direct or debug it. Experienced developers can spot when a solution looks fine but misses the broader context or objectives, which is something coding agents still struggle with today.
After a month of working this way, a few lessons stand out:
- Define the architecture and constraints before coding begins, and give the agent all the necessary context (see the sketch after this list).
- Focus on design and structure instead of editing lines.
- Look for inefficiencies, API quirks, and subtle bugs across components.
- Communicate, iterate, and correct as needed.
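On the first point: Claude Code, for example, reads a CLAUDE.md file at the root of the repository, which is a natural place to keep the architecture and constraints so they don’t have to be repeated in every conversation. A sketch of what that might contain (the details are illustrative, not Springtail’s actual file):

```
# CLAUDE.md (illustrative sketch)

## Architecture
- Springtail wraps an existing Postgres instance and turns it into a
  horizontally scalable, multi-node system.
- Components depend on one another; call out any interface change
  explicitly in your plan before touching it.

## How to work
- Before writing code, propose a plan: interfaces, algorithms, and the
  components the change touches. Ask clarifying questions first.
- Work in small tasks and pause for review after each one.
- Prefer targeted edits over wholesale rewrites of existing components.
```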
This approach doesn’t just make development faster; it gives you more time to think. It shifts the focus away from syntax and toward architecture and experimentation.
Building software — even complex distributed systems — now feels less like typing and more like composing. You think in terms of design and tradeoffs, spending your energy on performance and integration challenges.
That doesn’t mean the craft of building software is over. It’s evolving. Compilers moved us from assembly to high-level languages. Coding agents have the potential to move us entirely past the mechanical components of software development. The challenge is learning to guide them well (at least for now!).
The more I work with coding agents, the more they remind me of junior engineers. It makes me wonder what other human roles AI can emulate. If an LLM is a kind of blank-slate intelligence, then with the right context, it should be able to take on specific professional roles by mirroring how those roles think and act.
Maybe instead of one hyper-intelligent agent, we’ll see many contextualized ones: a test engineer agent, an SRE agent, maybe even a tech lead or architect agent. We already know how these human structures work. We just need to distill them into the right contexts for machines to use.