Many people use coding agents for a bit, get frustrated, and conclude "they're not good."
That's like picking up a guitar, fumbling through a few chords, and concluding guitars don't work.
The people getting the most out of agents aren't using better tools.
They built the skill to use them well.
Three fundamentally different activities. Keep them separate.
A $0.50 prompt that gets it right is cheaper than five $0.05 prompts that each need fixing.
Don't optimize for token cost when the bottleneck is your time.
Compaction loses dead ends, file reads, and failed attempts that still shape every response. Break problems down so you never need it.
Agents are only good at things that are easy to verify.
Code compiles or it doesn't. Tests pass or they don't. Where verification is hard, agents fail quietly.
A test that has never failed is a test you cannot trust.
A smoke detector that has never been near smoke tells you nothing about whether it works.
Have the agent introduce a regression and confirm the test actually catches it.
2.7x — across 470 real PRs, with a human comparison group. The code works. It passes tests. It isn't secure.
git diff between commits is the fastest way to review.
The agent does the typing.
The human does the judgment.
An agent refactors your database access layer. It compiles, passes tests, looks clean. A senior engineer sees it opens a new connection per query instead of using the pool.
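A minimal Python sketch of the difference the senior engineer spots, using sqlite3 and a tiny illustrative pool (class and function names are hypothetical, not from the post):

```python
import sqlite3
from queue import Queue

class ConnectionPool:
    """Illustrative pool: a fixed set of connections, borrowed and returned."""
    def __init__(self, db_path: str, size: int = 4):
        self._pool: Queue = Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def query(self, sql: str, params=()):
        conn = self._pool.get()      # borrow an existing connection
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            self._pool.put(conn)     # return it instead of closing

# What the agent wrote: correct results, but a new connection per query.
def query_naive(db_path: str, sql: str, params=()):
    conn = sqlite3.connect(db_path)  # connection setup cost paid every call
    try:
        return conn.execute(sql, params).fetchall()
    finally:
        conn.close()
```

Both versions return the same rows and pass the same tests; only the pooled version survives production load. That gap is exactly what compiles-and-passes verification misses.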
Read the full post
alexandersumer.com/blog/how-to-use-ai-coding-agents-effectively