How to Use AI Coding Agents Effectively

Many people use coding agents for a bit, get frustrated, and conclude "they're not good."

That's like picking up a guitar, fumbling through a few chords, and concluding guitars don't work.

The people getting the most out of agents aren't using better tools.

They built the skill to use them well.

Plan · Execute · Verify

Three fundamentally different activities. Keep them separate.

Planning

Plan your sessions

  • Break your task into a sequence of prompts, executed one by one
  • Use plan mode: the agent reads, reasons, and proposes without writing code. Throwaway reads stay out of your real session.
  • Track state in markdown files so the agent picks up where the last session left off
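One way to track that state, sketched as a shell heredoc. The filename and section headings are assumptions, not a convention agents require; the point is that the next session can read one file and know exactly where the last one stopped.

```shell
# Sketch: a session-state file the agent reads at startup and updates as it
# works (task, sections, and entries here are purely illustrative).
dir=$(mktemp -d); cd "$dir"
cat > PLAN.md <<'EOF'
# Task: migrate auth module to the new session API
## Done
- [x] Read README and existing tests
## In progress
- [ ] Replace legacy login() calls
## Next
- [ ] Update integration tests
## Decisions
- Keep the old cookie format until the client is updated
EOF
```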
Planning

Context before code

  • First: read the README, understand the test framework, reproduce the problem
  • Lead with constraints and context, not implementation details
  • Front-loading all your context is fine when the agent is reasoning about the plan, not executing it
Execution

Parallelize your sessions

  • Run each session in its own worktree or clone: isolated working directory, isolated context window
  • Three parallel sessions = triple your total context budget
  • One concern per session
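Git worktrees make this cheap: each one is a separate working directory sharing a single object store, so you can point one agent session at each. A minimal sketch, using a throwaway demo repo in place of your project and hypothetical branch names:

```shell
# Sketch: one isolated worktree per agent session.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init

# Each worktree gets its own directory and branch; edits never collide.
git -C "$repo" worktree add -q -b agent/auth "$repo-auth" main
git -C "$repo" worktree add -q -b agent/docs "$repo-docs" main
git -C "$repo" worktree list   # three directories, three independent sessions
```

When a session finishes, `git worktree remove` and a branch merge or delete clean it up.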
Execution

Smarter models are cheaper

A $0.50 prompt that gets it right is cheaper than five $0.05 prompts that each need fixing.

Don't optimize for token cost when the bottleneck is your time.

Execution

Manage your context window

[Chart: response quality vs. context window usage. Quality stays high up to roughly 50% usage; around 70%, compact; approaching 100%, start fresh.]

Compaction throws away the dead ends, file reads, and failed attempts: context that was still shaping every response. Break problems down so you never need it.

Verification

Tell the agent how to verify its own work

  • Example: after refactoring, write a script to check that only test cases calling suspending methods got runTest added and no annotations were removed
  • The verification must be automated and deterministic
  • Not "look at the code and tell me if it's right"
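A deterministic version of that runTest check can be as blunt as counting annotations before and after the refactor. A sketch, using a throwaway repo in place of the real one (in practice you would compare HEAD against the pre-refactor commit); it fails loudly instead of offering an opinion:

```shell
# Demo setup: a tiny repo with a "before" commit and a "refactor" commit.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q -b main
printf '@Test\nfun a() {}\n@Test\nfun b() {}\n' > T.kt
git add T.kt; git -c user.email=a@b -c user.name=demo commit -qm before
printf '@Test\nfun a() = runTest {}\n@Test\nfun b() = runTest {}\n' > T.kt
git add T.kt; git -c user.email=a@b -c user.name=demo commit -qm refactor

# The check: no @Test annotation may disappear across the refactor.
before=$(git grep -c '@Test' HEAD~1 | awk -F: '{s+=$NF} END {print s+0}')
after=$(git grep -c '@Test' HEAD | awk -F: '{s+=$NF} END {print s+0}')
[ "$after" -ge "$before" ] || { echo "FAIL: annotations dropped"; exit 1; }
echo "OK: @Test count preserved ($after)"
```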
Verification

Agents are only good at things that are easy to verify.

Code compiles or it doesn't. Tests pass or they don't. Where verification is hard, agents fail quietly.

Verification

A test that has never failed is a test you cannot trust.

A smoke detector that has never been near smoke tells you nothing about whether it works.

Have the agent introduce a regression and confirm the test actually catches it.
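The regression check itself is mechanical: break the code on purpose, run the test, and demand a failure. A self-contained sketch with a toy function and test standing in for real ones:

```shell
set -e
# Demo project: a function and a test for it (names are illustrative).
dir=$(mktemp -d); cd "$dir"
cat > calc.sh <<'EOF'
add() { echo $(( $1 + $2 )); }
EOF
cat > test.sh <<'EOF'
. ./calc.sh
[ "$(add 2 3)" = 5 ] || { echo "test failed"; exit 1; }
echo "test passed"
EOF

# 1. The test passes on correct code.
sh test.sh
# 2. Introduce a regression (flip + to -) and confirm the test catches it.
sed -i.bak 's/+/-/' calc.sh
if sh test.sh >/dev/null 2>&1; then
  echo "UNTRUSTWORTHY: test missed the regression"; exit 1
else
  echo "trusted: test caught the regression"
fi
mv calc.sh.bak calc.sh   # restore the correct code
```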

Verification

Invest in your local dev loop

  • Your agent is only as good as your feedback loop
  • Reproduce CI locally, run the service e2e locally
  • Fast local feedback: lint, type check, unit tests, e2e, security scan
  • If CI takes 20 minutes remotely, you're flying blind for 20 minutes
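One way to keep that loop tight is a single entry point that runs every check fail-fast, cheapest first. The commands below are placeholders, not real tooling; substitute your project's actual lint, type-check, test, and scan invocations:

```shell
# Sketch: one local gate mirroring CI, ordered cheap-to-expensive.
run_checks() {
  for step in "echo lint" "echo typecheck" "echo unit-tests" "echo e2e"; do
    sh -c "$step" || { echo "FAILED at: $step"; return 1; }
  done
  echo "local loop green"
}
run_checks
```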
Verification

2.7x

more security vulnerabilities in AI-generated code vs. human-written code.

Measured across 470 real PRs against a human comparison group. The code works. It passes tests. It isn't secure.

Verification

Don't rely on the model to remember your rules

  • An instruction saying "never use rm -rf" can get ignored. An automated rule that blocks it cannot.
  • Keep rule files small and scoped. A monolithic rules file fills context and later rules get ignored.
  • Hooks, pre-commit checks, automated guardrails
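The shape of such a guardrail: a wrapper that checks each command against a denylist before it ever runs. Hook mechanisms vary by agent, so this is the idea rather than any specific tool's API:

```shell
# Sketch: reject dangerous commands mechanically instead of asking the
# model to remember not to run them (denylist pattern is illustrative).
guard() {
  case "$1" in
    *"rm -rf"*) echo "blocked: $1" >&2; return 1 ;;
    *) sh -c "$1" ;;
  esac
}

guard "echo safe command"        # runs normally
guard "rm -rf /tmp/foo" || true  # blocked before it can execute
```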
Meta-Skills

Know when to bail

  • Context corruption is irreversible. Failed attempts confuse every subsequent turn.
  • Start a clean session. It takes 30 seconds.
  • Bail signals: repeats itself, contradicts earlier decisions, suggests rejected fixes
Meta-Skills

Commit before everything

  • Agents move fast and break things across many files at once
  • Commit before every major agent operation
  • git diff between commits is the fastest way to review
  • Small commits let you revert a single bad step instead of an entire session
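The checkpoint habit in miniature, with a throwaway repo and a simulated bad agent edit standing in for the real thing:

```shell
# Sketch: commit a checkpoint, let the agent run, review, revert if needed.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q -b main
echo "good" > app.txt
git add -A; git -c user.email=a@b -c user.name=demo commit -qm "checkpoint: before agent step"

echo "bad agent edit" > app.txt   # simulate a step gone wrong
git diff --stat                   # fastest review: exactly what changed?
git checkout -- app.txt           # revert just this step, not the session
cat app.txt
```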
Meta-Skills

Work in agent-friendly languages

  • Type safety: the compiler catches errors
  • Source-available: the agent reads actual code
  • Popularity: training data

[Chart plots TS, Go · Rust, Zig · Kotlin, C# · Python, JS against these criteria]
Meta-Skills

The agent does the typing.
The human does the judgment.

An agent refactors your database access layer. It compiles, passes tests, looks clean. A senior engineer sees it opens a new connection per query instead of using the pool.

Meta-Skills

Know when NOT to use agents

  • Pricing logic across products, admin permissions, infrastructure teardown. Anywhere subtle context spans many steps.
  • High cost of failure + hard to verify = do it yourself
  • Agents are confident and fast. If you can't verify quickly, that confidence works against you.