Working Effectively with Coding Agents

Core principle

Coding agents should accelerate disciplined engineering, not replace it. Treat them as force multipliers for good process — not autonomous authorities. The codebase, architecture, contracts, schema, and operational realities remain the source of truth.


1. Start with strong framing

Spend more time on design before asking for code. Be explicit about:

  • Objective
  • Constraints and non-goals
  • Performance expectations
  • Security requirements
  • Logging and error-handling rules
  • Reuse of existing modules and utilities
  • Architecture boundaries
  • What must not be changed
Most bad outputs come from bad framing, not weak code generation.


2. Break work into small bounded tasks

Do not ask for vague large-scope work like "build billing" or "fix auth." Ask for one bounded unit at a time.

Good task examples:

  • Add idempotency handling to one endpoint
  • Create one migration with specific columns and indexes
  • Refactor one repository method without changing API contracts
If a reviewer cannot understand the full diff quickly, the task is too large.
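The first example above can be made concrete. A minimal sketch of idempotency handling for a single endpoint, assuming a hypothetical create_payment handler and an in-memory store (a real service would use a durable store with TTLs):

```python
import hashlib
import json

# In-memory idempotency store; production code would use Redis or a DB table.
_responses = {}

def idempotent(handler):
    """Return the cached response when the same idempotency key is replayed."""
    def wrapper(idempotency_key, payload):
        if idempotency_key in _responses:
            return _responses[idempotency_key]
        result = handler(payload)
        _responses[idempotency_key] = result
        return result
    return wrapper

@idempotent
def create_payment(payload):
    # Hypothetical handler: derive a stable id from the payload for illustration.
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"payment_id": digest[:8]}

first = create_payment("key-123", {"amount": 50})
replay = create_payment("key-123", {"amount": 50})
assert first == replay  # the duplicate request returns the original response
```

A task scoped this tightly produces a diff a reviewer can absorb in one sitting.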


3. Make the agent produce a plan before code

Ask for a change plan first. The plan should include:

  • Files to create or change
  • Why each file is affected
  • Assumptions made
  • Risks identified
  • Interface and contract impact
  • Migration impact
  • Backward-compatibility impact
  • Test impact
Review and correct the plan before implementation starts.


4. Maintain a reusable project context / rules file

Keep a reusable system context file the agent reads before work. Include:

  • Architecture boundaries
  • Naming conventions
  • Logging standards
  • Error-handling standards
  • Auth and security rules
  • Performance expectations
  • Testing requirements
  • Migration rules
  • Forbidden patterns
  • Modules and utilities to reuse
  • What areas not to touch casually
Treat this as operating law, not optional background.
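A minimal sketch of such a file; the file name and every rule here are illustrative, not prescriptive:

```markdown
# agent-rules.md (illustrative)
- Services never import from the web layer; dependencies point inward only.
- All errors go through the shared error normalizer; never leak raw exceptions.
- Log through the shared logger; no print statements.
- New queries go through existing repository helpers; no inline SQL.
- Do not touch the billing module without an approved plan.
- Every endpoint change ships with at least one integration test.
```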


5. Maintain todo.md and a decisions file

todo.md tracks work items, execution order, and priorities. Review the order before each session.

decisions.md captures why approach A was chosen over B, known tradeoffs, deferred improvements, and constraints imposed by earlier decisions. This stops the agent — and the team — from re-debating settled decisions.
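One possible shape for a decisions.md entry; the decision, date, and numbers below are invented purely to show the format:

```markdown
# decisions.md (illustrative entry)
## 2024-06-12: Use an outbox table for event publishing
- Chosen over direct broker publish to avoid dual-write inconsistency.
- Tradeoff: up to a few seconds of publish latency from the relay poller.
- Deferred: partitioned outbox cleanup; revisit when volume grows.
```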


6. Reuse reference patterns aggressively

Point the agent to good examples already in the codebase: a controller, service, repository, test, and logging/error-handling example. Tell it to match those patterns unless there is a strong reason not to.

Reuse is better than invention in shared codebases.


7. Separate design from execution

Use separate passes:

Design pass — approach, tradeoffs, performance implications, security concerns, failure modes, boundary effects.

Execution pass — implement only the approved approach, no rethinking architecture, no unrelated scope expansion.

Otherwise the agent oscillates between planning and coding and does both poorly.


8. Force explicit assumptions

Make the agent list assumptions and mark uncertain areas clearly. Tell it not to invent missing business behavior. This matters especially when requirements are partial, APIs are unclear, or business rules live in people's heads.


9. Define "done" clearly

Every task should have explicit completion criteria. Example definition of done:

  • Implementation complete
  • Validation added
  • Auth enforced
  • Logging added
  • Errors normalized
  • Tests added or verification steps provided
  • No unrelated changes
  • Build, lint, and tests pass
If "done" is vague, the agent will either stop too early or overbuild.


10. Always require verification

Require one or more of: unit tests, integration tests, a manual verification checklist, curl examples, expected DB changes, or expected logs. Without verification, you only have plausible-looking code.
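As one concrete form of verification, a small unit test pinned to behavior. The parse_amount helper is hypothetical; the point is that the assertions, not the agent's summary, are the evidence:

```python
def parse_amount(raw: str) -> int:
    """Parse a decimal currency string like '12.50' into integer cents."""
    whole, _, frac = raw.partition(".")
    frac = (frac + "00")[:2]  # pad or truncate the fractional part to two digits
    return int(whole) * 100 + int(frac)

# Verification a reviewer can run, rather than trusting a plausible explanation.
assert parse_amount("12.50") == 1250
assert parse_amount("7") == 700
assert parse_amount("0.05") == 5
```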


11. Ask for failure-path thinking

Do not accept only happy-path design. Make the agent think through:

  • Timeout behavior
  • Retries and duplicate requests
  • Partial failures and dependency outages
  • Rollback behavior
  • Permission failures
  • Audit logging
Agents default to happy path unless forced otherwise.
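The retry-and-timeout portion of that list can be sketched as follows, with a hypothetical flaky dependency standing in for a real upstream call. Note the bounded attempts and capped backoff rather than unbounded retries:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01, max_delay=1.0):
    """Retry a callable on TimeoutError with capped exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # surface the failure instead of swallowing it
            time.sleep(min(base_delay * 2 ** attempt, max_delay))

# Simulated dependency that times out twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream timed out")
    return "ok"

result = call_with_retries(flaky)
assert result == "ok" and calls["n"] == 3
```

Pair this with idempotency on the receiving side, since retries imply duplicate requests.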


12. Ban unrelated refactors

Team rule: do not rename, move, reformat, or refactor unrelated code unless explicitly asked. This reduces blast radius, review cost, accidental regressions, and noisy diffs.


13. For existing code — start with impact analysis

Before touching existing code, ask the agent to identify:

  • What the code currently does
  • What depends on it
  • What may break if changed
  • API, schema, config, and downstream impact
  • Hidden coupling and edge cases
  • Rollback considerations
In old systems, most mistakes come from breaking behavior no one knew was coupled.


14. Make it explain current behavior before suggesting fixes

Start with: "Read this code and explain its current behavior, assumptions, side effects, and dependencies." Then ask for solutions.

Do not begin with "fix this" unless the issue is trivial.


15. Prefer minimal-change fixes first

For bug fixes and production code, default to the smallest safe fix. Preserve contracts unless explicitly changing them. Isolate blast radius. Avoid mixing cleanup with remediation.

Agents tend to over-improve. That is often dangerous in existing systems.


16. For critical changes — require multiple review passes before implementation

Ask the agent to review its approach multiple times before writing code:

  • Pass 1: correctness
  • Pass 2: hidden impact and edge cases
  • Pass 3: security, performance, concurrency, backward compatibility
Iterative review surfaces issues the first pass misses.


17. Use iterative critique before code, not just after

Strong sequence: analyze → propose → critique → revise → implement → review → revise if needed.

It is much better to catch conceptual errors before code exists.


18. Ask for invariants before changing critical code

Make the agent identify what must remain true after the change, what contracts must not break, and what data integrity rules must hold.
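Invariants can often be encoded directly as checks. A sketch using a hypothetical transfer between two account balances, where the invariant is that total money is conserved:

```python
def transfer(accounts, src, dst, amount):
    """Move amount between accounts; the total balance is the invariant."""
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("invalid transfer")
    total_before = sum(accounts.values())
    accounts[src] -= amount
    accounts[dst] += amount
    # Invariant: money is moved, never created or destroyed.
    assert sum(accounts.values()) == total_before
    return accounts

accounts = {"a": 100, "b": 20}
transfer(accounts, "a", "b", 30)
assert accounts == {"a": 70, "b": 50}
```

Asking the agent to state invariants like this before changing code gives the review a checklist the diff must satisfy.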


19. Ask "what could make this fix unsafe?"

Use adversarial prompts:

  • "What could still be wrong here?"
  • "Where could this fail in production?"
  • "What hidden assumptions is this relying on?"
This forces the agent into critique mode instead of sales mode.


20. Separate diagnosis from treatment

For existing code, split work into two passes: diagnosis first, implementation second. Mixing them turns the work into blind surgery.


21. Reset to first principles when iteration quality collapses

If the agent is still making mistakes on the second or third iteration, stop. Do not keep polishing the same wrong path.

Tell it to re-derive from scratch: objective, constraints, current behavior, invariants, failure conditions. This breaks anchoring to bad earlier assumptions.


22. Use role-split prompting for difficult work

For non-trivial tasks, use multiple passes with distinct roles: architect, critic, implementer, reviewer, tester, security reviewer. This can happen sequentially in one session. Multi-pass thinking is stronger than one-shot generation.


23. Review diffs, not explanations

Never trust only the agent's summary. Review the actual code diff, exact changes, tests, migration details, and edge-case handling. Explanations can sound coherent even when the code is wrong.


24. Standardize prompt templates across the team

Do not let each engineer improvise from scratch. Maintain templates for: new feature, bug fix, migration, refactor, performance improvement, code review, test generation, incident analysis, and security hardening.
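One possible shape for the bug-fix template; the structure below is illustrative, and each team should adapt it to its own rules file:

```markdown
# Template: bug fix (illustrative)
1. Restate the reported behavior and the expected behavior.
2. Explain the current code path before proposing changes.
3. List assumptions and unknowns explicitly.
4. Propose the minimal safe fix; preserve existing contracts.
5. State verification steps (tests, logs, or manual checks).
6. No unrelated refactors.
```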


25. Track where the agent helps and where it wastes time

Periodically review repeated misunderstanding types, common bad patterns, generated code that gets rewritten, and tasks where the agent creates more cleanup than value. Feed these back into your rules files, templates, and checklists.


Practical operating sequences

For new code

  • Restate objective and constraints
  • List assumptions
  • Produce change plan
  • Approve plan
  • Implement approved scope only
  • Add tests and verification
  • Self-review for security, performance, logging, errors, compatibility
  • Human reviews diff

For existing code

  • Perform impact analysis
  • Explain current behavior and dependencies
  • Identify invariants
  • Propose minimal safe fix options
  • Critique the proposed fix
  • Implement approved approach
  • Verify thoroughly
  • Review actual diff

For critical code

Raise rigor. Require repeated reviews before implementation. Force failure-path and invariant analysis. Ask what could make the fix unsafe. Review the diff more harshly than usual.

For repeated mistakes

Stop iteration. Reset to first principles. Re-derive from objective, constraints, current behavior, invariants, and failure conditions. Only then proceed.


Team rule of thumb

  • For new code: design first.
  • For existing code: diagnose first.
  • For critical code: verify repeatedly.
  • For repeated mistakes: reset to first principles.
The agent should be treated as a fast junior for implementation, a decent peer for brainstorming, a useful reviewer for checklists, and an unreliable architect unless tightly constrained.

Use it to accelerate disciplined engineering — not to replace it.

AI · Tutorials · April 14, 2026
Aakash Ahuja

About the Author

Aakash builds systems, platforms, and teams that scale (without breaking… usually). He's worked across 15+ industries, led global teams, and delivered multi-million-dollar projects—while still getting his hands dirty in code. He also teaches AI, Big Data, and Reinforcement Learning at top institutes in India.