
Best practices for working with AI coding agents
AI coding agents are only as good as your prompts. I've been vibe coding since before the term was coined, and I'm sharing these tips for vibe coders with little technical experience.
I often see posts and articles scrutinising AI coding abilities and claiming 'vibe coding' is rubbish, but I'd bet their tests skipped prompting with product management guidelines and pre-live checklists, and weaving in Agile Test & Learn principles... the typical methods dev teams use to refine code before shipping it.
When using AI coding agents (like v0, Cursor, GitHub Copilot, etc.) to customise or extend this template, follow these best practices to ensure high-quality, production-ready code that doesn't break existing functionality.
1. Research before execution
Always ask the AI to research your project (including searching online where needed) before making changes:
"Before making any changes, please:
1. Research the frameworks and dependencies in this project
2. Examine the existing Sanity schema structure
3. Review the current component patterns
4. Check how similar features are already implemented
5. Identify any existing utilities or helpers that already solve this problem"
Why this matters:
- Prevents reinventing the wheel
- Ensures consistency with established patterns
- Avoids introducing conflicting dependencies
- Maintains architectural coherence
Example prompt:
"I want to add a newsletter subscription form. First, search the codebase to see if we already have form handling patterns, email service integrations, or similar components. Then review our Sanity schemas to understand how we structure content. Only after that, propose an implementation plan."
2. Detail your infrastructure
Provide context about your specific setup, for example:
"Our infrastructure:
- Next.js 16 with the App Router (not Pages Router)
- Sanity Studio v3 with Live Preview enabled
- TypeScript in strict mode
- Tailwind CSS with custom design tokens in globals.css
- Server Components by default, Client Components only where necessary
- GROQ queries using defineQuery for type generation
- Monorepo structure: /studio and /nextjs-app
- Deployed on Vercel with ISR (Incremental Static Regeneration)
- Using @portabletext/react for rich text rendering"
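For instance, the `defineQuery` line in that context tells the agent exactly what a data-fetching pattern should look like. Here is a minimal sketch of that pattern; the query and function names are illustrative, and `sanityFetch` is assumed to come from your own Live Content setup rather than being part of this template's guaranteed API:

```ts
// A minimal sketch of the defineQuery pattern (illustrative names throughout).
import { defineQuery } from "next-sanity";
// sanityFetch is assumed to be exported from your own live/client setup.
import { sanityFetch } from "@/sanity/lib/live";

// defineQuery tags the GROQ string so Sanity TypeGen can generate a typed result for it.
const POSTS_QUERY = defineQuery(
  `*[_type == "post"] | order(publishedAt desc){ _id, title, "slug": slug.current }`
);

export async function getPosts() {
  const { data } = await sanityFetch({ query: POSTS_QUERY });
  return data;
}
```

With this context in the prompt, the agent is far less likely to hand you an untyped, ad hoc fetch that ignores the patterns the rest of the codebase relies on.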
Some AI coding agent platforms, such as v0.app, have "Project Rules and Instructions" or "Knowledge bases". These can be handy on the first iteration, but they can cause irregularities the longer you chat or the more complex your project becomes. Keep reading to see how to work around the limits of the AI's memory and learning.
Why this matters:
- Prevents suggestions that are incompatible with your setup
- Ensures the AI uses correct APIs and architectural patterns
- Avoids unintentional changes to rendering strategies
- Maintains performance optimisations
3. Request roadmaps, not immediate execution
Ask for approaches or planning before implementation, for example:
"Don't write code yet. Instead:
1. Create a detailed implementation roadmap
2. List all files that need to be created or modified
3. Identify potential risks and breaking changes
4. Suggest testing strategies for each phase
5. Provide rollback procedures if something goes wrong
6. Estimate complexity and time required for each step"
Why this matters:
- Allows you to review the approach beforehand
- Highlights potential issues early
- Helps prioritise work
- Establishes clear milestones
Example prompt:
"I want to add multi-language support. Don't implement yet.
First, create a comprehensive roadmap that covers:
- Schema changes required in Sanity
- GROQ query modifications
- URL structure changes
- Component updates
- Metadata and SEO implications
- Impact on existing content
- Migration strategy for current posts
Then wait for my approval before proceeding."
4. Require notes and learnings
After each implementation, request documentation:
"After completing this work, please provide:
1. Summary of changes made
2. New patterns or conventions introduced
3. Potential gotchas or edge cases
4. Performance implications
5. Accessibility considerations
6. Browser compatibility notes
7. Lessons learned for future work"
Why this matters:
- Builds shared organisational knowledge
- Helps onboard new team members
- Prevents repeating mistakes
- Documents design decisions
Example format:
## Implementation Notes
**Changes Made:**
- Added `newsletter.ts` schema with email validation
- Created `NewsletterForm.tsx` client component with error handling
- Integrated ConvertKit API via route handler
**New Patterns:**
- Zod-based form validation
- API route template for third-party integrations
- Toast notifications for user feedback
**Gotchas:**
- ConvertKit API requires CORS headers
- Email validation must match client and server
- Rate limiting required to prevent spam
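To make the notes above concrete, here is a minimal sketch of the route-handler pattern they describe. The file path, schema and environment variable name are hypothetical (they mirror the example notes, not a verified ConvertKit integration), and the actual call to the email service is left as a placeholder:

```ts
// app/api/newsletter/route.ts — hypothetical sketch of the pattern described above.
import { NextResponse } from "next/server";
import { z } from "zod";

// Zod schema; the same rules should be mirrored in the client-side form validation.
const subscribeSchema = z.object({
  email: z.string().email(),
});

export async function POST(request: Request) {
  const body = await request.json().catch(() => null);
  const parsed = subscribeSchema.safeParse(body);

  if (!parsed.success) {
    return NextResponse.json({ error: "Please enter a valid email address." }, { status: 400 });
  }

  // Forward parsed.data.email to the email service (e.g. ConvertKit) here, reading the
  // API key from an environment variable such as process.env.CONVERTKIT_API_KEY.
  // Rate limiting also belongs here to prevent spam (see the gotchas above).

  return NextResponse.json({ success: true });
}
```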
5. Production-readiness checklists
Request comprehensive quality checks before pushing to 'live', for example:
"Before this feature is considered complete, create a production-readiness checklist covering:
SEO,
Core Web Vitals,
Mobile Responsiveness,
Accessibility,
Good UX,
Good UI,
Good user journey...
...
You can always ask the AI to generate a Markdown (.md) file of the learnings, similar to a README.md, or, at a milestone in the chat, ask it to resummarise what works, the changes made and the learnings for future prompting.
The AI Collaboration Workflow
- Research → Understand the existing codebase
- Plan → Create a roadmap before coding
- Implement → Follow established patterns
- Document → Capture learnings and edge cases
- Audit → Check quality, security, and performance
- Test → Verify across devices and environments
- Review → Confirm readiness for production
- Improve → Refine patterns for future work
By following these principles, you ensure your application remains maintainable, scalable, and robust without accumulating unnecessary technical debt.


