Cursor rules best practices 2026: get better AI results
The difference between a Cursor session that produces clean, on-convention code and one that produces generic output you have to heavily edit is almost always the rules file. Without a .cursorrules file, Cursor guesses your stack, your style, and your conventions - and it guesses wrong often enough to cost you time. With a well-written rules file, the AI follows your patterns from the first line. This guide covers what belongs in your rules file, how to structure it, and real examples you can adapt.
What .cursorrules actually does
The .cursorrules file (placed in your repo root) is read by Cursor at the start of every session as standing context. Think of it as the briefing you would give a contractor before they touched your codebase: here is the stack, here is how we name things, here is what we never do, here is how to run the tests. The AI reads it before writing a single line, which means it does not have to infer your conventions from the code it sees - it already knows them.
The practical impact: fewer corrections, fewer back-and-forth iterations, and less credit burn. A developer on Cursor Pro with a good rules file typically exhausts their fast request pool later in the month than one running without any rules context.
The six sections every rules file needs
1. Stack and versions
State the exact tech stack and key dependency versions. The AI's training data spans many framework versions - without this, it may generate Next.js 12 patterns on a Next.js 14 project or use deprecated APIs.
```
# Stack
- Node 20 / TypeScript 5.3 (strict mode)
- Next.js 14 (App Router, not Pages Router)
- Postgres 15 via Prisma 5
- Tailwind CSS 3.4
- Vitest for tests (not Jest)
```
2. File and folder structure
Tell the AI where things live. This prevents it from creating files in the wrong place or importing from paths that do not exist.
```
# Structure
- API routes: app/api/[route]/route.ts
- Components: components/[name]/index.tsx + [name].test.tsx
- Server actions: app/actions/[name].ts
- DB queries: lib/db/[entity].ts (never in components)
- Types: types/[domain].ts
```
3. Naming conventions
Naming is one of the most common places AI output diverges from team style. Be explicit.
```
# Naming
- React components: PascalCase
- Hooks: camelCase with "use" prefix (useUserData, not getUserData)
- DB query functions: verb + entity (findUserById, createOrder)
- API response shape: { data, error, meta } - never { result } or { response }
- Test files: [module].test.ts - co-located with the module
```
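To make the response-shape and query-naming rules concrete, here is a small TypeScript sketch. `ApiResponse`, `User`, and `findUserById` are illustrative names chosen to match the conventions above, not part of any particular codebase:

```typescript
// The standard response envelope the naming rules mandate:
// always { data, error, meta }, never { result } or { response }.
type ApiResponse<T> = {
  data: T | null;
  error: string | null;
  meta: Record<string, unknown>;
};

type User = { id: string; name: string };

// DB query function named verb + entity, returning the standard shape.
function findUserById(id: string, users: User[]): ApiResponse<User> {
  const user = users.find((u) => u.id === id) ?? null;
  return {
    data: user,
    error: user ? null : "User not found",
    meta: {},
  };
}
```

Because every endpoint returns the same envelope, the AI can generate callers and error handling without being shown an example response first.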
4. What the agent must never do
This is the highest-value section. Explicitly listing what the agent should not touch or change prevents the most damaging mistakes - especially in Agent mode where it is operating autonomously across multiple files.
```
# Never
- Never modify prisma/schema.prisma (migrations only by human)
- Never use any (TypeScript) - use unknown and narrow
- Never use console.log in production code - use logger utility
- Never commit .env files or API keys
- Never use default exports for utilities - named exports only
- Never modify the auth middleware without explicit instruction
```
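As a sketch of what the "never use any" rule looks like in practice: accept `unknown` at the boundary and narrow it with type guards before use. `parseUserId` here is a hypothetical helper, not from the guide's codebase:

```typescript
// Instead of typing untrusted input as `any`, type it as `unknown`
// and narrow with typeof guards before touching it.
function parseUserId(input: unknown): string {
  if (typeof input === "string" && input.length > 0) {
    return input;
  }
  if (typeof input === "number" && Number.isInteger(input)) {
    return String(input);
  }
  throw new Error(`Invalid user id: ${JSON.stringify(input)}`);
}
```

With `any`, all three branches would compile away silently and bad input would surface somewhere deeper in the stack; with `unknown`, the compiler forces the narrowing at the edge.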
5. Testing conventions
If you want the AI to generate tests alongside code (which you should), it needs to know your testing patterns. Generic test output is usable but requires more editing than tests written to your actual conventions.
```
# Testing
- Test file: co-located as [module].test.ts
- Use describe/it blocks - not test()
- Mock external services with vi.mock() - never in beforeAll
- Test the interface, not the implementation
- Each test should have exactly one assertion where possible
- Always test the error path, not just the happy path
```
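The last two conventions can be sketched in a self-contained way. The `describe`, `it`, and `expect` below are minimal stand-ins so the example runs on its own; in a real test file they would be imported from vitest, and `slugify` is a hypothetical module under test:

```typescript
// Minimal stand-ins for vitest's describe/it/expect (imported from "vitest" in a real file).
function describe(name: string, fn: () => void): void { fn(); }
function it(name: string, fn: () => void): void { fn(); }
function expect<T>(actual: T) {
  return {
    toBe(expected: T): void {
      if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
    },
  };
}

// Hypothetical module under test.
function slugify(title: string): string {
  const slug = title.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
  if (slug === "") throw new Error("Cannot slugify empty title");
  return slug;
}

describe("slugify", () => {
  // Happy path, exactly one assertion.
  it("lowercases and hyphenates", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
  // Error path tested explicitly, per the convention above.
  it("throws on whitespace-only input", () => {
    let threw = false;
    try { slugify("   "); } catch { threw = true; }
    expect(threw).toBe(true);
  });
});
```

The point of encoding this shape in the rules file is that the AI generates both tests, not just the happy-path one it defaults to.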
6. How to run the project
Include the commands to start the dev server, run tests, and run the type checker. When Claude Code CLI or Cursor Agent needs to verify its own output, it will use these commands.
```
# Commands
- Dev: pnpm dev
- Tests: pnpm test (watch: pnpm test:watch)
- Type check: pnpm typecheck
- Lint: pnpm lint
- DB migrations: pnpm prisma migrate dev
```
Common mistakes and how to fix them
Rules file is too long
Keep your rules file under 300 lines. Everything beyond that competes with your actual prompt for the model's context window. If your project has genuinely complex conventions, split them into topic files and reference them with @-mentions when relevant rather than loading everything every session.
Rules are too vague
"Write clean code" and "follow best practices" are useless - the AI already tries to do this based on its training. Rules need to be specific to your codebase. "API responses always use { data, error, meta }" is a rule. "Write good APIs" is not.
Rules file is not committed to the repo
If your rules file is only on your machine, every other developer on the team is running without it. Commit .cursorrules to the repo root. This also makes it discoverable by new joiners and ensures it evolves alongside the codebase.
Rules never get updated
A rules file written at project start and never touched is only useful for a few weeks. Update it when you change frameworks, add a new layer to the architecture, or settle on a new convention. Treat it like documentation - it is only valuable when it reflects reality.
Using @-mentions alongside rules
The rules file handles standing context - things that are always true about your codebase. For task-specific context, Cursor's @-mention system pulls in precisely what the current task needs:
- @file - pull in a specific file as context (e.g. the interface you are implementing against)
- @folder - pull in a directory listing and file contents
- @docs - pull in indexed documentation for a library
- @web - live web search (useful for libraries with recent changes)
- @codebase - semantic search across your whole repo
The pattern that works best: rules file for always-true context, @-mentions for task-specific context. Do not try to put everything in the rules file - it becomes noise.
Rules file for Agent mode specifically
Agent mode runs more autonomously than inline edits - it can touch many files, run terminal commands, and make decisions without prompting you at each step. That makes the "never" section of your rules file especially important for Agent mode. Before you run an Agent mode session on a production codebase, verify your rules file explicitly lists:
- Which directories are off-limits
- Which files should never be modified without human review
- Which shell commands are acceptable (and which are not)
- Whether the agent should create new files or only edit existing ones
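A hedged sketch of what such a section might look like in the rules file; the paths and commands here are illustrative, not prescriptive:

```
# Agent mode
- Off-limits directories: prisma/, .github/
- Never modify without human review: middleware.ts, lib/auth/
- Allowed commands: pnpm test, pnpm typecheck, pnpm lint
- Never run: pnpm prisma migrate, rm -rf, git push
- Do not create new files outside app/ and components/
```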
Pairing your rules file with a task spec written in BrainGrid before each Agent mode session is the most reliable way to keep autonomous runs on track - the spec defines the what, the rules file defines the how and the guardrails.
A complete starter template
Here is a minimal but complete .cursorrules template you can adapt for any TypeScript/Node project:
```
# Project: [Your project name]
# Stack: Node 20, TypeScript 5 (strict), [framework], [database]

## Structure
- [Describe your folder layout here]

## Naming
- [List naming conventions for your project]

## Never
- Never use `any` in TypeScript - use `unknown` and narrow
- Never modify [list protected files/dirs]
- Never use console.log - use the logger at [path]
- Never commit secrets or .env files

## Testing
- Framework: [Vitest/Jest/other]
- Location: co-located as [module].test.ts
- [List your testing conventions]

## Commands
- Dev: [command]
- Test: [command]
- Type check: [command]
```
Rules file sets the guardrails. BrainGrid sets the goal. Before each Agent mode session, write a task spec in BrainGrid - so the agent knows exactly what to build, what files to touch, and what to leave alone.