Setting up your AI dev stack in 2026 (step-by-step)
Most developers in 2026 are running some AI tooling - but few are running a stack. There is a difference between having Copilot installed and having a coherent setup where your IDE, your CLI, your context files, and your deploy layer all work together. This guide covers the full setup, step by step: what to install, how to configure it, what context files to write, and how to wire in a deploy target so you can ship what you build.
What "AI dev stack" means in 2026
An AI dev stack is not just one tool. It is the combination of tools that handle the full development loop: writing code, reviewing code, running the project, and shipping it. In 2026, a complete stack has four layers:
| Layer | What it does | Tools |
|---|---|---|
| IDE / AI pair programmer | In-editor completions, chat, Agent mode | Cursor, VS Code + Copilot, Windsurf |
| CLI / agentic terminal | One-off scripts, migrations, multi-file edits in terminal | Claude Code CLI |
| Context files | Standing briefing for the AI - stack, conventions, never-list | CLAUDE.md, .cursorrules |
| Deploy / hosting | Run what you build in production | Railway, Vultr, DigitalOcean |
Optional additions sit on top: MCP servers for live data context, task spec tools like BrainGrid for Agent mode planning, and CI/CD for automated quality gates. But the four layers above are the core. Get those right and everything else plugs in cleanly.
Step 1: Install and configure your IDE
Option A: Cursor (recommended for agentic workflows)
Cursor is the simplest single choice if you want Agent mode, codebase-wide context, and AI chat in one tool. Download from cursor.com, install, and open your project. On first open, you will be prompted to sign in and choose a model - the default (a Claude Sonnet model at the time of writing) works well for most tasks.
Key settings to configure immediately:
- Cursor Settings → Features → Codebase indexing - enable this. It lets @codebase work properly: semantic search across your whole repo.
- Cursor Settings → Features → Agent mode - ensure it is on. This is what allows multi-file autonomous edits.
- Cursor Settings → Privacy → Privacy mode - turn on if you work with sensitive code. Disables training data usage.
Pricing: Cursor has a free Hobby tier (limited fast requests), Pro at $20/month (500 fast requests/month), and Business at $40/month (team features, SSO). For daily use, Pro is the right tier. See our Cursor pricing breakdown for what burns credits fast and how to stretch your budget.
Option B: VS Code + GitHub Copilot
If you need to stay in VS Code - team requirements, extension ecosystem, or preference - install GitHub Copilot ($10/month individual, $19/month Business). You get inline completions and a chat panel, but no Agent mode equivalent out of the box. For agentic work you will rely more heavily on Claude Code CLI in the terminal alongside VS Code.
Step 2: Install Claude Code CLI
Claude Code CLI is the terminal counterpart to Cursor. Install it via npm:
```shell
npm install -g @anthropic-ai/claude-code
```
Then authenticate:
```shell
claude login
```
This opens a browser auth flow. Once authenticated, you are ready to use claude from any directory. Test it:
```shell
cd your-project
claude "explain the architecture of this repo"
```
Claude Code reads the directory, finds your files, and gives a project-specific answer - not a generic one. This is the first payoff of the CLI: instant architecture explanation for any codebase you open.
Pricing: Claude Code CLI is billed via the Anthropic API. For typical daily use (a few coding sessions), expect $10–30/month. Heavy agentic use (long autonomous runs, large repos) can push higher. Claude Pro and Max subscribers get bundled Claude Code usage instead of per-token billing - check your plan.
For a full breakdown of what Claude Code can do and best usage patterns, see our Claude Code CLI tips guide.
Step 3: Write your context files
This is the highest-leverage setup step and the one most developers skip. Without context files, every AI session starts cold - the model guesses your stack and conventions. With context files, every session starts with the full project briefing already loaded.
CLAUDE.md - for Claude Code CLI
Create CLAUDE.md in your repo root. Claude Code loads it automatically at the start of every session. Minimum viable content:
```markdown
# CLAUDE.md

## Project
[1–2 sentences describing what this project does]

## Stack
- Runtime: [Node 20 / Python 3.12 / etc.]
- Framework: [Next.js 14 / FastAPI / etc.]
- Database: [Postgres via Prisma / SQLite / etc.]
- Testing: [Vitest / pytest / etc.]

## Commands
- Dev: [pnpm dev / python -m uvicorn / etc.]
- Test: [pnpm test / pytest / etc.]
- Type check: [pnpm typecheck / mypy / etc.]

## Structure
- [Key folders and what lives in each]

## Never
- Never use `any` in TypeScript
- Never modify [protected file or dir] without approval
- Never use console.log - use [logger path]
- Never commit secrets or .env files
```
Commit this file to the repo root. Every developer and every AI session benefits immediately.
.cursorrules - for Cursor
Create .cursorrules in your repo root. Cursor reads it at the start of every session. Content mirrors your CLAUDE.md but can include Cursor-specific instructions (like which @-mentions to use for which tasks). See our detailed Cursor rules best practices guide for a complete six-section template.
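As a starting point, a minimal .cursorrules might look like the sketch below. Every stack detail, convention, and path here is a placeholder - swap in your own:

```
You are working on [project name], a [Next.js 14 / FastAPI / etc.] app.

Stack: [TypeScript, Next.js 14 (App Router), Prisma + Postgres, Vitest]

Conventions:
- Use named exports, not default exports
- Prefer server components; mark client components explicitly
- Every new module gets a test alongside it

When asked about the codebase, search with @codebase before answering.

Never:
- Never use `any`
- Never edit [generated dir, e.g. prisma/migrations] by hand
- Never commit .env files
```

Keep it short enough that the whole file fits comfortably in context - a focused page beats an exhaustive style guide.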
Both files should be committed to the repo so the whole team benefits - not just you.
Step 4: Add MCP servers (optional but high-value)
Model Context Protocol (MCP) servers give your AI tools live access to external data: your database schema, your GitHub issues, your filesystem, live web search. Without MCP, the AI works from files it can read in the repo. With MCP, it can query your actual database or look up a real API response.
The three MCP servers worth adding early:
| Server | What it unlocks | Install |
|---|---|---|
| @modelcontextprotocol/server-filesystem | Full filesystem read/write beyond the open project | npx @modelcontextprotocol/server-filesystem |
| @modelcontextprotocol/server-postgres | Live schema reads and query execution against your DB | npx @modelcontextprotocol/server-postgres |
| @modelcontextprotocol/server-github | Read issues, PRs, and repo metadata from GitHub API | npx @modelcontextprotocol/server-github |
Configure MCP in Cursor via Settings → MCP, or in Claude Code via ~/.claude/mcp.json. For a full MCP setup walkthrough including JSON config examples, see our best MCP servers guide.
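Wherever the config file lives, MCP server entries share one JSON shape: an mcpServers object mapping a server name to the command that launches it. A hedged sketch for a Postgres server - the connection string is a placeholder, and the exact file location and schema can vary by tool version, so verify against current docs:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

The name ("postgres" here) is yours to choose; it is how the server shows up in the tool's MCP list.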
Step 5: Set up your deploy layer
An AI dev stack that cannot ship is a prototype machine. Add a deploy target early - ideally before you write significant code - so you are shipping continuously rather than doing a big deployment at the end.
Railway - simplest for web apps and APIs
Railway connects to your GitHub repo, detects your runtime, and deploys on every push. Zero config for most Node, Python, and Go projects. Free tier is generous for side projects; paid plans start at $5/month. For any web app or API built with Cursor or Claude Code, Railway is the fastest path from code to live URL.
```shell
# Connect the Railway CLI
npm install -g @railway/cli
railway login
railway link   # link current directory to a Railway project
railway up     # deploy
```
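If you would rather keep deploy settings versioned with the code instead of clicked in the dashboard, Railway also reads a config-as-code file from the repo root. A sketch - the start command and healthcheck path are placeholders, and field names should be checked against Railway's current schema:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": { "builder": "NIXPACKS" },
  "deploy": {
    "startCommand": "node dist/server.js",
    "healthcheckPath": "/health",
    "restartPolicyType": "ON_FAILURE"
  }
}
```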
Vultr - for VPS and containerized workloads
If you need more control - a VPS, a containerized app, or a custom server configuration - Vultr starts at $2.50/month for a Cloud Compute instance. It is a good choice for AI-generated apps that need persistent server processes, background workers, or custom networking. Deploy via Docker or a simple systemd service.
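A minimal systemd unit for that pattern might look like this - the service name, user, and paths are all placeholders for your app:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp server
After=network.target

[Service]
User=deploy
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/node /opt/myapp/dist/server.js
Restart=on-failure
EnvironmentFile=/etc/myapp/env

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp`, and the process survives reboots and restarts on crashes.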
DigitalOcean - for managed infrastructure
DigitalOcean sits between Railway and Vultr in control level: App Platform handles deploys like Railway, while Droplets give you VPS-level control. Good for teams that want managed Postgres, managed Redis, and app hosting in one place. Starts at $4/month for the smallest Droplet.
Step 6: Add BrainGrid for Agent mode sessions
Once Cursor Agent mode is running on real features, the missing piece is task scoping. Agent mode is autonomous - it will touch multiple files and make decisions without prompting you at each step. Without a clear task spec, it can drift: touching files it should not, over-engineering, or solving the wrong problem.
BrainGrid is built for this: write a task spec before each Agent mode session - what to build, which files to touch, what the acceptance criteria are - and hand it to the agent as its starting context. The spec keeps the agent on track. This is the same principle as a CLAUDE.md (standing context) but applied per-task rather than per-project.
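Whether you write it in BrainGrid or a plain note, a usable task spec is short and concrete. A sketch - every file path and number here is illustrative:

```markdown
## Task: add rate limiting to the public API

In scope: src/middleware/rateLimit.ts (new), src/server.ts (wire it in)
Out of scope: auth, logging, anything under src/billing/

Acceptance criteria:
- 429 response after 100 requests/min per IP
- Existing tests still pass; the new middleware has its own test
```

The "out of scope" list is the part that prevents drift - it tells the agent what not to touch, which a goal statement alone never does.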
The complete stack at a glance
| Tool | Role | Cost | Priority |
|---|---|---|---|
| Cursor | AI-native IDE, Agent mode | Free / $20/mo Pro | Core |
| Claude Code CLI | Terminal agentic assistant | API usage ~$10–30/mo | Core |
| CLAUDE.md + .cursorrules | Standing context files | Free (your time) | Core |
| Railway | Deploy layer for web apps/APIs | Free / $5+/mo | Core |
| MCP servers | Live DB / GitHub / filesystem context | Free (open source) | High value add-on |
| BrainGrid | Task spec layer for Agent mode | Paid (see site) | High value add-on |
| Vultr / DigitalOcean | VPS / managed infra for heavier workloads | $2.50–$4+/mo | When you outgrow Railway |
Common setup mistakes
Installing tools but skipping context files
The single most common mistake: Cursor installed, no .cursorrules. Claude Code installed, no CLAUDE.md. The AI works, but it guesses your conventions on every session. Spend one hour writing the context files and every subsequent session is noticeably better. The return on that one hour compounds across every task you ever run.
Using both tools for the same thing
Cursor and Claude Code do overlap - both can edit code, both can answer questions about the repo. The mistake is using both for the same workflow rather than splitting by context. Use Cursor for in-editor work where you want to see diffs inline. Use Claude Code CLI for terminal tasks, quick one-offs, and when you are already in a shell. Two entry points to the same repo, not two competing tools.
No deploy target until the project is "ready"
Projects that are never deployed are never finished. Connect your deploy target (Railway, Vultr) early - even if the first deploy is a hello-world endpoint. Shipping continuously forces you to keep the project in a deployable state, which catches configuration problems before they compound. It also makes it easier to share work in progress and get feedback.
Agent mode without task scoping
Running Agent mode on a vague prompt ("refactor the auth module") across a large codebase is asking for trouble. The agent will make decisions - and some of them will be wrong. Scope every Agent mode session with a clear task spec: what to build, which files are in scope, what the success condition is. BrainGrid is built to make this fast; a plain text note in your prompt works too if you are specific enough.
Minimal viable stack: what to set up first
If you are starting from scratch, do not try to set up everything at once. Start here:
- ☐ Install Cursor, open your project, confirm completions work
- ☐ Write `.cursorrules` - stack, commands, naming conventions, never-list
- ☐ Install Claude Code CLI, authenticate, test with one prompt
- ☐ Write `CLAUDE.md` - same content as .cursorrules, adapted for CLI
- ☐ Connect Railway to your repo - one `railway up` to confirm deploy works
That is the core. Everything else - MCP servers, BrainGrid, Vultr/DO - adds on top once the core is stable and you have a sense of where the friction is. Ship the minimal stack first. You will know what to add next once you are using it daily.
Ship what you build. Connect your repo to Railway for zero-config deploys. For VPS and heavier workloads: Vultr from $2.50/mo or DigitalOcean.
Tighten the agentic loop: BrainGrid adds task specs to every Agent mode session - so Cursor and Claude Code stay on track. Try BrainGrid →