How to Set Up an MCP Server for Claude Code, Cursor, and Cline
The fundamental bottleneck in AI-assisted engineering isn't the model's reasoning capability; it's the context window. Even with million-token windows, stuffing a raw codebase into a prompt leads to the "lost in the middle" problem, high latency, and staggering API costs. The industry is shifting away from simple retrieval-augmented generation (RAG) toward structured codebase intelligence.
The Model Context Protocol (MCP) is the bridge that allows AI agents—like Claude Code, Cursor, and Cline—to query your codebase as if they were a senior engineer who has spent years in the repo. Instead of reading files one by one, an MCP server provides a standardized interface for agents to ask high-level questions: "What is the architecture here?", "Which modules are most at risk?", or "Where are the unused exports?"
In this guide, we will walk through setting up a professional-grade MCP server using repowise, an open-source platform designed to provide deep git intelligence and dependency analysis to AI agents.
[Figure: MCP System Architecture]
Why You Need an MCP Server for Your AI Editor
Most AI editors rely on simple keyword search or basic vector embeddings. While useful, these methods lack "structural awareness." They don't understand that a change in auth.ts might break a seemingly unrelated downstream consumer in billing.py.
An MCP server powered by repowise solves this by providing:
- Deterministic Relationships: It parses imports across 10+ languages to build a true dependency graph.
- Git-Aware Context: It knows who owns which file and which files are "hotspots" (high churn and high complexity).
- Graph-Based Retrieval: Instead of just finding similar text, it can trace the path between two modules using repowise's architecture and graph algorithms like PageRank.
- Reduced Hallucinations: By providing LLM-generated documentation with freshness scores, the agent relies on verified facts rather than guessing what a function does based on its name.
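To make "graph-based retrieval" concrete, here is a toy PageRank over a tiny dependency graph. This is an illustrative sketch, not repowise's actual implementation; the module names are hypothetical.

```python
# Toy PageRank over a module dependency graph.
# Module names are hypothetical; repowise's real pipeline differs.

def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each node to the list of nodes it imports."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
        rank = new_rank
    return rank

# auth.ts is imported by both other modules, so it ranks highest --
# exactly the kind of "load-bearing module" signal an agent wants.
deps = {
    "billing.py": ["auth.ts"],
    "api.py": ["auth.ts", "billing.py"],
    "auth.ts": [],
}
ranks = pagerank(deps)
print(max(ranks, key=ranks.get))  # -> auth.ts
```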
Prerequisites
Before we begin, ensure your environment meets the following requirements:
Python 3.10+
Repowise is built on a modern Python stack to handle heavy AST parsing and graph computations. We recommend using a virtual environment or a tool like uv for faster dependency management.
A Repository to Index
You’ll need a local clone of the codebase you want to analyze. Repowise supports Python, TypeScript, JavaScript, Go, Rust, Java, C++, C, Ruby, and Kotlin.
An LLM API Key (or Ollama)
To generate the initial documentation and "Why" summaries, repowise requires access to an LLM. You can use OpenAI, Anthropic, Google Gemini, or run locally with Ollama to keep everything behind your firewall.
Step 1: Install repowise
The easiest way to get started is via pip. This installs the CLI tool which includes both the indexer and the MCP server.
pip install repowise
Alternatively, if you prefer running from source to contribute to the AGPL-3.0 project, you can clone the GitHub repository.
Step 2: Index Your Codebase
Before the MCP server can answer questions, it needs to "understand" your code. This happens during the indexing phase.
Navigate to your project root and run:
repowise init
What repowise init Does
This command initiates a multi-stage pipeline:
- AST Parsing: It scans your files to identify classes, functions, and imports.
- Graph Construction: It builds a directed graph of how your code interacts.
- Git Mining: It analyzes your commit history to calculate bus factors and hotspots.
- Documentation Generation: It uses your configured LLM to write high-level summaries for every module.
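As a rough illustration of the first two stages, here is how import edges can be extracted from Python source with the standard ast module. This is a simplified sketch of the general technique, not repowise's pipeline.

```python
import ast

def import_edges(module_name: str, source: str) -> list[tuple[str, str]]:
    """Return (importer, imported) edges found in one module's source."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            edges.extend((module_name, alias.name) for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            edges.append((module_name, node.module))
    return edges

src = "import os\nfrom billing import invoice\n"
print(import_edges("api", src))  # [('api', 'os'), ('api', 'billing')]
```

Run over every file in the repo, these edges form the directed graph that later answers "what breaks if I change this?"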
Choosing Your LLM Provider
During the init process, you will be prompted for your provider. If you are concerned about data privacy, point repowise to a local Ollama instance (e.g., llama3.1). For the highest quality architectural summaries, Claude 3.5 Sonnet is currently the gold standard.
Understanding the Output
Repowise creates a .repowise directory in your project root. This contains a SQLite database for structured data and a LanceDB/pgvector store for semantic search. You can see what repowise generates on real repos by checking out our live demos.
Step 3: Start the MCP Server
Once indexed, you can expose this data to your AI agents. The repowise MCP server supports two primary communication modes.
stdio Mode (Default)
This is the most common setup for local editors like Cursor and Claude Code. The editor starts the repowise process and communicates via standard input/output.
repowise mcp start
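Under the hood, the editor and server exchange newline-delimited JSON-RPC 2.0 messages over the process's stdin and stdout. Here is a minimal sketch of the framing (illustrative only; real clients use an MCP SDK rather than hand-rolling this):

```python
import json

def encode_request(req_id: int, method: str, params: dict) -> bytes:
    """Serialize one JSON-RPC 2.0 request as a newline-delimited message."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode()

# A typical early message: ask the server which tools it offers.
line = encode_request(1, "tools/list", {})
decoded = json.loads(line)
print(decoded["method"])  # tools/list
```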
SSE Mode (Remote / Shared)
If you are hosting repowise on a central server for a whole team, use Server-Sent Events (SSE). This allows multiple clients to connect to a single indexed instance of the codebase.
repowise mcp start --mode sse --port 8000
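In SSE mode, each server message arrives as an event in an HTTP event stream, with the payload carried on `data:` lines and events separated by blank lines. A minimal parser sketch (real clients should use an SSE library):

```python
def sse_data_payloads(stream: str) -> list[str]:
    """Extract the data payload of each event from raw SSE text.

    Events end at a blank line; multi-line data is joined with newlines.
    """
    payloads, buffer = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:  # blank line terminates the event
            payloads.append("\n".join(buffer))
            buffer = []
    return payloads

raw = 'data: {"tool": "get_overview"}\n\ndata: done\n\n'
print(sse_data_payloads(raw))  # ['{"tool": "get_overview"}', 'done']
```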
[Figure: MCP Tool Registry]
Step 4: Configure Your Editor
This is where the magic happens. By connecting your editor to the MCP server, you give your AI agent "superpowers."
Claude Code Setup
Claude Code is Anthropic's command-line agent. The quickest way to register repowise is with the built-in command: claude mcp add repowise -- repowise mcp start. If you use the Claude desktop app instead, edit claude_desktop_config.json (usually located in ~/Library/Application Support/Claude/ on macOS or %APPDATA%/Claude/ on Windows) and add:
{
  "mcpServers": {
    "repowise": {
      "command": "repowise",
      "args": ["mcp", "start"],
      "env": {
        "REPOWISE_PATH": "/path/to/your/repo"
      }
    }
  }
}
Cursor Setup
Cursor has built-in support for MCP.
- Go to Cursor Settings > Features > MCP Servers.
- Click + Add New MCP Server.
- Name: repowise
- Type: command
- Command: repowise mcp start
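If you prefer file-based configuration, recent Cursor versions also read an mcp.json file (project-level .cursor/mcp.json or global ~/.cursor/mcp.json). The schema mirrors the Claude config; check Cursor's MCP documentation for your version:

```json
{
  "mcpServers": {
    "repowise": {
      "command": "repowise",
      "args": ["mcp", "start"]
    }
  }
}
```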
Cline (VS Code) Setup
Cline (formerly Claude Dev) allows for powerful open-source workflows.
- Open Cline in VS Code.
- Click the Settings (gear icon).
- Scroll to MCP Servers.
- Add the configuration similar to the Claude Desktop JSON above.
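Cline stores its MCP servers in a cline_mcp_settings.json file, reachable from that same panel. The entry looks much like the Claude one (the exact schema can vary between Cline versions, so verify against the settings UI):

```json
{
  "mcpServers": {
    "repowise": {
      "command": "repowise",
      "args": ["mcp", "start"],
      "disabled": false
    }
  }
}
```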
Step 5: Verify It Works
Once configured, restart your editor or agent. You can now verify the connection by asking the agent questions that require global codebase knowledge.
Testing get_overview()
Ask: "Give me a high-level overview of this project's architecture."
The agent should call the get_overview() tool and return a summary of the tech stack, entry points, and module map. You can see all 8 MCP tools in action on our FastAPI demo page to compare the output.
Testing get_context()
Ask: "Who owns the auth module and what are its main dependencies?"
The agent will use get_context() to pull git ownership data and import relationships. This is much more accurate than the agent trying to guess by reading the file content alone.
[Figure: Editor Context Loop]
Common Setup Issues and Fixes
Database Not Found
If the MCP server starts but the agent reports "No data found," ensure you have run repowise init in the correct directory. The server looks for the .repowise folder. You can explicitly set the path using the REPOWISE_DB_PATH environment variable.
Permission Errors
On macOS/Linux, ensure the repowise binary is in your PATH. If you installed via pip in a virtual environment, you may need to provide the full path to the executable in your editor configuration (e.g., /Users/name/env/bin/repowise).
Timeout Issues
Large repositories (1M+ lines) can occasionally cause timeouts during the initial get_overview() call. You can increase the timeout in Cursor settings or optimize your index by excluding large vendor directories (like node_modules or venv) in your .repowise/config.yaml.
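An exclusion list in .repowise/config.yaml might look like the following. The key names here are illustrative; check repowise's configuration reference for the exact schema:

```yaml
# .repowise/config.yaml -- key names are illustrative
exclude:
  - node_modules/
  - venv/
  - dist/
  - "**/*.min.js"
```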
Advanced Configuration
For professional setups, you can customize how the MCP server behaves.
Environment Variables
- REPOWISE_LOG_LEVEL: Set to DEBUG to see the JSON-RPC traffic.
- REPOWISE_LLM_PROVIDER: Switch between anthropic, openai, or ollama on the fly.
- REPOWISE_GRAPH_DEPTH: Control how many layers of dependencies are returned in get_context.
Custom Database Path
If you are managing multiple projects, you can store your repowise indices in a central location:
repowise mcp start --db-path ~/.cache/repowise/my-project.db
Running as a Background Service
For persistent SSE access, we recommend using systemd or docker. This ensures the repowise server is always available to your agents without needing to start it by hand every morning.
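A minimal systemd unit for the SSE server might look like this; the user, paths, and binary location are placeholders to adapt to your host:

```ini
# /etc/systemd/system/repowise.service -- illustrative unit file
[Unit]
Description=repowise MCP server (SSE)
After=network.target

[Service]
User=repowise
WorkingDirectory=/srv/my-project
ExecStart=/usr/local/bin/repowise mcp start --mode sse --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now repowise, and your team's agents can connect at any time.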
Key Takeaways
Setting up an MCP server with repowise transforms your AI editor from a simple autocomplete tool into a deeply informed architectural partner. By moving beyond text-based retrieval and embracing git intelligence and dependency graphs, you significantly reduce the cognitive load of navigating complex codebases.
- Structure over Search: Don't just search for text; query the graph.
- Git Matters: Use hotspot analysis to identify where your AI agent should be most cautious.
- Standardization: MCP is the future of AI-tool interoperability. Setting it up now prepares your workflow for the next generation of agents.
Ready to see it in action? Explore the ownership map for Starlette to see the kind of intelligence repowise brings to your MCP-enabled editor.
FAQ
Q: Does repowise send my code to a third party? A: Only if you configure a cloud LLM provider (like Anthropic or OpenAI). If you use the Ollama provider, all processing stays local.
Q: Which languages are best supported? A: TypeScript and Python have the deepest AST integration, but all 10+ supported languages benefit from full git intelligence and dependency mapping.
Q: Can I use this with VS Code's native Copilot? A: Currently, Copilot does not fully support the MCP standard. We recommend using Cursor or the Cline extension for the best experience.


