Giving AI coding assistants a memory that actually persists.
Watch In Memoria in action: learning a codebase, providing instant context, and routing features to files.
You know the drill. You fire up Claude, Copilot, or Cursor to help with your codebase. You explain your architecture. You describe your patterns. You outline your conventions. The AI gets it, helps you out, and everything's great.
Then you close the window.
Next session? Complete amnesia. You're explaining the same architectural decisions again. The same naming conventions. The same "no, we don't use classes here, we use functional composition" for the fifteenth time.
Every AI coding session starts from scratch.
This isn't just annoying; it's inefficient. These tools re-analyze your codebase on every interaction, burning tokens and time. They give generic suggestions that don't match your style. They have no memory of what worked last time, what you rejected, or why.
In Memoria is an MCP server that learns from your actual codebase and remembers across sessions. It builds persistent intelligence about your code (patterns, architecture, conventions, decisions) that AI assistants can query through the Model Context Protocol.
Think of it as giving your AI pair programmer a notepad that doesn't get wiped clean every time you restart the session.
Current version: 0.6.0 - See what's changed
**You:** "Where did we put the password reset code?"

**AI:** *(queries In Memoria)* "In `src/auth/password-reset.ts`, following the pattern we established in our last session..."
No re-explaining. No generic suggestions. Just continuous, context-aware assistance.
## Quick Start
### Installation
```bash
# Install globally
npm install -g in-memoria

# Or use directly with npx
npx in-memoria --help
```
### Configuration

**Claude Desktop** - add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```
**Claude Code CLI:**

```bash
claude mcp add in-memoria -- npx in-memoria server
```

**GitHub Copilot** - see the Copilot Integration section below.
### Usage

```bash
# Analyze and learn from your project
npx in-memoria learn ./my-project

# Or let AI agents trigger learning automatically
# (just start the server and let auto_learn_if_needed handle it)
npx in-memoria server
```
## Architecture

In Memoria is built on Rust + TypeScript, using the Model Context Protocol to connect AI tools to persistent codebase intelligence.
```
┌─────────────────────┐    MCP     ┌──────────────────────┐   napi-rs   ┌─────────────────────┐
│  AI Tool (Claude)   │◄──────────►│  TypeScript Server   │◄───────────►│     Rust Core       │
└─────────────────────┘            └──────────┬───────────┘             │ • AST Parser        │
                                              │                         │ • Pattern Learner   │
                                              │                         │ • Semantic Engine   │
                                              ▼                         │ • Blueprint System  │
                                   ┌──────────────────────┐             └─────────────────────┘
                                   │ SQLite (persistent)  │
                                   │ SurrealDB (in-mem)   │
                                   └──────────────────────┘
```
- **Rust layer** - fast, native processing
- **TypeScript layer** - MCP server and orchestration
- **Storage** - local-first
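To make the napi-rs boundary concrete, here is a minimal hypothetical sketch of how a TypeScript layer might load a native Rust core and fall back gracefully when the addon is unavailable. The names (`RustCore`, `analyzeFile`) and the fallback logic are illustrative assumptions, not In Memoria's actual binding surface.

```typescript
// Hypothetical sketch of the TypeScript ↔ Rust boundary.
// `RustCore`, `analyzeFile`, and the addon path are illustrative,
// not In Memoria's actual API.
interface RustCore {
  analyzeFile(path: string, source: string): { symbols: string[] };
}

function loadCore(): RustCore {
  try {
    // In the real project, napi-rs compiles the Rust core to a native
    // .node addon that would be loaded here, e.g.:
    //   return require("./index.node");
    throw new Error("native addon not built in this sketch");
  } catch {
    // Pure-TypeScript fallback so the server can still start.
    return {
      analyzeFile: (_path: string, source: string) => {
        // Naive fallback: collect declared function names with a regex.
        const symbols: string[] = [];
        const re = /function\s+(\w+)/g;
        let m: RegExpExecArray | null;
        while ((m = re.exec(source)) !== null) symbols.push(m[1]);
        return { symbols };
      },
    };
  }
}

const core = loadCore();
const result = core.analyzeFile("demo.ts", "function hello() {} function world() {}");
console.log(result.symbols); // → ["hello", "world"]
```

This split keeps the hot path (parsing, pattern analysis) in native code while the TypeScript side stays responsible for protocol handling.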
This isn't just another RAG system or static rules engine.
See AGENT.md for a complete tool reference with usage patterns and decision trees.

## GitHub Copilot Integration

In Memoria works with GitHub Copilot through custom instructions and chat modes.
This repository includes:

- `.github/copilot-instructions.md` - automatic guidance for Copilot
- `.github/chatmodes/` - three specialized chat modes
In Memoria integrates with GitHub Copilot Chat using MCP + Custom Agents (formerly called Chat Modes). These agents allow Copilot to query In Memoria’s persistent intelligence when working in Agent mode.
⚠️ Important: Copilot will only call MCP tools when the chat is in Agent mode (not Ask or Edit).
Create or edit `.vscode/mcp.json` in your workspace:

```json
{
  "servers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```
Open this file in VS Code and click Start when prompted, or start it manually.
This repository includes `.github/copilot-instructions.md`. VS Code automatically loads this file and applies its guidance to Copilot Chat; no additional setup is required.
This repository provides three Custom Agents for Copilot:
| Agent | Purpose |
|---|---|
| 🔍 inmemoria-explorer | Intelligent codebase navigation |
| 🚀 inmemoria-feature | Feature implementation using learned patterns |
| 🔎 inmemoria-review | Consistency & pattern-based code review |
⚠️ VS Code has renamed Chat Modes → Custom Agents
To ensure compatibility with current VS Code versions:
1. Create the folder `.github/agents/`
2. Move or copy files from `.github/chatmodes/` into `.github/agents/`
3. Rename each file: `*.chatmode.md` → `*.agent.md`

Example: `inmemoria-feature.chatmode.md` → `inmemoria-feature.agent.md`
If agents do not appear, check that the files are in `.github/agents/`.

### Example Prompts

"Where is the authentication logic?"
→ Copilot queries In Memoria's semantic index

"Add password reset functionality"
→ Copilot retrieves the relevant learned patterns

"Review this code for consistency"
→ Copilot compares against learned conventions
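Queries like these are typically answered by ranking candidates by embedding similarity. Here is a toy sketch of the idea, with hand-made vectors standing in for the real locally generated embeddings; it illustrates the technique, not In Memoria's actual code.

```typescript
// Toy illustration of semantic ranking. Real systems embed text with a
// local model; here the "embeddings" are hand-made 3-dimensional vectors.
type Doc = { path: string; vec: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const index: Doc[] = [
  { path: "src/auth/login.ts", vec: [0.9, 0.1, 0.0] },
  { path: "src/ui/button.ts", vec: [0.0, 0.2, 0.9] },
];

// Query vector that is "about authentication".
const query = [1.0, 0.0, 0.0];
const best = index
  .map((d) => ({ path: d.path, score: cosine(query, d.vec) }))
  .sort((x, y) => y.score - x.score)[0];

console.log(best.path); // → "src/auth/login.ts"
```

The same ranking generalizes to any number of files once each has a stored embedding, which is why the index can answer "where is X?" without re-reading the codebase.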
| Old Name | Current Name |
|---|---|
| Chat Modes | Custom Agents |
| Chat: Configure Chat Modes… | Chat: Configure Custom Agents |
| `.github/chatmodes/` | `.github/agents/` |
VS Code still recognizes legacy files, but .github/agents/*.agent.md is the recommended format going forward.
Language support comes from native AST parsing via tree-sitter.
Build artifacts (node_modules/, dist/, .next/, etc.) are automatically filtered out.
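A minimal sketch of this kind of filtering, under assumptions: `isBuildArtifact` is a hypothetical helper, and only `node_modules/`, `dist/`, and `.next/` are named in the text above; the other directory names are guesses.

```typescript
// Illustrative build-artifact filter. `isBuildArtifact` is a hypothetical
// helper; node_modules/dist/.next come from the README, the rest are guesses.
const IGNORED_DIRS = new Set(["node_modules", "dist", ".next", "build", "target"]);

function isBuildArtifact(filePath: string): boolean {
  // Skip a path if any of its segments names an ignored directory.
  return filePath.split("/").some((segment) => IGNORED_DIRS.has(segment));
}

console.log(isBuildArtifact("node_modules/react/index.js")); // → true
console.log(isBuildArtifact("src/auth/password-reset.ts")); // → false
```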
Let's be honest: In Memoria is early-stage software. It works, but it's not perfect.
This is open-source infrastructure for AI-assisted development. Currently a solo project by @pi22by7, but contributions are not just welcome, they're essential.
Before contributing, see CONTRIBUTING.md for development setup, guidelines, and ways to contribute.
How it compares:

- vs GitHub Copilot's memory
- vs Cursor's rules
- vs custom RAG
In Memoria works for both individual developers and teams:

- **Individual** - your AI assistant keeps context across your own sessions
- **Team** - share `.in-memoria.db` files to distribute learned patterns

## Building from Source

```bash
git clone https://github.com/pi22by7/in-memoria
cd in-memoria
npm install
npm run build
```
Requirements:
Development:

```bash
npm run dev          # Start in development mode
npm test             # Run test suite (98.3% pass rate)
npm run build:rust   # Build Rust components
```
Quality metrics:
## FAQ

**Q: Does this replace my AI coding assistant?**
A: No, it enhances them. In Memoria provides the memory and context that tools like Claude, Copilot, and Cursor can use to give better suggestions.

**Q: What data is collected?**
A: Everything stays local. No telemetry, no phone-home. Your code never leaves your machine. All embeddings are generated locally using transformers.js models.

**Q: How accurate is pattern learning?**
A: It improves with codebase size and consistency. Projects with established patterns see better results than small or inconsistent codebases. The system learns from frequency and repetition.

**Q: What's the performance impact?**
A: Minimal. Initial learning takes time (proportional to codebase size), but subsequent queries are fast. File watching enables incremental updates. Smart filtering skips build artifacts automatically.

**Q: What if analysis fails or produces weird results?**
A: Open an issue with details. Built-in timeouts and circuit breakers handle most edge cases, but real-world codebases are messy and we need your feedback to improve.

**Q: Can I use this in production?**
A: You can, but remember this is v0.6.x. Expect rough edges. Test thoroughly. Report issues. We're working toward stability but aren't there yet.

**Q: Why Rust + TypeScript?**
A: Rust for performance-critical AST parsing and pattern analysis. TypeScript for the MCP server and orchestration. Best of both worlds: fast core, flexible integration layer.

**Q: What about other AI tools (not Claude/Copilot)?**
A: Any tool supporting MCP can use In Memoria. We've tested with Claude Desktop, Claude Code, and GitHub Copilot. Others should work but may need configuration.
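The frequency-and-repetition learning described above can be illustrated with a toy sketch. The names (`classify`, `dominantStyle`) are hypothetical; In Memoria's real analysis is much richer, but the core idea is the same: tally the style of each observed identifier and adopt the dominant one.

```typescript
// Hypothetical sketch of frequency-based convention learning.
// The most frequent naming style observed in the codebase wins.
type Style = "camelCase" | "snake_case" | "unknown";

function classify(name: string): Style {
  if (/^[a-z]+(_[a-z0-9]+)+$/.test(name)) return "snake_case";
  if (/^[a-z]+([A-Z][a-z0-9]*)+$/.test(name)) return "camelCase";
  return "unknown";
}

function dominantStyle(identifiers: string[]): Style {
  // Count occurrences of each recognizable style.
  const counts = new Map<Style, number>();
  for (const id of identifiers) {
    const s = classify(id);
    counts.set(s, (counts.get(s) ?? 0) + 1);
  }
  // Pick the most frequent known style.
  let best: Style = "unknown";
  let bestCount = -1;
  for (const [style, count] of counts) {
    if (style !== "unknown" && count > bestCount) {
      best = style;
      bestCount = count;
    }
  }
  return best;
}

console.log(dominantStyle(["getUser", "fetchOrders", "parse_row", "loadConfig"]));
// → "camelCase"
```

This is also why larger, more consistent codebases learn better: more identifiers mean stronger frequency signals and fewer ties.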
## Roadmap

We're following a phased approach. See GitHub Projects for detailed tracking.
Project maintained by: @pi22by7
Before contributing: Please discuss your ideas on Discord, via email, or in an issue before starting work on significant features. This helps ensure alignment with project direction and avoids duplicate efforts.
MIT - see LICENSE
Built with ❤️ by @pi22by7 for the AI-assisted development community.
Try it: `npx in-memoria server`
Latest release: v0.6.0 - Smooth progress tracking and Phase 1-4 complete
In memoria: in memory. Because your AI assistant should remember.
Questions? Ideas? Join us on Discord or reach out at talk@pi22by7.me
Install In Memoria with a single command:

```bash
npx mdskills install pi22by7/in-memoria
```

This downloads the skill files into your project, and your AI agent picks them up automatically.
In Memoria works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Codex, Gemini CLI, Amp, Roo Code, Goose, opencode, Trae, Qodo, and Command Code. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.