serpro69/capy
4 stars · Last commit 2026-04-23
🦫 Privacy-first virtualization layer for LLM context with MCP protocol for tool access.
README preview
# capy

**🚧 Fork in progress, expect some dust 🚧**

**C**ontext-**A**ware **P**rompting ...or "**Y**et another solution to the LLM context problem"

[](https://github.com/serpro69/capy/stargazers) [](https://github.com/serpro69/capy/network/members) [](https://github.com/serpro69/capy/commits) [](LICENSE)

> [!IMPORTANT]
> This project was created with the help of Claude-Code. It is, however, reviewed, tested, and reworked with a human in the loop.
>
> No AI slop here. Purely AI-made skills are hot garbage, and that's putting it mildly.
>
> That said, if you have a problem with code written by AI - you've been warned. But then again, why would you be interested in AI-related configs and skills in the first place... `¯\_(ツ)_/¯`

## Privacy & Architecture

`capy` is not a CLI output filter or a cloud analytics dashboard. It operates at the MCP protocol layer - raw data stays in a sandboxed subprocess and never enters your context window. Web pages, API responses, file analysis, log files - everything is processed in complete isolation.

**Nothing leaves your machine.** No telemetry, no cloud sync, no usage tracking, no account required. Your code, your prompts, your session data - all local. The SQLite databases live in your home directory and die when you're done.
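The architecture described above can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not capy's actual code: all names (`CHILD`, `fetch_summarized`, the `pages` table) are assumptions. Bulky raw data is handled inside a child process and persisted to a local SQLite database; only a short summary crosses the process boundary, standing in for what would reach the LLM context window.

```python
import os
import sqlite3
import subprocess
import sys
import tempfile

# Hypothetical child-process script: fetches "raw data" (here a 10,000-char
# stand-in for a web page or API response), writes it to a local SQLite file,
# and prints only a one-line summary. The raw bytes never leave the sandbox.
CHILD = r"""
import sqlite3, sys
db_path = sys.argv[1]
raw = "x" * 10_000  # stand-in for a fetched web page / API response
con = sqlite3.connect(db_path)  # raw data stays on local disk
con.execute("CREATE TABLE IF NOT EXISTS pages (body TEXT)")
con.execute("INSERT INTO pages (body) VALUES (?)", (raw,))
con.commit()
con.close()
print(f"fetched {len(raw)} chars")  # only this summary leaves the subprocess
"""


def fetch_summarized(db_path: str) -> str:
    """Run the fetch in an isolated subprocess; return only its summary line."""
    out = subprocess.run(
        [sys.executable, "-c", CHILD, db_path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        db = os.path.join(tmp, "capy.db")
        summary = fetch_summarized(db)
        print(summary)  # the caller sees ~20 chars, not 10,000
```

The design point is the process boundary: even if the parent is an LLM agent, it can only ever see what the child chooses to print, while the full payload remains queryable from the local database.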