The open-source, local-first alternative to NotebookLM
Drop in your PDFs, images, audio, video, URLs, or YouTube links — Memorwise chunks and embeds everything on your machine, then lets you chat with your documents using whichever LLM you prefer.
How it works
Three steps from documents to insights. All processing happens on your machine.
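The first of those steps, splitting documents into chunks before embedding, can be sketched as overlapping text windows. Everything below (`chunkText`, the window and overlap sizes) is illustrative, not Memorwise's actual implementation:

```typescript
// Hypothetical sketch of local chunking: split a document's text into
// fixed-size windows that overlap, so context carries across chunk
// boundaries before each chunk is embedded on your machine.
function chunkText(text: string, size = 512, overlap = 64): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
    start += size - overlap; // step forward, keeping `overlap` chars of context
  }
  return chunks;
}
```

The overlap is what keeps a sentence that straddles a chunk boundary retrievable from either side.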
Everything you need to learn from documents
Chat, study, graph, listen. All powered by AI, all running locally on your machine.
Ask questions, get cited answers
Chat naturally with your documents. Every answer includes source citations so you can verify and dig deeper.
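One way citations like this work is that each retrieved chunk keeps a pointer back to its source file, so the answer can reference [1], [2], and so on. This is a minimal sketch of that idea; the types and function names are hypothetical, not Memorwise's API:

```typescript
// Illustrative only: retrieved chunks carry their source, so the prompt
// context and the citation list can be built from the same records.
interface Chunk {
  text: string;
  source: string;
  page?: number;
}

function buildCitedContext(chunks: Chunk[]): { context: string; citations: string[] } {
  // Human-readable citation list: "[1] paper.pdf, p. 3"
  const citations = chunks.map(
    (c, i) => `[${i + 1}] ${c.source}${c.page !== undefined ? `, p. ${c.page}` : ""}`
  );
  // Context handed to the LLM, with matching [n] markers it can cite
  const context = chunks.map((c, i) => `[${i + 1}] ${c.text}`).join("\n");
  return { context, citations };
}
```

Because the markers in the context match the citation list one-to-one, every claim in the answer can be traced back to a specific file and page.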

See how ideas connect
AI extracts concepts and maps relationships across your sources. Discover connections you would have missed.

Use any LLM you want
Run completely free with local models, or connect to cloud providers. Mix and match different models for different tasks.
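Mixing models per task amounts to a small routing table. The shape below is a hypothetical sketch, not Memorwise's real settings schema, and the model names are just examples:

```typescript
// Hypothetical per-task model routing: local models where free is fine,
// a cloud model where quality matters. Names are examples only.
type Task = "chat" | "embedding" | "summarize";

const modelFor: Record<Task, { provider: string; model: string }> = {
  chat: { provider: "ollama", model: "llama3.1:8b" }, // free, runs locally
  embedding: { provider: "ollama", model: "nomic-embed-text" },
  summarize: { provider: "openai", model: "gpt-4o-mini" }, // optional cloud
};

function resolveModel(task: Task): string {
  const { provider, model } = modelFor[task];
  return `${provider}/${model}`;
}
```

Routing by task rather than hard-coding one model is what lets embeddings stay local and free even if chat uses a cloud provider.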
Why local-first?
Your documents stay on your machine. No cloud uploads, no accounts, no subscriptions.
One command to get started
Run a single command to clone, install, and launch. Then configure at least one LLM provider in settings and you're ready to go.
# One command: clones, installs, and starts
npx memorwise

# Or set it up manually
git clone https://github.com/robzilla1738/Memorwise.git
cd Memorwise
npm install
npm run dev

Then open http://localhost:3000 in your browser.

Works with your AI tools
Connect to Claude Code, Cursor, or any MCP-compatible assistant to interact with your notebooks from your editor.
{
"mcpServers": {
"memorwise": {
"command": "node",
"args": ["./mcp-server.js"]
}
}
}

Your data, your machine
Everything lives in a single directory. Back it up, move it, delete it. You own every byte.
.openlm/
├── openlm.db — SQLite database
├── lancedb/ — Vector embeddings
├── sources/ — Uploaded files
└── whisper-models/ — Local Whisper models

To keep your data somewhere else, point Memorwise at a custom directory:

MEMORWISE_DATA_DIR=/path/to/data npm run dev

Start chatting with your documents
Open source, local-first, and free. Clone the repo and be up and running in 2 minutes.