Show HN: Mem0 Browser Extension: Shared Memory Across ChatGPT, Claude, Perplexity
Hey HN! We're Deshraj and Taranjeet. We've been working on a startup called Mem0, building an open-source memory layer for AI apps and agents (https://news.ycombinator.com/item?id=41447317). We also kept running into our own daily frustrations with AI assistants forgetting everything between conversations. Over a weekend, we decided to hack together a Chrome extension to solve this for ourselves.
The problem was simple: we were constantly re-explaining our context across platforms when switching between ChatGPT, Claude, and Perplexity. Start a coding discussion in ChatGPT, switch to Claude for a different perspective, jump to Perplexity for research—you're starting from scratch each time. We thought others might find this useful too, so we're sharing it with the HN community.
Our solution is built on our unified memory layer that works across multiple LLMs, accessible through a simple Chrome extension. Here’s a quick demo of how it works: https://youtu.be/cByzXztn-YY
The key features include:
- Cross-LLM Memory: Start a conversation in ChatGPT, then continue in Claude or Perplexity without losing your context. This makes it easy to switch between models while maintaining coherence in your interactions (a rough sketch of this flow follows the feature list).
- Customizable Control: Our dashboard lets you manage memories directly—you can add, edit, or delete memories, ensuring that your context stays relevant and accurate across all your LLM interactions.
- Sync with ChatGPT Memories: If you've been using ChatGPT's memory feature, Mem0 can sync with it, creating a consistent experience across your preferred LLMs.
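To make the cross-LLM flow concrete, here's a rough sketch of what an extension like this does conceptually when you submit a prompt: query the memory layer for relevant memories and prepend them to the message before it reaches whichever model you're using. The `searchMemories` helper, the endpoint URL, and the response shape below are illustrative assumptions, not the actual extension code or the real Mem0 API surface.

```typescript
// Illustrative sketch only: endpoint, payload, and response shape are assumptions.
const MEMORY_API = "https://example-memory-api/search"; // placeholder, not a real endpoint
const API_KEY = "YOUR_API_KEY";                          // placeholder credential

interface Memory {
  id: string;
  text: string;
  score: number;
}

// Fetch memories relevant to the new prompt from the shared memory layer.
async function searchMemories(query: string, userId: string): Promise<Memory[]> {
  const res = await fetch(MEMORY_API, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Token ${API_KEY}` },
    body: JSON.stringify({ query, user_id: userId, limit: 5 }),
  });
  return (await res.json()) as Memory[];
}

// Prepend retrieved memories so ChatGPT, Claude, and Perplexity all see the same context.
async function enrichPrompt(userPrompt: string, userId: string): Promise<string> {
  const memories = await searchMemories(userPrompt, userId);
  if (memories.length === 0) return userPrompt;
  const context = memories.map((m) => `- ${m.text}`).join("\n");
  return `Relevant context from my past conversations:\n${context}\n\n${userPrompt}`;
}
```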
We use a hybrid data architecture that combines graph, vector, and key-value stores to manage memories. This setup enables efficient memory retrieval based on relevance, recency, and context, ensuring your interactions remain meaningful across all apps.
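For a sense of how those three signals could be combined, here is a minimal ranking sketch that blends vector similarity (relevance), an exponential recency decay, and a graph-derived context boost. The field names, weights, and decay constant are assumptions for illustration, not Mem0's actual scoring logic.

```typescript
// Illustrative-only ranking that blends relevance, recency, and context signals.
interface StoredMemory {
  text: string;
  embedding: number[];      // from the vector store
  updatedAt: number;        // epoch ms, from the key-value store
  graphNeighbors: string[]; // related entities, from the graph store
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function scoreMemory(
  m: StoredMemory,
  queryEmbedding: number[],
  activeEntities: Set<string>
): number {
  // Relevance: semantic similarity between the memory and the current query.
  const relevance = cosine(m.embedding, queryEmbedding);

  // Recency: exponential decay so recently touched memories rank higher.
  const ageDays = (Date.now() - m.updatedAt) / 86_400_000;
  const recency = Math.exp(-ageDays / 30);

  // Context: boost memories linked to entities active in the current conversation.
  const overlap = m.graphNeighbors.filter((e) => activeEntities.has(e)).length;
  const context = Math.min(1, overlap / 3);

  // Weights are arbitrary for this sketch.
  return 0.6 * relevance + 0.25 * recency + 0.15 * context;
}
```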
The Chrome extension is MIT licensed and available on GitHub. Currently, it uses our hosted version of Mem0 for simplicity and stability. But we plan to add support for self-hosting using the open-source version of Mem0.
Try it out:
- Extension: https://mem0.dev/extension
- Source code: https://github.com/mem0ai/mem0-chrome-extension
We'd love to hear any feedback and suggestions!
This seems interesting. How do you actually decide what to fetch from the memory that is relevant? "This setup enables efficient memory retrieval based on relevance, recency, and context" -- but how does it actually "decide" what to retrieve?
I was just thinking about this recently! Eventually I settled on keeping a .txt file that I manually update whenever I want it to remember something, and uploading it to every conversation.
Don't larger conversations also consume more tokens, therefore hitting the usage limit faster?
That's great! Does the extension work with API UIs like open-webui or LibreChat?