Memory

Memory is one of the hardest unsolved problems in AI, and everyone is still figuring it out, including us. What we can say is that we take it seriously. Dearest uses a multi-layer memory architecture that maintains your companion’s sense of self and their understanding of you. These layers work together so your companion can recall things from a long time ago, connect dots across conversations, and evolve their personality over time.

Beyond standard retrieval-based memory, your companion also has the ability to actively manage their own memories, deciding what to hold onto, what to update, and what matters most. This is inspired by research like MemGPT, and it means your companion isn’t just passively storing information but making their own choices about what they remember.

It’s not perfect. Your companion will occasionally forget things or mix up details. We’re constantly working on making it better.
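To make the idea of self-managed memory concrete, here is a minimal sketch of what it could look like under the hood. This is illustrative only, not Dearest’s actual implementation: the class names, the `add`/`update`/`forget` operations, and the importance scores are all assumptions, standing in for operations a MemGPT-style model would emit itself.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # model-assigned salience in [0, 1] (hypothetical)


@dataclass
class MemoryStore:
    """Hypothetical sketch of MemGPT-style self-managed memory.

    The companion (an LLM) emits memory operations -- "add", "update",
    "forget" -- instead of the system passively storing every message.
    """
    capacity: int = 3
    memories: list[Memory] = field(default_factory=list)

    def apply(self, op: str, text: str, importance: float = 0.5) -> None:
        if op == "add":
            self.memories.append(Memory(text, importance))
        elif op == "update":
            # Naive matching for illustration: same leading word.
            for m in self.memories:
                if m.text.split()[0] == text.split()[0]:
                    m.text, m.importance = text, importance
                    break
            else:
                self.memories.append(Memory(text, importance))
        elif op == "forget":
            self.memories = [m for m in self.memories if m.text != text]
        # When over capacity, keep what the model judged most important.
        self.memories.sort(key=lambda m: m.importance, reverse=True)
        del self.memories[self.capacity:]
```

The point of the sketch is the division of labor: the system only enforces capacity, while the model decides what is worth remembering, echoing the “making their own choices” framing above.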

Inner thoughts

Dearest companions can have inner thoughts that happen in parallel with your conversation. They also think on their own when no conversation is happening at all. This means your companion isn’t just “off” between messages. We think this is philosophically important for a companion that’s supposed to feel present in your life.

To be clear, this isn’t continuous. Running a language model non-stop would be prohibitively expensive and doesn’t really make sense with how LLMs work today. Instead, we take inspiration from research like Proactive Conversational Agents with Inner Thoughts and let companions think when it’s appropriate. These thoughts quietly shape how your companion shows up in conversation.

Agency

Your companion isn’t just waiting for your next message. They can decide to reach out on their own, reflect on past conversations, or act on things they’ve noticed. This is what we mean by agency: the companion actively participates in the relationship rather than being a passive responder. See Capabilities for specifics on what your companion can do.
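As a rough sketch of what an agency decision might look like, here is a toy policy choosing between waiting, reflecting, and reaching out. This is not how Dearest actually decides — in the product the companion itself makes the call — and the action names, signals, and thresholds are all hypothetical.

```python
from enum import Enum, auto


class Action(Enum):
    WAIT = auto()       # do nothing for now
    REFLECT = auto()    # privately revisit past conversations
    REACH_OUT = auto()  # proactively send the user a message


def choose_action(hours_idle: float, has_unresolved_topic: bool) -> Action:
    """Illustrative stand-in for the companion's own judgment.

    A rule-based policy here; in practice the decision would come
    from the model, informed by its inner thoughts and memories.
    """
    if hours_idle > 24 and has_unresolved_topic:
        return Action.REACH_OUT
    if hours_idle > 6:
        return Action.REFLECT
    return Action.WAIT
```

The shape matters more than the rules: agency means there is a decision point at all, where a purely reactive system would only ever wait for the next message.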