The Holy Grail of AI Chatbots: Why Remembering User Preferences Matters
AI is improving at a remarkable pace. It can now handle complex calculations, write code, and tease out patterns in data. Yet it still struggles with a basic skill that humans pick up naturally: remembering personal preferences and talking to people in a way that fits them.
Today, I want to share an illuminating experiment that compares how different AI systems handle user preferences. We'll compare the behavior of a traditional AI assistant, ChatGPT, with Letta (previously known as MemGPT), an open-source framework specifically designed to address the challenge of long-term memory in AI systems. Through this comparison, we'll explore how different approaches to AI memory can dramatically impact the user experience and potentially reshape how we think about AI assistants.
The Session Problem: A Modern-Day Groundhog Day
Imagine having a highly capable assistant who, every morning, completely forgets everything they learned about your work style the day before. They're brilliant at their job, but you must repeatedly explain that you prefer detailed reports, need technical explanations broken down step-by-step, or want your data presented in specific formats. This is the current reality with most AI systems.
The root of this problem lies in how traditional AI systems operate: each conversation exists in isolation, a fresh start without any memory of past interactions. While this approach has its benefits for privacy and resource management, it creates a frustrating user experience that fails to mirror natural human interaction.
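To make that statelessness concrete, here is a minimal sketch using OpenAI's Python SDK (the model name is illustrative). Each API call only knows the messages you pass in, so a new session that omits the earlier exchange has no way to recall the stated preference:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Session 1: the user states a preference mid-conversation.
session_1 = [
    {"role": "user", "content": "Who is Joe Biden?"},
    {"role": "assistant", "content": "Joe Biden is the 46th President of the United States."},
    {"role": "user", "content": "Please give me more detailed answers from now on."},
]
client.chat.completions.create(model="gpt-4o-mini", messages=session_1)

# Session 2: a fresh message list. The API is stateless, so nothing from
# session 1 carries over unless we resend it ourselves.
session_2 = [{"role": "user", "content": "Who is Joe Biden?"}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=session_2)
print(reply.choices[0].message.content)  # back to the short answer
```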
The Experiment: Testing AI Memory
To demonstrate this challenge, I conducted a simple yet revealing experiment with two AI systems: ChatGPT and Letta. The test was straightforward - ask each system about Joe Biden, express a preference for more detailed information, and then ask the same question in a new session to see if the preference was retained.
With ChatGPT, the initial interaction followed a predictable pattern. It provided a concise response: "Joe Biden is the 46th President of the United States, serving since January 20, 2021." When asked for more detail, it adapted beautifully, offering comprehensive information about Biden's career, policies, and background. However, in a new session, this learned preference vanished, and we were back to square one.
Having received a very brief reply from ChatGPT, I asked it not to write so briefly. If this were a human being, they would probably remember this preference.
But as expected, when I started a new session and asked the same question, I got exactly the same short answer.
Custom instructions for ChatGPT
Now you might say that I can add my preferences in the ChatGPT settings. Well, yes, but writing down every single preference that surfaces during a dialogue would be a daunting task, don't you think?
Memory for ChatGPT
In addition to (explicit) custom instructions, ChatGPT has a Memory feature that is meant to remember the preferences a user expresses during a dialogue. You have to turn it on, and in my experience it often did not work as expected. Here is my memory after I articulated my preference for more detailed answers.
Although it did not work for me as expected, I am quite sure this feature will gain more attention as ChatGPT and other LLMs and agents continue to evolve.
Persistent Memory with Letta
What is Letta (previously MemGPT)?
Letta is an open-source framework for building stateful LLM applications. Originally known as MemGPT, it provides developers with tools to create AI agents that maintain persistent memory across interactions through a database backend. The framework is model-agnostic, meaning it works with various LLM providers including OpenAI, Anthropic, vLLM, and Ollama.
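To give a feel for the developer experience, here is a minimal sketch along the lines of the Letta Python SDK quickstart. The local server URL, model handles, and exact method names are assumptions that may differ across SDK versions, so treat this as an illustration rather than copy-paste code:

```python
from letta_client import Letta

# Assumes a Letta server running locally on its default port.
client = Letta(base_url="http://localhost:8283")

# Memory blocks persist in the server's database across sessions.
agent = client.agents.create(
    memory_blocks=[
        {"label": "human", "value": "Name not yet known."},
        {"label": "persona", "value": "I am a helpful, detail-oriented assistant."},
    ],
    model="openai/gpt-4o-mini",                 # model handle, illustrative
    embedding="openai/text-embedding-3-small",  # embedding handle, illustrative
)

# Later - even from a different process - the same agent_id picks up
# whatever the agent has learned and stored about the user.
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[{"role": "user", "content": "Who is Joe Biden?"}],
)
for message in response.messages:
    print(message)
```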
Letta actively records and retrieves user preferences. In our experiment, after expressing a preference for detailed responses, Letta stored this information: "The user appreciates more detailed responses. Make sure to provide more comprehensive context and engage on a deeper level."
The crucial difference came at the start of a new session. Without any prompting, Letta retrieved this stored preference and automatically provided a detailed answer to a similar question (this time for Barack Obama), demonstrating true learning and adaptation to user preferences.
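Under the hood, the general pattern is simple: persist what the agent learns about the user, then load it back and fold it into the prompt at the start of every new session. Here is a deliberately simplified, hypothetical illustration of that pattern (not Letta's actual code), using SQLite as the stand-in for a database backend:

```python
import sqlite3

DB = "memory.db"  # hypothetical on-disk store; this is what survives sessions

def remember(user_id: str, note: str) -> None:
    """Persist something the agent learned about the user."""
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS prefs (user_id TEXT, note TEXT)")
        conn.execute("INSERT INTO prefs VALUES (?, ?)", (user_id, note))

def build_system_prompt(user_id: str) -> str:
    """Start every new session with the persisted preferences baked in."""
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS prefs (user_id TEXT, note TEXT)")
        rows = conn.execute(
            "SELECT note FROM prefs WHERE user_id = ?", (user_id,)
        ).fetchall()
    notes = "\n".join(f"- {note}" for (note,) in rows) or "- (nothing stored yet)"
    return f"You are a helpful assistant. Known user preferences:\n{notes}"

# Session 1: the agent notices and stores a preference.
remember("alice", "The user appreciates more detailed responses.")

# Session 2 (a fresh process): the preference is injected automatically.
print(build_system_prompt("alice"))
```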
Why This Matters: Beyond Convenience
This isn't merely about saving a few seconds of interaction time. The implications of persistent memory in AI systems extend far deeper into how we work and interact with technology:
For developers, it means not having to repeatedly specify code documentation preferences or styling conventions. For data analysts, it translates to consistent visualization formats without constant reconfiguration. For writers, it ensures consistent tone and style guidance across sessions.
Consider the compound effect of these small interactions across an organization. The time spent repeatedly training AI systems to understand individual preferences could instead be spent on more valuable tasks. Moreover, the cognitive load of constantly having to remember and restate preferences diminishes the very efficiency gains AI promises to deliver.
From Stateless to Stateful
The shift from traditional stateless AI interactions to stateful, memory-aware systems represents a fundamental evolution in how we think about AI assistants. Letta's framework, focusing on transparent long-term memory, demonstrates how AI can maintain context and preferences across sessions while respecting privacy and security considerations.
This approach mirrors how human relationships develop over time. Just as we learn and remember how our colleagues and friends prefer to work and communicate, AI systems should build up an understanding of user preferences that persists and evolves.
Looking Forward: The Future of AI Interaction
The real advancement in AI isn't just about making models better at generic tasks - it's about making them better at understanding and adapting to individual users. This evolution points toward AI assistants that don't just seem intelligent but become truly effective partners in our daily work.
The future of AI lies not in raw computational power or the ability to process vast amounts of data, but in the capacity to remember who we are and how we work best. After all, the most effective assistant isn't necessarily the one who knows the most, but the one who best understands how to work with us.