
Claude, your friendly neighborhood chatbot, just got a little more human, for better or worse.
Anthropic announced that Claude can now remember details from past conversations without being reminded, a feature rolling out to Team and Enterprise users first.
Until now, Claude’s memory worked like that forgetful friend who insists you re-explain the joke every time.
Paid users could manually prompt it to remember past chats, but this new update means Claude will automatically weave in things like your project goals, team processes, and even picky client needs into its answers.
It’s less “one-off assistant,” more “colleague who actually read the Slack threads.”
The memory also carries over into Claude’s project tools.
Pro and Team users can feed it files and have it spin up diagrams, website mockups, or graphics, and now, instead of starting from scratch each time, Claude will keep the context alive.
Imagine handing it your design doc once and not having to babysit it through every follow-up prompt.
Of course, this raises the obvious question: what if Claude remembers the wrong things? Anthropic insists memory is “fully optional.”
Users can hop into their settings to view, edit, or delete whatever Claude has logged. Tell it to focus on your design style but forget your terrible deadline math, and it’ll oblige.
If this all sounds familiar, it’s because rivals are already playing in this space.
OpenAI’s ChatGPT and Google’s Gemini both have cross-chat memory features, though ChatGPT’s rollout reportedly coincided with a spike in “delusional” AI behavior, according to The New York Times.
So yes, memories make AI more helpful, but also occasionally more unhinged.
To balance things out, Anthropic is also giving all users access to incognito chats.
Fire one up, and Claude won’t store the conversation or use it later. Handy if you’re brainstorming gift ideas, venting about your boss, or just don’t want an AI keeping receipts.
Will Claude’s memory feature make AI assistants genuinely more helpful for ongoing projects, or does persistent memory create new privacy and security risks we’re not prepared for? Should users be concerned about AI systems building detailed profiles of their work habits and preferences, or are the productivity benefits worth the trade-offs? Tell us below in the comments, or reach us via our Twitter or Facebook.
