claude-mem uses a compaction approach. It records session activity, compresses it, and injects summaries into future sessions. Great for replaying what happened.
A-MEM builds a self-evolving knowledge graph. Memories aren’t compressed logs. They’re atomic insights that automatically link to related memories and update each other over time. Newer memories impact past memories.
For example: if Claude learns “auth uses JWT” in session 1, then learns “JWT tokens expire after 1 hour” in session 5, A-MEM links these memories and updates the context on both. The older memory now knows about expiration. With compaction, these stay as separate compressed logs that don’t talk to each other.
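The linking-and-evolution step described above can be sketched in a few lines. This is a toy illustration, not A-MEM's real API: the `Memory` fields and `add_memory` helper are hypothetical, and a real system would use an LLM to rewrite the older memory's context rather than concatenating strings.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """One atomic insight; field names here are illustrative, not A-MEM's."""
    text: str
    links: list = field(default_factory=list)  # indices of related memories
    context: str = ""                          # evolving summary from links

def add_memory(store: list, text: str, related_to=None) -> int:
    """Append a new memory; if it relates to an older one, link both ways
    and refresh the older memory's context (the 'evolution' step)."""
    mem = Memory(text=text)
    store.append(mem)
    idx = len(store) - 1
    if related_to is not None:
        old = store[related_to]
        old.links.append(idx)
        mem.links.append(related_to)
        # A-MEM would have an LLM rewrite this; we just concatenate.
        old.context = f"{old.context} related: {text}".strip()
    return idx

store = []
jwt = add_memory(store, "auth uses JWT")                  # session 1
add_memory(store, "JWT tokens expire after 1 hour", jwt)  # session 5
print(store[jwt].context)  # the older memory now mentions expiration
```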
Meta-analysis: make Claude write a blog post by connecting and deriving a set of concepts across a range of books, articles, papers, etc. Basically a syntopical reading machine. It's a meaty side project (and expensive), but I am curious how far I can push this. My current approach relies on a tool that I built recently [0]; it's an agentic memory, but I am using a new memory model based on Zettelkasten principles.
What is really cool about it is that it natively captures connections between atomic ideas and evolves them, which I believe gets me one step closer to a syntopical reading machine.
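The cross-source connections that make reading "syntopical" could be surfaced from such a store roughly like this. A hypothetical sketch: the note dicts and `syntopical_index` helper are invented for illustration, and real atomic notes would be linked by an LLM rather than by exact concept strings.

```python
from collections import defaultdict

# Hypothetical store of atomic notes, each tagged with its source work.
notes = [
    {"concept": "spaced repetition", "source": "Book A"},
    {"concept": "spaced repetition", "source": "Paper B"},
    {"concept": "chunking", "source": "Book A"},
]

def syntopical_index(notes):
    """Group atomic notes by concept; concepts spanning multiple sources
    are the cross-work connections a syntopical read would surface."""
    idx = defaultdict(set)
    for n in notes:
        idx[n["concept"]].add(n["source"])
    return {c: sorted(s) for c, s in idx.items() if len(s) > 1}

print(syntopical_index(notes))  # {'spaced repetition': ['Book A', 'Paper B']}
```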
Thanks for the pointer! But it seems to me beads is a different tool? Beads is a task tracker for multi-agent systems; A-MEM is agentic memory for accumulated knowledge.
> It’s a cynical way to view the C-staff of a company. I think it’s also inaccurate: from my limited experience, the people who run large tech companies really do want to deliver good software to users.
I strongly disagree with this statement. What the C-staff cares about is shareholder value. What middle management cares about is empire building and promotions.
> for instance, to make it possible for GitHub’s 150M users to use LaTeX in markdown - you need to coordinate with many other people at the company, which means you need to be involved in politics.
You presented your point in a misleading way. I would classify this as collaboration/communication rather than politics.
Politics is when you need to tick off useless boxes for your promo, when you try to take credit for work you haven't helped with, when you throw your colleague under the bus, when you get an undeserved performance rating because the manager thinks you are his good boy. There's a lot more; I haven't read any of your previous blogs, but all of these things are what engineers dread when we refer to politics.
This feels a bit like semantics. To get something big done you have to build consensus (e.g. on what to build and what resources to dedicate to it) and align incentives. Oftentimes these things require building relationships and trust first. I would consider all of these things to be a part of politics, but your definition seems to only include the bad stuff.
> You presented your point in a misleading way. I would classify this as collaboration/communication rather than politics.
Collaboration and communication are key parts of politics, though.
At its core, politics is simply the dynamics within a group of people. Since we innately organize into hierarchies, and power/wealth/fame are appealing to many, this inevitably leads to mind games, tension, and conflict.
But in order to accomplish anything within an organization, a certain level of politics must be involved. It's fine to find this abhorrent and to try to avoid it, but that's just the reality of our society. People who play this game the best have the largest impact and are rewarded; those who don't usually have less impact and are often overlooked.
It's always worth being skeptical when someone appeals with the term "good". I'm sure there are people who run large tech companies who want to deliver "good software", but it's such a meaninglessly vague designation that it being true doesn't matter. I can't speak to the motivations of C-suites I've never met, but I can say for sure that my idea of "good software" is very different to theirs.
Politics is accruing and deploying political capital within an organisation - or less abstractly, building relationships and using them.
What you’re describing is a particular form of manipulative and divisive politics which is performed by insecure, desperate or selfish people.
Many engineers are not good at building relationships (the job of coding isn't optimal for it, after all), so painting the people who are good at it as narcissistic may be comforting but isn't correct.
Using an LLM-as-judge architecture to optimize multi-agent system prompts and configurations. For now it's achieved through an LLM-based consensus system that evaluates another LLM's output and, based on its performance on a specific task, tunes the architecture and the prompt, e.g. refining the prompt, switching the base model to a smaller or cheaper one, etc.
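The judge-consensus-tune loop might look like the following. This is a minimal sketch under stated assumptions: `judge` is a stand-in for a real LLM scoring call, and the config fields, model names, and threshold are invented for illustration, not the commenter's actual system.

```python
def judge(output: str) -> float:
    """Placeholder judge: a real one would prompt an LLM to score 0-1.
    Here we use a dummy length heuristic just to make the loop runnable."""
    return min(1.0, len(output) / 100)

def consensus_score(output: str, judges: int = 3) -> float:
    """Average several judge passes to reduce single-judge noise."""
    return sum(judge(output) for _ in range(judges)) / judges

def tune(config: dict, output: str, threshold: float = 0.8) -> dict:
    """If consensus says the task is handled well, try a cheaper model;
    if the output scores poorly, refine the prompt instead."""
    if consensus_score(output) >= threshold:
        return {**config, "model": "small-cheap-model"}  # hypothetical name
    return {**config, "prompt": config["prompt"] + " Be more specific."}

cfg = {"model": "large-model", "prompt": "Summarize the document."}
cfg = tune(cfg, "A long, detailed, accurate summary of the document. " * 3)
print(cfg["model"])  # a high consensus score downgrades to the cheaper model
```

With real judges, the averaging step is what makes the consensus robust: one biased scoring pass gets diluted by the others.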