Because nobody will log decisions manually
Everything described in the rest of Memex assumes decisions make it into the graph. Developers don’t stop mid-flow to log a decision. They don’t pause a productive conversation with their AI agent to open a separate tool and fill in a form. They make the decision, they move on, and the decision is locked in a conversation thread that nobody else will ever read. This is not a discipline problem. It’s a design problem. The system must extract decisions from where they’re actually made (in conversation), not demand that people duplicate their thinking into a separate tool.

Passive extraction with lightweight confirmation
When a developer works with an AI agent through Memex AI’s MCP connection, the agent is already participating in decision-making conversations. It knows when a non-obvious choice has been made. It can recognise the shape of a decision: options were considered, trade-offs were weighed, a direction was chosen. Memex AI extracts these decisions passively. The agent identifies candidate decisions during the conversation and batches them. The developer doesn’t need to do anything differently; they just work. At natural pause points (end of a session, before a commit, when switching context), the system surfaces what it found.

Extraction is not silent. It’s not a background process that quietly populates the graph without anyone knowing; that would create a different trust problem. Instead, it produces a decision bundle: a batch of decisions from a session, presented for lightweight review before they enter the shared graph.
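The observe-then-surface flow described above can be sketched as a small accumulator. This is a minimal illustration, not Memex AI's actual implementation; all class and field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CandidateDecision:
    summary: str           # one-sentence description of the choice
    rationale: str         # compressed reasoning behind it
    conversation_ref: str  # pointer back to the source thread

@dataclass
class SessionExtractor:
    """Accumulates candidate decisions during a session and surfaces
    them as one batch at a natural pause point."""
    pending: list = field(default_factory=list)

    def observe(self, decision: CandidateDecision) -> None:
        # Called whenever the agent recognises the shape of a decision
        # in the conversation; nothing is shown to the developer yet.
        self.pending.append(decision)

    def flush(self) -> list:
        # Called at a pause point (end of session, before a commit).
        # Returns the batch for review and resets the buffer.
        batch, self.pending = self.pending, []
        return batch
```

The key property is that `observe` is silent and cheap, while `flush` is the only point where the developer is interrupted.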
Decision bundles: a merge request into the graph
A decision bundle is the unit of review. It’s designed to be as easy to process as a code diff: something a reviewer can form an opinion on in two minutes, with full context available if they need to go deeper. Each decision in a bundle has four parts.

A summary
One sentence, enough to understand the choice.
The rationale
Why this option, compressed to the essential reasoning.
Impact links
Which work items and existing decisions are affected.
Link to conversation
Full context, available but not required for review.
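The four parts above map naturally onto a record type. A minimal sketch, assuming hypothetical field and class names (none of these are confirmed Memex AI identifiers):

```python
from dataclasses import dataclass

@dataclass
class BundledDecision:
    summary: str             # one sentence, enough to understand the choice
    rationale: str           # why this option, compressed to the essentials
    impact_links: list[str]  # IDs of affected work items and prior decisions
    conversation_url: str    # full context, available but not required

@dataclass
class DecisionBundle:
    session_id: str
    decisions: list[BundledDecision]  # the unit a reviewer approves at once
```

A reviewer's default view would render only `summary` and `rationale`; `conversation_url` is the click-through for the rare decision that needs deeper inspection.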
Progressive disclosure
The bundle is designed around progressive disclosure. The default view shows the summary and rationale, enough to form an opinion in seconds. If a decision looks surprising or consequential, the reviewer clicks through to the conversation context. Most decisions won’t need this. The ones that do are exactly the ones that benefit from it. This mirrors how code review works. You scan the diff. Most changes are obvious. A few need closer inspection. The tool makes scanning fast and deep-diving possible, without forcing you to read every line of every file.

Why this changes the economics
Without extraction
A team of five developers, each making 3–5 decisions per day with their AI agents, produces 15–25 decisions daily that never enter the shared graph. After a month, there are hundreds of invisible decisions. The graph is incomplete by design, because it only contains what someone remembered to log.
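The arithmetic behind "hundreds of invisible decisions" checks out. A quick sketch of the figures from the paragraph above (workdays-per-month is an assumption, not stated in the text):

```python
developers = 5
low, high = 3, 5           # decisions per developer per day
workdays_per_month = 21    # assumption: a typical working month

daily = (developers * low, developers * high)
monthly = (daily[0] * workdays_per_month, daily[1] * workdays_per_month)

print(daily)    # range of decisions produced per day
print(monthly)  # range accumulated after one month
```

At 15–25 decisions per day, a month leaves the graph missing several hundred decisions if capture depends on manual logging.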
With extraction
Those same decisions are captured passively, bundled, and reviewed. The graph grows at the rate decisions are actually made, not at the rate humans are willing to do data entry. Review takes minutes per day, not hours. The graph is complete by default, not by heroic effort.