

Because nobody will log decisions manually

Any system that depends on a human remembering to update something will go stale.
Everything described in the rest of Memex assumes decisions make it into the graph. Developers don’t stop mid-flow to log a decision. They don’t pause a productive conversation with their AI agent to open a separate tool and fill in a form. They make the decision, they move on, and the decision is locked in a conversation thread that nobody else will ever read. This is not a discipline problem. It’s a design problem. The system must extract decisions from where they’re actually made (in conversation), not demand that people duplicate their thinking into a separate tool.

Passive extraction with lightweight confirmation

When a developer works with an AI agent through Memex AI’s MCP connection, the agent is already participating in decision-making conversations. It knows when a non-obvious choice has been made. It can recognise the shape of a decision: options were considered, trade-offs were weighed, a direction was chosen. Memex AI extracts these decisions passively. The agent identifies candidate decisions during the conversation and batches them. The developer doesn’t need to do anything differently; they just work. At natural pause points (end of a session, before a commit, when switching context), the system surfaces what it found.
Extraction is not silent. It’s not a background process that quietly populates the graph without anyone knowing; that would create a different trust problem. Instead, it produces a decision bundle: a batch of decisions from a session, presented for lightweight review before they enter the shared graph.
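The "shape of a decision" the agent recognises can be sketched as a heuristic over a transcript. This is an illustrative keyword filter, not Memex AI's actual extractor; a real agent would use its understanding of the conversation, and the marker lists here are invented for the example.

```python
import re

# Hypothetical markers. A message looks "decision-shaped" when it both
# weighs alternatives AND commits to a direction.
OPTION_MARKERS = re.compile(
    r"\b(options?|alternatives?|either|versus|vs\.?|trade-?offs?)\b", re.I
)
CHOICE_MARKERS = re.compile(
    r"\b(let's go with|we'll use|decided|chose|picking|going with)\b", re.I
)

def candidate_decisions(messages):
    """Return messages that mention alternatives and pick one."""
    return [
        m for m in messages
        if OPTION_MARKERS.search(m) and CHOICE_MARKERS.search(m)
    ]

session = [
    "The trade-off is recall vs. precision; let's go with embeddings.",
    "Lunch at noon?",
    "We considered two alternatives but parked the question.",
]
found = candidate_decisions(session)
```

Only the first message qualifies: the third weighs options but never commits, so it stays out of the batch, which is exactly the distinction the extractor needs to draw.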

Decision bundles: a merge request into the graph

A decision bundle is the unit of review. It’s designed to be as easy to process as a code diff, something a reviewer can form an opinion on in two minutes, with full context available if they need to go deeper.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Decision Bundle · 3 decisions · from @sarah · 14:32 today
Strategy: S3 — Proactive Role Discovery
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

D12: Use embedding similarity over keyword matching for discovery
  Rationale: Keyword matching misses transferable skills that use
  different vocabulary across industries. Embedding similarity
  captures semantic relatedness.
  Affects: WI-1 (matching engine implementation)
  [Approve]  [Reject]  [Flag for discussion]
  ↳ View conversation context

D13: Limit initial discovery to 3 occupations per candidate
  Rationale: Showing too many options overwhelms uncertain
  candidates. The agent can offer more if the candidate engages.
  Resolves: D1 (how many discovery occupations)
  [Approve]  [Reject]  [Flag for discussion]
  ↳ View conversation context

D14: Run matching server-side, not in-browser
  Rationale: Embedding computation is too heavy for client.
  Server-side also enables caching across candidates with
  similar profiles.
  Resolves: D6 (server vs client)
  [Approve]  [Reject]  [Flag for discussion]
  ↳ View conversation context

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Approve all]  [Review individually]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Each decision in the bundle has:

A summary: one sentence, enough to understand the choice.

The rationale: why this option, compressed to the essential reasoning.

Impact links: which work items and existing decisions are affected.

A link to the conversation: full context, available but not required for review.
Plus three actions: approve (enters the graph), reject (discarded with reason), or flag (needs team discussion).
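The bundle shape above can be sketched as a small data model. The field names here are illustrative assumptions, not Memex AI's actual schema; they just mirror the summary, rationale, impact links, and conversation link described above.

```python
from dataclasses import dataclass, field

# Hypothetical data model for a decision bundle; field names are
# illustrative, not the real Memex AI schema.
@dataclass
class Decision:
    id: str                  # e.g. "D12"
    summary: str             # one sentence, enough to understand the choice
    rationale: str           # why this option, compressed
    affects: list = field(default_factory=list)   # impacted work items
    resolves: list = field(default_factory=list)  # open decisions this closes
    conversation_url: str = ""                    # full context, optional to read
    status: str = "pending"  # pending | approved | rejected | flagged

@dataclass
class DecisionBundle:
    author: str
    strategy: str
    decisions: list = field(default_factory=list)

bundle = DecisionBundle(
    author="@sarah",
    strategy="S3 - Proactive Role Discovery",
    decisions=[
        Decision(
            "D12",
            "Use embedding similarity over keyword matching for discovery",
            "Keyword matching misses transferable skills.",
            affects=["WI-1"],
        ),
    ],
)
```

Every decision starts as `pending`: nothing enters the shared graph until a reviewer acts on it.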

Progressive disclosure

The bundle is designed around progressive disclosure. The default view shows the summary and rationale, enough to form an opinion in seconds. If a decision looks surprising or consequential, the reviewer clicks through to the conversation context. Most decisions won’t need this. The ones that do are exactly the ones that benefit from it.

This mirrors how code review works. You scan the diff. Most changes are obvious. A few need closer inspection. The tool makes scanning fast and deep-diving possible, without forcing you to read every line of every file.

Why this changes the economics

Without extraction

A team of five developers, each making 3–5 decisions per day with their AI agents, produces 15–25 decisions daily that never enter the shared graph. After a month, there are hundreds of invisible decisions. The graph is incomplete by design, because it only contains what someone remembered to log.

With extraction

Those same decisions are captured passively, bundled, and reviewed. The graph grows at the rate decisions are actually made, not at the rate humans are willing to do data entry. Review takes minutes per day, not hours. The graph is complete by default, not by heroic effort.
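The back-of-envelope behind these numbers, assuming roughly 20 working days per month:

```python
# Figures from the text above: five developers, 3-5 decisions
# per developer per day. Workdays-per-month is an assumption.
developers = 5
decisions_per_dev = (3, 5)   # low and high estimate
workdays = 20

daily = tuple(developers * d for d in decisions_per_dev)   # 15-25 per day
monthly = tuple(d * workdays for d in daily)               # 300-500 per month
```

That is 300 to 500 decisions a month from one five-person team, which is why a graph fed only by manual logging stays permanently incomplete.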

The MCP tools

# Extraction
extract_decisions(session_context)  → candidate decisions from a conversation
create_decision_bundle(decisions[]) → bundle for review

# Review
get_pending_bundles(account_id)     → bundles awaiting review
review_decision(bundle_id, decision_id, action, reason?)
  action: approve | reject | flag
approve_bundle(bundle_id)           → approve all decisions in the bundle
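The review semantics behind these tools can be sketched with an in-memory store. This is an assumption-laden illustration, not the real MCP server: the class, its method bodies, and the rule that rejections require a reason are invented here, though the tool names and actions come from the list above.

```python
# Illustrative in-memory sketch of the review flow; real calls would go
# over MCP, and these signatures are assumptions, not the actual API.
class BundleStore:
    def __init__(self):
        self.bundles = {}  # bundle_id -> {decision_id: status}

    def create_decision_bundle(self, bundle_id, decisions):
        # Every decision starts pending; nothing enters the graph yet.
        self.bundles[bundle_id] = {d: "pending" for d in decisions}
        return bundle_id

    def get_pending_bundles(self):
        return [
            b for b, ds in self.bundles.items()
            if any(s == "pending" for s in ds.values())
        ]

    def review_decision(self, bundle_id, decision_id, action, reason=None):
        assert action in ("approve", "reject", "flag")
        if action == "reject" and not reason:
            raise ValueError("rejections need a reason")
        self.bundles[bundle_id][decision_id] = action

    def approve_bundle(self, bundle_id):
        # Approve everything still pending; don't overwrite flags/rejects.
        for d, s in self.bundles[bundle_id].items():
            if s == "pending":
                self.bundles[bundle_id][d] = "approve"

store = BundleStore()
store.create_decision_bundle("B1", ["D12", "D13", "D14"])
store.review_decision("B1", "D13", "flag")
store.approve_bundle("B1")
```

Note that `approve_bundle` only touches decisions still pending, so a flag raised during individual review survives a later "Approve all".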