Your Memex contains your most sensitive work
The graph Memex holds is, by design, the most valuable and sensitive part of a software organisation. It’s where your strategic direction, architectural decisions, security processes, unreleased product thinking, and internal blueprints all live, connected. That’s the whole point: the graph is how AI agents and humans coordinate without losing context. But it also means that for many teams, what lives in Memex is exactly the kind of information they cannot, or will not, hand to an external SaaS provider.
Two ways to run Memex
Hosted SaaS
We run it for you. Sign up, connect an agent, start building. The simplest path to getting your team onto Memex. Ideal for teams that want ease of use and are comfortable with us operating the infrastructure.
Self-hosted
The entire Memex system ships as a Docker container. Run it on any virtual server with a Postgres database attached. 100% of your data stays inside your infrastructure. No outbound calls for your graph, your decisions, or your blueprints.
Self-hosted, by design
Self-hosting isn’t an afterthought or an enterprise-tier gate. It’s a core deployment mode with a minimal footprint.
One Docker container
The Memex server ships as a single container. Pull the image, run it, done.
One Postgres database
Any Postgres 15+ instance. Bring your own, managed or on-prem. Memex stores all state here, nothing else.
Any virtual server
A modest VM is enough. No Kubernetes cluster, no service mesh, no dozen-service architecture. Keep your ops surface small.
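The deployment described above can be sketched in two commands. This is an illustrative sketch only: the image name, port, and environment variable name are assumptions, not Memex’s documented interface.

```shell
# Hypothetical sketch: image name, port, and DATABASE_URL are
# illustrative assumptions, not Memex's documented configuration.

# Pull the single Memex server image.
docker pull ghcr.io/mindset/memex:latest

# Run it against any reachable Postgres 15+ database.
# All state lives in Postgres; the container itself is stateless.
docker run -d \
  --name memex \
  -p 8080:8080 \
  -e DATABASE_URL="postgres://memex:secret@db.internal:5432/memex" \
  ghcr.io/mindset/memex:latest
```

A stateless container plus one database keeps upgrades simple: pull the newer image, restart the container, and state survives in Postgres.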
No outbound dependencies
In self-hosted mode, your Memex instance does not call Mindset infrastructure for any core feature. Your graph stays with you.
The only outbound calls a self-hosted Memex makes are the ones you configure — to the LLM provider you choose, using the keys you supply.
Bring your own LLM keys
Memex uses LLMs for decision extraction, drift detection, and the agent-facing tool surface. You control which provider and which keys are used.
Your provider, your choice
Anthropic, OpenAI, Google, a private model in your own VPC, whatever your team has standardised on. Memex is provider-agnostic.
Your keys, your billing
You supply the API keys. Usage bills to your account, not ours. You see exactly what’s being spent, on which model, for which operation.
Your policy, your constraints
If your compliance team has signed off on a specific provider, Memex uses that provider. If you want to run a self-hosted LLM, point Memex at its endpoint.
Revocable at any time
Keys live in your configuration, not in Memex’s codebase. Rotate, revoke, or swap providers without touching the Memex install.
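Configuration along these lines is what the points above imply; the variable names here are hypothetical placeholders, not Memex’s actual settings.

```shell
# Hypothetical configuration sketch: variable names are illustrative,
# not Memex's actual settings.

# Point Memex at whichever provider your compliance team has approved.
export LLM_PROVIDER="anthropic"     # or "openai", "google", "custom"
export LLM_API_KEY="sk-ant-..."     # your key; usage bills to your account

# For a self-hosted model, supply its endpoint instead:
# export LLM_PROVIDER="custom"
# export LLM_BASE_URL="https://llm.internal.example/v1"
```

Because the keys live in configuration rather than in the install, rotating or revoking a key is a config change and a restart, nothing more.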
Why we think this matters
Many teams currently treat their project tracker and their wiki as lower-sensitivity systems because, frankly, those systems were never the source of truth for anything that mattered. The decisions lived in people’s heads. The strategy lived in a Google Doc somebody would revise next quarter. Memex changes the nature of the artefact. When a tool becomes the authoritative record of what your team has decided, why, and how things are built here, the sensitivity of that tool rises to the level of the codebase itself, or above.
Choosing between SaaS and self-hosted
Go SaaS if...
- You want the lightest operational footprint
- Your data policy allows a trusted third-party SaaS
- You want upgrades and new features to land automatically
- You’d rather focus on using Memex than running it
Go self-hosted if...
- Your data cannot leave your infrastructure
- You have compliance requirements (SOC 2 boundaries, GDPR residency, sector-specific regulations) that mandate internal hosting
- You already run Postgres and Docker and prefer the direct control
- Your security team wants to own the threat model end to end
Join the waitlist
Memex AI is in early access. Both SaaS and self-hosted deployment modes are available to early adopters. Request access at memex.ai.