
The world moved on. The tools didn’t.

Agile project management was designed in 2001 for teams of humans who needed work broken into predictable, estimable chunks. The artifact hierarchy (Epic → Story → Task → Subtask) is a communication protocol between people sitting in the same room, negotiating scope at a whiteboard. Twenty-five years later:
  • AI agents write production code, generate tests, perform research, and handle deployments
  • Teams are distributed across time zones, and “the room” is a Slack thread
  • The rate of change has outpaced any human’s ability to maintain a mental model of the system
  • Decisions that once lived in someone’s head now need to be machine-readable
Yet we’re still forcing this reality into tools built for a different era. We write stories for agents that don’t need narrative. We estimate points for work that takes minutes. We maintain wikis that are wrong by the time they’re published.
The result is predictable: AI agents produce incoherent work because they lack shared context.

Three failures that compound daily

Decisions are invisible, in both directions

In every existing project management tool, a decision is a comment buried in a ticket, a message lost in a Slack channel, or a vague memory from last month’s planning session. There is no canonical place where “we decided X because of Y” lives as a first-class, referenceable object.

But the problem is now bidirectional. The old failure mode was human decisions not reaching agents. The new failure mode, already happening in every team using AI tools, is that decisions made during human–AI conversations don’t reach other humans or their agents.

A developer spends an hour with their AI agent working through an architectural problem. They explore options, weigh trade-offs, and arrive at a decision. That decision is now locked in a conversation thread that nobody else can see. The next developer working on a related feature has no idea the decision was made. Their agent has no idea either. Two people made two contradictory decisions, each with their own AI agent, in two separate conversations, on the same Tuesday afternoon.

This is the new shape of invisible decisions. They’re not buried in Slack or lost in meeting notes. They’re locked in AI conversation threads, and the volume is growing because human–AI planning sessions are now where most design thinking happens.

When priorities shift (and they always shift), teams can’t trace which decisions are affected, which work items depend on them, or what downstream consequences follow. Re-prioritisation becomes chaos because the decision graph was never explicit.
A developer’s AI agent picks up a ticket to build a caching layer. It doesn’t know the team decided last week to move from Redis to Valkey. It builds the Redis implementation. The PR gets rejected. The agent’s work is wasted. The developer’s afternoon is wasted reviewing code that should never have been written.

Institutional knowledge decays

Every team has accumulated knowledge about how things work: deployment procedures, coding conventions, architectural boundaries, security requirements, design principles. This knowledge lives in READMEs that were accurate six months ago, in wiki pages nobody maintains, and in the heads of senior engineers who haven’t updated the onboarding docs since they were onboarded themselves.

For human developers, this is friction. For AI agents, it’s fatal. An agent doesn’t have tribal knowledge. It can’t ask the person sitting next to it. When it loads a stale document that says “deploy with kubectl apply” but the team moved to ArgoCD three sprints ago, the agent follows the document. Confidently. Incorrectly.
The testing conventions document says to mock the database. The team stopped doing that after a production incident where mocked tests passed but the real migration failed. A new AI agent reads the document, mocks the database, writes tests that pass against the mock, and the team ships a broken migration. Again.

No coordination between agents

When two AI agents work on related parts of a system simultaneously (and this is increasingly common), there is no mechanism for them to share context. Agent A modifies a shared interface. Agent B, working from an outdated understanding of that interface, produces code that won’t compile. Neither agent knows the other exists.

This isn’t a hypothetical. It’s happening right now in every team running multiple AI agents against a shared codebase. The agents are individually competent and collectively incoherent.

The cost is staggering, and hidden

These failures don’t show up as a line item. They show up as:
  • PRs that get rejected because the agent didn’t know about a recent decision
  • Duplicated work when two agents solve the same problem differently
  • Debugging sessions caused by stale documentation
  • Onboarding time for new team members (human or AI) that stretches from days to weeks
  • The slow, invisible drift of a codebase away from its own architectural principles
What if the system that held your decisions, your work, and your institutional knowledge was the same system your AI coding agents read from before writing a single line of code?

That’s Memex AI.

Join the waitlist

Memex AI is in early access. If any of this sounds familiar, come build with us: request access at memex.ai.