Key capabilities
Low-latency responses
Each session loads the agent’s configuration once: prompt, tool schemas, and conversation context. Every message a user sends after that goes straight to the LLM, with no server-side setup between the user and the model.
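A minimal sketch of this lifecycle, assuming a hypothetical session object (the `AgentConfig` shape, `startSession`, and `send` names are illustrative, not the real SDK surface): configuration is loaded once, and every later message reuses it with no setup step in between.

```typescript
// Illustrative session lifecycle; all names are assumptions, not the SDK API.
interface AgentConfig {
  systemPrompt: string;
  toolSchemas: Record<string, object>;
  context: string[]; // prior conversation turns
}

// Loaded once per session; no further setup round-trips after this point.
function startSession(config: AgentConfig) {
  return {
    // Each message reuses the cached config and goes straight to the model.
    send(message: string): string[] {
      return [config.systemPrompt, ...config.context, message];
    },
  };
}
```

The point of the sketch is the shape of the flow: everything expensive happens in `startSession`, so `send` is just "assemble and go".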
Live interactive elements
Agents can pause mid-conversation to present choices, request a confirmation, run a quiz, or guide a sign-up flow. Because the orchestration runs locally, the UI for these interactions appears the moment the agent decides to show it. There’s no network hop between the agent’s question and the user’s click.
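One way to picture a local interactive element is a promise that the UI resolves in-process: the agent yields a question, the choices render immediately, and the user's click settles the promise without any network round-trip. This is a sketch under assumed names (`presentChoices`, `Choice`), not the SDK's actual interface.

```typescript
// Hypothetical local interaction: the agent pauses on a promise that the
// in-page UI resolves directly when the user clicks a choice.
type Choice = { id: string; label: string };

function presentChoices(choices: Choice[]) {
  let resolve!: (id: string) => void;
  const answer = new Promise<string>((r) => {
    resolve = r;
  });
  // The UI renders `choices` the moment this returns, and calls `resolve`
  // with the clicked choice's id -- no server hop in between.
  return { choices, answer, resolve };
}
```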
Continuous capability delivery
Agent behaviour is shaped by small, composable rules delivered to the SDK at session start. When we ship new capabilities (multi-step plans, user memory, policy updates, new tool types), your customers pick them up automatically on their next session. No API changes or engineering coordination required on your side.
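The rule model above can be sketched as a list of small objects merged into the agent's capability set at session start; whatever list the SDK receives that day is what the session runs with. The `Rule` shape and rule names here are assumptions for illustration.

```typescript
// Hypothetical composable rules, applied once when a session begins.
type Rule = { name: string; apply: (caps: Set<string>) => void };

// Newly shipped rules simply appear in the delivered list; the next session
// picks them up with no API change on the integrator's side.
const delivered: Rule[] = [
  { name: "multi-step-plans", apply: (c) => c.add("planning") },
  { name: "user-memory", apply: (c) => c.add("memory") },
];

function buildCapabilities(rules: Rule[]): Set<string> {
  const caps = new Set<string>();
  for (const rule of rules) rule.apply(caps);
  return caps;
}
```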
How it works
Mindset AI splits the agent stack into three tiers.
Display tier
The chat UI in your user’s browser. Renders streamed responses, handles input, and shows interactive elements. The UI lives inside a Shadow DOM so it doesn’t conflict with your site’s styles.
Orchestration tier
The agent graph itself, running in the browser alongside the display tier. Decides which tools to call, assembles the system prompt, and runs the reasoning loop.
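The reasoning loop can be sketched as: ask the model for the next step, run the chosen tool, feed the result back, and stop when the model answers directly. The `step` callback stands in for the LLM call; it and the rest of the names are assumptions, not the SDK's internals.

```typescript
// Hedged sketch of an orchestration loop. `step` is a stand-in for the model.
type Step = { tool?: string; args?: unknown; answer?: string };

function runLoop(
  step: (history: string[]) => Step,
  tools: Record<string, (args: unknown) => string>,
  maxTurns = 5,
): string {
  const history: string[] = [];
  for (let i = 0; i < maxTurns; i++) {
    const s = step(history);
    if (s.answer !== undefined) return s.answer; // model answered directly
    const result = tools[s.tool!](s.args);       // run the chosen tool
    history.push(`${s.tool} -> ${result}`);      // feed the result back
  }
  return "max turns reached";
}
```

Because this loop runs in the browser alongside the display tier, each iteration only crosses the network for the model call itself, not for tool dispatch or UI updates.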
Current limitations
The SDK requires a modern browser: Chrome 140+, Firefox 140+, Safari 17+, or Edge 140+. Chrome, Firefox, and Edge auto-update, so most users are on a supported version by default.