This guide covers how your users’ data flows through the Mindset AI platform when you integrate with MCP (Model Context Protocol) servers. It addresses the questions we hear most from clients evaluating or implementing MCP integrations: what gets transmitted, what gets stored, how long we keep it, and where third-party exposure risks exist. Whether you’re deploying MCP servers in your own infrastructure or registering third-party MCP services, you’ll find clear answers here about your responsibilities and ours.

Documentation Index
Fetch the complete documentation index at: https://docs.mindset.ai/llms.txt
Use this file to discover all available pages before exploring further.
How Data Moves Through the Platform
When a Mindset AI agent interacts with an MCP server, data flows through a pipeline with security controls at each stage. Here’s what happens at every step.

1. How your agent calls an MCP server
When a user interacts with a Mindset AI agent, the agent may determine it needs to call an external MCP server to fulfil the request — for example, a RAG (Retrieval-Augmented Generation) MCP server to fetch context from your knowledge base, or a tool MCP server to execute an action like retrieving user data or triggering a workflow.

What gets transmitted from Mindset AI to the MCP server:
- Tool name and parameters
- Authentication credentials — API key via Bearer token
- User context headers: `x-user-id`, `x-app-uid`, `x-session-tags`
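The outgoing request described above can be sketched as follows. The header names (`Authorization`, `x-user-id`, `x-app-uid`, `x-session-tags`) come from the list above; the body shape is illustrative only, not a definitive wire format.

```python
import json

def build_mcp_request(api_key: str, user_id: str, app_uid: str,
                      session_tags: str, tool: str, params: dict) -> tuple[dict, str]:
    """Assemble headers and body for an outgoing MCP tool call (sketch)."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # API key via Bearer token
        "x-user-id": user_id,                  # user context headers
        "x-app-uid": app_uid,
        "x-session-tags": session_tags,
        "Content-Type": "application/json",
    }
    # Tool name and parameters travel in the request body.
    body = json.dumps({"tool": tool, "parameters": params})
    return headers, body

headers, body = build_mcp_request(
    api_key="sk-example", user_id="user-123", app_uid="app-456",
    session_tags="support,beta", tool="search_knowledge_base",
    params={"query": "refund policy"},
)
```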
The MCP server receiving this request can be:
- Your own infrastructure (e.g. a Google Cloud Run instance) — you have full control
- A third-party MCP server you’ve registered in the Mindset AI platform (e.g. an external API service or SaaS platform)
2. What the MCP server returns
The MCP server processes the request using your data sources — databases, APIs, knowledge bases — and returns a structured response.

What gets returned:
- An acknowledgement that an action was completed (e.g. “Booking updated successfully”)
- Output data retrieved from your systems (e.g. user profile details, search results, weather data)
Responses typically follow the shape `{status, message, data}`. Client-built and third-party MCP servers may use different response structures.
The response travels back to Mindset AI over the same channel as the request. Your MCP server controls what data gets exposed — you can implement fine-grained access controls, filtering, and redaction in your MCP server logic.
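The filtering and redaction mentioned above can be sketched as server-side logic that strips sensitive keys before the `{status, message, data}` response leaves your infrastructure. The field names here are hypothetical examples, not a required schema.

```python
# Example sensitive keys; in practice this would reflect your own
# data classification policy.
SENSITIVE_FIELDS = {"ssn", "date_of_birth", "home_address"}

def make_response(message: str, data: dict) -> dict:
    """Build a {status, message, data} response with sensitive keys removed."""
    safe_data = {k: v for k, v in data.items() if k not in SENSITIVE_FIELDS}
    return {"status": "ok", "message": message, "data": safe_data}

resp = make_response("Profile retrieved", {
    "name": "Alice", "ssn": "000-00-0000", "plan": "pro",
})
```

Because Mindset AI stores whatever your server returns (see the next step), redacting at this point keeps sensitive fields out of conversation history entirely.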
3. What gets stored and why
Once the MCP server responds, Mindset AI stores both the request and the response in the user’s agent conversation history.

What we store:
- The original MCP tool call (tool name, parameters, timestamp)
- The full MCP server response (status, message, data returned)
- Conversation context (user messages, agent responses)
Why we store it:

| Reason | Detail |
|---|---|
| Context continuity | The agent needs historical context to maintain coherent, multi-turn conversations |
| Debugging and auditing | Enables troubleshooting of agent behaviour and MCP integrations |
| User experience | Powers features like revisiting prior interactions and reviewing thread history |
This data is retained until one of the following occurs:
- An end user deletes their profile
- Your organisation terminates its Mindset AI subscription
- You make a specific deletion request
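Putting the stored items above together, one conversation-history entry can be pictured roughly like this. The record shape is illustrative, not Mindset AI’s actual storage schema.

```python
from datetime import datetime, timezone

def history_record(tool: str, params: dict, response: dict,
                   user_message: str, agent_message: str) -> dict:
    """Illustrative shape of one stored entry: the tool call (name,
    parameters, timestamp), the full response, and conversation context."""
    return {
        "tool_call": {
            "tool": tool,
            "parameters": params,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "tool_response": response,
        "context": {"user": user_message, "agent": agent_message},
    }

rec = history_record(
    tool="get_weather",
    params={"city": "London"},
    response={"status": "ok", "message": "Weather retrieved", "data": {"temp_c": 18}},
    user_message="What's the weather?",
    agent_message="It's 18°C in London.",
)
```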
4. How we process data with LLMs
To generate the next agent response, Mindset AI sends relevant conversation context — including MCP call and response data — to a Large Language Model (LLM) provider.

What gets sent to the LLM:
- Recent conversation history, which includes any MCP server response data from the current conversation
- System prompts and instructions for the agent’s behaviour
How the LLM provider handles this data:
- Not used to train or improve future models
- Not accessible to other customers or for other purposes
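Assembling the LLM request from the pieces above might look like this minimal sketch. The `max_turns` cap is an assumed illustration of sending only *recent* history, not a documented Mindset AI parameter.

```python
def build_llm_payload(system_prompt: str, history: list[dict],
                      max_turns: int = 10) -> list[dict]:
    """Sketch: combine the agent's system prompt with only the most
    recent conversation turns (which may include MCP response data)."""
    recent = history[-max_turns:]  # data minimisation: cap the context sent
    return [{"role": "system", "content": system_prompt}, *recent]

history = [{"role": "user", "content": f"msg {i}"} for i in range(20)]
payload = build_llm_payload("You are a helpful agent.", history, max_turns=10)
```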
Data Privacy at a Glance
| Data location | What data | Retention | Third-party access |
|---|---|---|---|
| Your MCP server | Your operational data (databases, knowledge bases) | You control it | None — client-controlled infrastructure |
| Third-party MCP server (if registered) | Data sent to external service | Per that provider’s policy | Subject to that provider’s data handling — you must vet them |
| Mindset AI conversation history | MCP requests/responses, user messages, agent responses | Indefinite (until profile deletion, subscription termination, or explicit deletion request) | None — isolated per tenant |
| LLM provider (during processing) | Conversation context, MCP results | Not used for training; providers may briefly retain for abuse monitoring per their DPAs | None — we’re opted out of training |
Security Safeguards
Here’s what we have in place across every layer of the Mindset AI platform:

- Encryption in transit
- Authentication
- Authorization
- Data minimization
- Tenant isolation
- Opt-out of LLM training
- Logging and audit
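As one illustration of the authentication safeguard, an MCP server can validate the Bearer-token API key on every incoming request. This is a minimal sketch, assuming the Bearer-token scheme described earlier; it is not Mindset AI’s actual implementation.

```python
import hmac

def authenticate(headers: dict, expected_key: str) -> bool:
    """Validate the Authorization: Bearer <key> header using a
    constant-time comparison to avoid timing side channels."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return hmac.compare_digest(auth[len("Bearer "):], expected_key)
```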
Common Questions
Can my data leak to other Mindset AI clients?
No. Conversation history and MCP data are isolated per tenant and are not accessible to other customers.
Is my data used to train LLMs?
No. Mindset AI is opted out of provider training, so data sent to LLM providers is not used to train or improve future models.
What happens if an MCP server is compromised?
Exposure is limited to what that server receives: the tool call, the user context headers, and its API key. Rotating API keys regularly (see the recommendations below) limits how long leaked credentials stay valid.
Can Mindset AI see sensitive data from our MCP server?
Mindset AI stores whatever your MCP server returns in conversation history, so your server logic is the place to filter and redact sensitive fields before responding.
How long does Mindset AI keep conversation history?
Indefinitely, until an end user deletes their profile, your organisation terminates its Mindset AI subscription, or you make a specific deletion request.
What if we use a third-party MCP server?
Data sent to that provider is handled under its own policies, so you are responsible for vetting it. Before registering a third-party MCP server, you should:
- Review their Data Processing Agreement (DPA) for data handling, retention, and sub-processor policies
- Verify they meet your organisation’s compliance requirements (SOC 2, GDPR, etc.)
- Assess what data will be sent to them and whether that aligns with your data classification policies
Recommendations for Your MCP Server
Filter your responses
Anything your MCP server returns is stored in conversation history, so apply filtering and redaction in your server logic and return only the data the agent actually needs.
Enforce least privilege
Use the `x-user-id` header to apply user-level permissions in every tool. Authorization checks in your tools are the final, authoritative security layer.

Rotate API keys regularly
Bearer-token API keys authenticate every request from Mindset AI, so rotate them on a schedule and revoke any key you suspect has been exposed.
Monitor your access logs
Review which tools are called, by whom, and how often; unexpected calls or unusual volumes can indicate misuse or leaked credentials.
Deploy in isolated environments
Run your MCP server in its own environment (e.g. a dedicated Google Cloud Run instance) so that a compromise cannot reach unrelated systems.
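The “enforce least privilege” recommendation above can be sketched as a per-tool authorization check keyed off the `x-user-id` header. The permission table and tool names here are hypothetical; in practice permissions would come from your own user store.

```python
# Hypothetical per-user permission table.
PERMISSIONS = {
    "user-123": {"read_profile"},
    "user-456": {"read_profile", "update_booking"},
}

def call_tool(headers: dict, tool: str) -> dict:
    """Final, authoritative check inside the tool: the x-user-id header
    decides whether this user may run this tool."""
    user_id = headers.get("x-user-id")
    if tool not in PERMISSIONS.get(user_id, set()):
        raise PermissionError(f"{user_id!r} may not call {tool!r}")
    return {"status": "ok", "message": f"{tool} executed", "data": {}}
```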