How Data Moves Through the Platform
When a Mindset AI agent interacts with an MCP server, data flows through a pipeline with security controls at each stage. Here’s what happens at every step.

1. How your agent calls an MCP server
When a user interacts with a Mindset AI agent, the agent may determine it needs to call an external MCP server to fulfil the request — for example, a RAG (Retrieval-Augmented Generation) MCP server to fetch context from your knowledge base, or a tool MCP server to execute an action like retrieving user data or triggering a workflow.

What gets transmitted from Mindset AI to the MCP server:
- Tool name and parameters
- Authentication credentials — API key via Bearer token
- User context headers: x-user-id, x-app-uid, x-session-tags
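To make the transmitted fields concrete, here is a minimal sketch of how such an outbound tool call could be assembled. The function name, endpoint shape, and example values are illustrative, not the actual Mindset AI implementation:

```python
import json

def build_mcp_request(tool_name, params, api_key, user_id, app_uid, session_tags):
    """Assemble headers and body for a hypothetical MCP tool call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # API key sent as a Bearer token
        "Content-Type": "application/json",
        "x-user-id": user_id,                  # user context headers
        "x-app-uid": app_uid,
        "x-session-tags": session_tags,
    }
    body = json.dumps({"tool": tool_name, "params": params})
    return headers, body

headers, body = build_mcp_request(
    "search_knowledge_base", {"query": "refund policy"},
    api_key="example-key", user_id="user-123",
    app_uid="app-456", session_tags="beta",
)
```

Your MCP server can read the user context headers on every request to scope its behaviour to the calling user.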
The MCP server you call can be either:
- Your own infrastructure (e.g. a Google Cloud Run instance) — you have full control
- A third-party MCP server you’ve registered in the Mindset AI platform (e.g. an external API service or SaaS platform)
When you use third-party MCP servers, you’re responsible for vetting that provider’s security and privacy practices. Data sent to third-party MCP servers is subject to that provider’s data handling policies — review their data processing agreements (DPAs) carefully. Mindset AI doesn’t control or guarantee third-party MCP server data handling.
2. What the MCP server returns
The MCP server processes the request using your data sources — databases, APIs, knowledge bases — and returns a structured response.

What gets returned:
- An acknowledgement that an action was completed (e.g. “Booking updated successfully”)
- Output data retrieved from your systems (e.g. user profile details, search results, weather data)
Responses typically follow the structure {status, message, data}. Client-built and third-party MCP servers may use different response structures.
The response travels back to Mindset AI over the same channel as the request. Your MCP server controls what data gets exposed — you can implement fine-grained access controls, filtering, and redaction in your MCP server logic.
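The redaction point above can be sketched in a few lines. This is a hypothetical server-side helper, not Mindset AI code; the field names and response wrapper are assumptions based on the {status, message, data} structure mentioned above:

```python
SENSITIVE_FIELDS = {"ssn", "card_number", "password"}  # illustrative deny-list

def redact(record):
    """Drop sensitive fields before they leave the MCP server."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

def make_response(records):
    """Wrap results in a {status, message, data} shaped response."""
    return {
        "status": "ok",
        "message": f"{len(records)} record(s) found",
        "data": [redact(r) for r in records],
    }

resp = make_response([{"name": "Ada", "ssn": "000-00-0000"}])
# resp["data"] == [{"name": "Ada"}] — the SSN never leaves the server
```

Because Mindset AI only ever sees what your server returns, redaction at this layer is the most reliable place to stop sensitive data entering conversation history.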
3. What gets stored and why
Once the MCP server responds, Mindset AI stores both the request and the response in the user’s agent conversation history.

What we store:
- The original MCP tool call (tool name, parameters, timestamp)
- The full MCP server response (status, message, data returned)
- Conversation context (user messages, agent responses)
Why we store it:

| Reason | Detail |
|---|---|
| Context continuity | The agent needs historical context to maintain coherent, multi-turn conversations |
| Debugging and auditing | Enables troubleshooting of agent behaviour and MCP integrations |
| User experience | Powers features like revisiting prior interactions and reviewing thread history |
Stored data is deleted when one of the following occurs:
- An end user deletes their profile
- Your organisation terminates its Mindset AI subscription
- You make a specific deletion request
4. How we process data with LLMs
To generate the next agent response, Mindset AI sends relevant conversation context — including MCP call and response data — to a Large Language Model (LLM) provider.

What gets sent to the LLM:
- Recent conversation history, which includes any MCP server response data from the current conversation
- System prompts and instructions for the agent’s behaviour
How LLM providers handle this data:
- Not used to train or improve future models
- Not accessible to other customers or for other purposes
LLM providers may retain request data briefly for abuse monitoring and trust and safety purposes, in line with their respective data processing agreements. This is standard practice across enterprise API providers and is distinct from training.
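The context assembly described in this section can be sketched as follows. The message shapes and roles are illustrative assumptions (loosely modelled on common chat-completion APIs), not the actual payload format Mindset AI uses:

```python
def build_llm_payload(system_prompt, history, mcp_results):
    """Assemble conversation context for a hypothetical LLM call."""
    messages = [{"role": "system", "content": system_prompt}]  # agent instructions
    messages += history                                        # recent user/agent turns
    for result in mcp_results:                                 # MCP responses from this conversation
        messages.append({"role": "tool", "content": str(result)})
    return {"messages": messages}

payload = build_llm_payload(
    "You are a helpful booking agent.",
    [{"role": "user", "content": "Update my booking"}],
    [{"status": "ok", "message": "Booking updated successfully"}],
)
```

Note that MCP response data enters the LLM payload only because it is part of the conversation context, which is why filtering at the MCP server (section 2) also limits what reaches the LLM provider.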
Data Privacy at a Glance
| Data location | What data | Retention | Third-party access |
|---|---|---|---|
| Your MCP server | Your operational data (databases, knowledge bases) | You control it | None — client-controlled infrastructure |
| Third-party MCP server (if registered) | Data sent to external service | Per that provider’s policy | Subject to that provider’s data handling — you must vet them |
| Mindset AI conversation history | MCP requests/responses, user messages, agent responses | Indefinite (until profile deletion, subscription termination, or explicit deletion request) | None — isolated per tenant |
| LLM provider (during processing) | Conversation context, MCP results | Not used for training; providers may briefly retain for abuse monitoring per their DPAs | None — we’re opted out of training |
Security Safeguards
Here’s what we have in place across every layer of the Mindset AI platform:

Encryption in transit
All data transmission within the Mindset AI platform and to LLM providers uses HTTPS. For MCP server communication, the transport depends on the URL you register — make sure your MCP server endpoints use HTTPS.
Authentication
Your MCP servers validate Mindset AI’s identity via API keys. We validate user identity before making any MCP calls.
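On the MCP server side, that API key check might look like the sketch below. The environment variable name and key value are hypothetical; the constant-time comparison is a standard hardening choice, not a documented Mindset AI requirement:

```python
import hmac
import os

# In production, load the key from a secret manager rather than code;
# "MINDSET_API_KEY" is an illustrative variable name.
EXPECTED_KEY = os.environ.get("MINDSET_API_KEY", "test-key")

def is_authenticated(authorization_header):
    """Validate the Bearer token presented with an incoming MCP call."""
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    presented = authorization_header[len("Bearer "):]
    # constant-time comparison avoids leaking key material via timing
    return hmac.compare_digest(presented, EXPECTED_KEY)
```

Reject the request before running any tool logic if this check fails.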
Authorization
User context headers (x-user-id, x-app-uid, x-session-tags) travel with every MCP call, so your MCP server can enforce per-user permissions inside each tool.
Data minimization
Your MCP servers should only return data necessary for the agent’s response. Less data exposed means less risk.
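One way to apply this principle is an allow-list filter: name the fields the agent needs and drop everything else, rather than trying to enumerate what is sensitive. A minimal sketch with hypothetical field names:

```python
def minimise(record, allowed_fields):
    """Return only the fields the agent actually needs (allow-list)."""
    return {k: record[k] for k in allowed_fields if k in record}

profile = {
    "name": "Ada",
    "email": "ada@example.com",
    "internal_notes": "do not expose",  # never needed by the agent
    "salary": 90000,
}
public = minimise(profile, ["name", "email"])
# public == {"name": "Ada", "email": "ada@example.com"}
```

Allow-lists fail safe: a newly added database column stays hidden until you explicitly expose it, whereas a deny-list silently leaks it.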
Tenant isolation
We isolate every client’s data in our infrastructure. Cross-client data access isn’t possible.
Opt-out of LLM training
Every LLM provider we work with is contractually prohibited from using customer data for model training.
Logging and audit
MCP tool calls (request and response) are persisted in two places: in the Mindset AI conversation history in Firestore (for context continuity and debugging), and as usage events emitted via Pub/Sub to BigQuery (for analytics and audit). The BigQuery dataset has its own access controls, separate from the application layer.
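The usage events mentioned above might be shaped something like the sketch below before being handed to a Pub/Sub publisher. Every field name here is an assumption for illustration; the actual event schema Mindset AI emits to BigQuery is not documented in this section:

```python
import json
import time
import uuid

def build_usage_event(tool_name, status, tenant_id):
    """Construct a hypothetical MCP usage event for audit/analytics."""
    return {
        "event_id": str(uuid.uuid4()),   # unique per event, for dedup in BigQuery
        "event_type": "mcp_tool_call",
        "tool": tool_name,
        "status": status,
        "tenant_id": tenant_id,          # supports per-tenant audit queries
        "timestamp": int(time.time()),
    }

event = build_usage_event("update_booking", "ok", "tenant-42")
payload = json.dumps(event).encode()  # bytes, as a Pub/Sub publisher expects
```

Keeping the audit stream separate from the application database, as described above, means audit access can be governed independently of runtime access.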
Common Questions
Can my data leak to other Mindset AI clients?
No. Every client’s data lives in isolated Firestore collections with strict IAM and application-level access controls. Client-built MCP servers run in your own infrastructure. Some Mindset AI-provided system tools (e.g. widget creation) are shared services, but they don’t access or expose client-specific data from other tenants.
Is my data used to train LLMs?
No. We’re opted out of training at all supported LLM providers. Data sent via API isn’t retained or used for model improvement.
What happens if an MCP server is compromised?
Your MCP server runs in your infrastructure (or an isolated environment). Mindset AI can only access what the MCP server explicitly returns — we can’t reach your underlying data sources directly. To reduce your risk: rotate API keys regularly, isolate your network, validate all inputs, and monitor access logs.
Can Mindset AI see sensitive data from our MCP server?
Yes, if your MCP server returns it. We receive whatever the MCP server sends back, so implement filtering and redaction in your MCP server logic to avoid exposing sensitive data unnecessarily. For example: return masked card numbers or last 4 digits rather than full values.
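The card-masking example above can be sketched in a few lines; the separator handling is an illustrative choice:

```python
def mask_card(card_number):
    """Return a card number with all but the last 4 digits masked."""
    digits = card_number.replace(" ", "").replace("-", "")  # strip common separators
    return "*" * (len(digits) - 4) + digits[-4:]

masked = mask_card("4242 4242 4242 4242")
# masked == "************4242"
```

Applying this inside your MCP server, before the response is returned, means the full number never appears in conversation history or LLM context.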
How long does Mindset AI keep conversation history?
We retain conversation history indefinitely to support ongoing agent functionality. Automatic deletion only happens when an end user deletes their profile or your organisation terminates its subscription. You can request manual deletion of specific conversations or user data at any time by contacting Mindset AI support.
What if we use a third-party MCP server?
When you register a third-party MCP server in the Mindset AI platform, you’re responsible for vetting that provider. Here’s what to check:
- Review their Data Processing Agreement (DPA) for data handling, retention, and sub-processor policies
- Verify they meet your organisation’s compliance requirements (SOC 2, GDPR, etc.)
- Assess what data will be sent to them and whether that aligns with your data classification policies
Recommendations for Your MCP Server
Filter your responses
Only return data the agent actually needs. Redact or mask sensitive fields like PII and credentials.
Enforce least privilege
Use the x-user-id header to apply user-level permissions in every tool. Authorization checks in your tools are the final, authoritative security layer.

Rotate API keys regularly
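A per-tool authorization check keyed on x-user-id might look like this sketch. The in-memory permission map is a stand-in for whatever permission store your system actually uses:

```python
# Hypothetical user -> resource permission map; in practice this would be
# a lookup against your own permission store or identity provider.
PERMISSIONS = {
    "user-123": {"booking-1", "booking-2"},
}

def can_access(headers, resource_id):
    """Final authorization check inside an MCP tool, keyed on x-user-id."""
    user_id = headers.get("x-user-id")
    return resource_id in PERMISSIONS.get(user_id, set())
```

Running this check inside every tool, rather than only at the transport layer, means a valid API key alone is never enough to read another user's data.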
Store keys in a secret manager (e.g. Google Secret Manager) and rotate on a schedule. Don’t hardcode credentials in your codebase.
Monitor your access logs
Track Mindset AI platform requests to catch anomalies or unauthorised access attempts early.
Deploy in isolated environments
Hosting your MCP servers in your own infrastructure (e.g. Google Cloud Run) gives you full control over data access and reduces exposure.