This guide covers how your users’ data flows through the Mindset AI platform when you integrate with MCP (Model Context Protocol) servers. It addresses the questions we hear most from clients evaluating or implementing MCP integrations: what gets transmitted, what gets stored, how long we keep it, and where third-party exposure risks exist. Whether you’re deploying MCP servers in your own infrastructure or registering third-party MCP services, you’ll find clear answers here about your responsibilities and ours.

How Data Moves Through the Platform

When a Mindset AI agent interacts with an MCP server, data flows through a pipeline with security controls at each stage. Here’s what happens at every step.

1. How your agent calls an MCP server

When a user interacts with a Mindset AI agent, the agent may determine it needs to call an external MCP server to fulfil the request — for example, a RAG (Retrieval-Augmented Generation) MCP server to fetch context from your knowledge base, or a tool MCP server to execute an action like retrieving user data or triggering a workflow.

What gets transmitted from Mindset AI to the MCP server:
  • Tool name and parameters
  • Authentication credentials — API key via Bearer token
  • User context headers: x-user-id, x-app-uid, x-session-tags
Mindset AI doesn’t send the user’s original natural language query to the MCP server as a separate field. However, the agent generates tool parameters based on the user’s input, and those parameters may contain the user’s exact words or close paraphrases — for example, a natural language query passed as a search parameter. Treat user input as potentially present in tool parameters and design your MCP servers accordingly.
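As a concrete sketch, here is roughly what your MCP server might receive on a tool call. The header names are the ones listed above; the request body follows the standard MCP JSON-RPC tools/call shape, and the tool name and arguments are hypothetical:

```python
# Hypothetical illustration of an inbound MCP tool call.
# Header names are those Mindset AI documents; values are examples.
headers = {
    "Authorization": "Bearer <api-key>",  # API key via Bearer token
    "x-user-id": "user-123",
    "x-app-uid": "app-456",
    "x-session-tags": "support,beta",
}

# Body follows the MCP JSON-RPC "tools/call" method.
body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_knowledge_base",  # hypothetical tool name
        "arguments": {
            # Tool parameters may echo the user's exact words:
            "query": "how do I reset my password?",
        },
    },
}
```

Note how the arguments field can carry the user's verbatim input even though no separate "original query" field exists — which is why your server should treat parameters as potentially containing user data.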
Security at this stage: All communication within the Mindset AI platform uses HTTPS. For communication between the Mindset AI platform and your MCP server, we use the transport defined by the MCP server URL you register.
Make sure your MCP server endpoints use HTTPS to protect data in transit, including authentication credentials and user identity headers.
Your MCP server will be deployed in one of two ways:
  • Your own infrastructure (e.g. a Google Cloud Run instance) — you have full control
  • A third-party MCP server you’ve registered in the Mindset AI platform (e.g. an external API service or SaaS platform)
When you use third-party MCP servers, you’re responsible for vetting that provider’s security and privacy practices. Data sent to third-party MCP servers is subject to that provider’s data handling policies — review their data processing agreements (DPAs) carefully. Mindset AI doesn’t control or guarantee third-party MCP server data handling.

2. What the MCP server returns

The MCP server processes the request using your data sources — databases, APIs, knowledge bases — and returns a structured response.

What gets returned:
  • An acknowledgement that an action was completed (e.g. “Booking updated successfully”)
  • Output data retrieved from your systems (e.g. user profile details, search results, weather data)
For Mindset AI-built MCP servers, responses follow a recommended format: {status, message, data}. Client-built and third-party MCP servers may use different response structures. The response travels back to Mindset AI over the same channel as the request. Your MCP server controls what data gets exposed — you can implement fine-grained access controls, filtering, and redaction in your MCP server logic.
Filter out sensitive data (PII, credentials, internal IDs) before returning a response. Follow the principle of least privilege — only return what the agent actually needs.
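One way to apply that filtering, sketched in Python with an illustrative blocklist of sensitive keys and the recommended {status, message, data} envelope:

```python
# Illustrative blocklist — extend to match your own data classification.
SENSITIVE_KEYS = {"ssn", "password", "api_key", "internal_id"}

def build_response(record: dict) -> dict:
    """Redact sensitive fields, then wrap the result in the
    {status, message, data} envelope recommended for
    Mindset AI-built MCP servers."""
    safe = {k: v for k, v in record.items() if k.lower() not in SENSITIVE_KEYS}
    return {"status": "success", "message": "Profile retrieved", "data": safe}

resp = build_response({
    "name": "Ada",
    "email": "ada@example.com",
    "internal_id": "u-991",   # stripped before the response leaves the server
    "ssn": "000-00-0000",     # stripped before the response leaves the server
})
```

A blocklist is the simplest approach; an allowlist (only returning named fields) enforces least privilege more strictly.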

3. What gets stored and why

Once the MCP server responds, Mindset AI stores both the request and the response in the user’s agent conversation history.

What we store:
  • The original MCP tool call (tool name, parameters, timestamp)
  • The full MCP server response (status, message, data returned)
  • Conversation context (user messages, agent responses)
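Illustratively, a single stored entry might look like the following — the field names here are hypothetical, since only the stored categories above are documented, not the schema:

```python
from datetime import datetime, timezone

# Hypothetical shape of one conversation-history entry; field names
# are illustrative, not the platform's actual Firestore schema.
history_entry = {
    "tool_call": {
        "tool_name": "search_knowledge_base",
        "parameters": {"query": "how do I reset my password?"},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "tool_response": {
        "status": "success",
        "message": "2 results found",
        "data": [{"title": "Resetting your password"}],
    },
    "conversation_context": {
        "user_message": "How do I reset my password?",
        "agent_response": "Here are the steps...",
    },
}
```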
Why we store it:
Reason | Detail
Context continuity | The agent needs historical context to maintain coherent, multi-turn conversations
Debugging and auditing | Enables troubleshooting of agent behaviour and MCP integrations
User experience | Powers features like revisiting prior interactions and reviewing thread history
Where we store it: We store conversation history in Mindset AI’s secure Google Cloud Firestore database. All data is encrypted at rest using Google-managed encryption keys, isolated per client tenant, and protected by IAM roles and application-level permissions. Cross-client data access isn’t possible.

How long we keep it: We retain conversation history indefinitely to support ongoing agent interactions and context continuity. We only delete data when:
  • An end user deletes their profile
  • Your organisation terminates its Mindset AI subscription
  • You make a specific deletion request
You can request deletion of specific conversations or user data at any time.

4. How we process data with LLMs

To generate the next agent response, Mindset AI sends relevant conversation context — including MCP call and response data — to a Large Language Model (LLM) provider.

What gets sent to the LLM:
  • Recent conversation history, which includes any MCP server response data from the current conversation
  • System prompts and instructions for the agent’s behaviour
Our privacy commitments here are non-negotiable: We only work with LLM providers where we can explicitly opt out of using our API interactions for future model training. We’re opted out at every LLM provider we currently support — including OpenAI, Anthropic, and Google Vertex AI. That means your data sent via API is:
  • Not used to train or improve future models
  • Not accessible to other customers or for other purposes
LLM providers may retain request data briefly for abuse monitoring and trust and safety purposes, in line with their respective data processing agreements. This is standard practice across enterprise API providers and is distinct from training.
All LLM API calls go over HTTPS, and we only work with vetted LLM providers with contractual data protection agreements (DPAs) in place.

Data Privacy at a Glance

Data location | What data | Retention | Third-party access
Your MCP server | Your operational data (databases, knowledge bases) | You control it | None — client-controlled infrastructure
Third-party MCP server (if registered) | Data sent to the external service | Per that provider’s policy | Subject to that provider’s data handling — you must vet them
Mindset AI conversation history | MCP requests/responses, user messages, agent responses | Indefinite (until profile deletion, subscription termination, or explicit deletion request) | None — isolated per tenant
LLM provider (during processing) | Conversation context, MCP results | Not used for training; providers may briefly retain for abuse monitoring per their DPAs | None — we’re opted out of training

Security Safeguards

Here’s what we have in place across every layer of the Mindset AI platform:
  • Encryption in transit: All data transmission within the Mindset AI platform and to LLM providers uses HTTPS. For MCP server communication, the transport depends on the URL you register — make sure your MCP server endpoints use HTTPS.
  • Mutual authentication: Your MCP servers validate Mindset AI’s identity via API keys. We validate user identity before making any MCP calls.
  • User identity headers: We pass user identity headers (x-user-id, x-session-tags) on every MCP request. Your MCP server can use these to enforce user-level permissions and role-based access control. Inspecting these headers and applying appropriate data filtering is your MCP server’s responsibility.
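A minimal sketch of such a permission check in Python — the role map, its lookup table, and the tool names are all hypothetical stand-ins for whatever identity store your MCP server actually uses:

```python
# Hypothetical role-based access control driven by the x-user-id header
# Mindset AI sends on every MCP request.
ROLE_PERMISSIONS = {
    "admin": {"get_user_profile", "update_booking"},
    "viewer": {"get_user_profile"},
}

# In practice you would look roles up in your own IdP or database.
USER_ROLES = {"user-123": "viewer"}

def authorize(headers: dict, tool_name: str) -> bool:
    """Return True only if the calling user's role permits this tool."""
    user_id = headers.get("x-user-id")
    role = USER_ROLES.get(user_id)
    # Missing or unknown users get an empty permission set (deny by default).
    return tool_name in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default matters here: a request with no x-user-id header, or an unrecognised user, should fail authorization rather than fall through to a shared permission set.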
  • Least-privilege responses: Your MCP servers should only return data necessary for the agent’s response. Less data exposed means less risk.
  • Tenant isolation: We isolate every client’s data in our infrastructure. Cross-client data access isn’t possible.
  • No training on your data: Every LLM provider we work with is contractually prohibited from using customer data for model training.
  • Audit logging: MCP tool calls (request and response) are persisted in two places: in the Mindset AI conversation history in Firestore (for context continuity and debugging), and as usage events emitted via Pub/Sub to BigQuery (for analytics and audit). The BigQuery dataset has its own access controls, separate from the application layer.

Common Questions

Can other Mindset AI clients access my data?
No. Every client’s data lives in isolated Firestore collections with strict IAM and application-level access controls. Client-built MCP servers run in your own infrastructure. Some Mindset AI-provided system tools (e.g. widget creation) are shared services, but they don’t access or expose client-specific data from other tenants.
Is my data used to train LLMs?
No. We’re opted out of training at all supported LLM providers. Data sent via API isn’t retained or used for model improvement.
Can Mindset AI access my underlying data sources?
Your MCP server runs in your infrastructure (or an isolated environment). Mindset AI can only access what the MCP server explicitly returns — we can’t reach your underlying data sources directly. To reduce your risk: rotate API keys regularly, isolate your network, validate all inputs, and monitor access logs.
Could sensitive data end up in Mindset AI’s conversation history?
Yes, if your MCP server returns it. We receive whatever the MCP server sends back, so implement filtering and redaction in your MCP server logic to avoid exposing sensitive data unnecessarily. For example: return masked card numbers or last 4 digits rather than full values.
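A minimal masking helper along those lines (a sketch, not a prescribed implementation):

```python
def mask_card(card_number: str) -> str:
    """Keep only the last 4 digits of a card number, so the full value
    never reaches Mindset AI's conversation history."""
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return "**** **** **** " + digits[-4:]

masked = mask_card("4111 1111 1111 1234")  # "**** **** **** 1234"
```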
How long do you retain conversation data?
We retain conversation history indefinitely to support ongoing agent functionality. Automatic deletion only happens when an end user deletes their profile or your organisation terminates its subscription. You can request manual deletion of specific conversations or user data at any time by contacting Mindset AI support.
How should I vet a third-party MCP server?
When you register a third-party MCP server in the Mindset AI platform, you’re responsible for vetting that provider. Here’s what to check:
  • Review their Data Processing Agreement (DPA) for data handling, retention, and sub-processor policies
  • Verify they meet your organisation’s compliance requirements (SOC 2, GDPR, etc.)
  • Assess what data will be sent to them and whether that aligns with your data classification policies
Mindset AI can’t control or guarantee third-party MCP server practices — this is a shared responsibility model.

Recommendations for Your MCP Server

1. Filter your responses
Only return data the agent actually needs. Redact or mask sensitive fields like PII and credentials.

2. Enforce least privilege
Use the x-user-id header to apply user-level permissions in every tool. Authorization checks in your tools are the final, authoritative security layer.

3. Rotate API keys regularly
Store keys in a secret manager (e.g. Google Secret Manager) and rotate on a schedule. Don’t hardcode credentials in your codebase.

4. Monitor your access logs
Track Mindset AI platform requests to catch anomalies or unauthorised access attempts early.

5. Deploy in isolated environments
Hosting your MCP servers in your own infrastructure (e.g. Google Cloud Run) gives you full control over data access and reduces exposure.

6. Review our DPA
Make sure Mindset AI’s data processing agreement aligns with your organisation’s privacy requirements before going to production.