New research from data privacy firm Incogni has raised fresh concerns about how major generative AI chatbots handle personal and business data.

The study warns that large language models developed by the likes of Meta, Google and Microsoft collect sensitive information and share it with third parties — often without adequate transparency or meaningful user control.

Incogni’s analysis of leading AI platforms, including Google Gemini, Meta AI, DeepSeek, Pi.ai and Microsoft Copilot, looked across 11 subcategories in three key areas: how user data is used in model training, the transparency of each platform’s privacy practices, and the scope of data collection and third-party sharing.

The results show that these tools gather a wide range of data. This includes names, email addresses, phone numbers, precise location information and, in some cases, physical addresses. Few of the platforms provide clear mechanisms for businesses or individuals to opt out of having their data used for AI training.

While regulations such as GDPR grant individuals the right to request data erasure, it remains unclear how personal information can, in practice, be removed from a trained machine learning model.

As a result, many companies are not currently obligated, or technically able, to remove such data after the fact.

Contact details or proprietary business information may become embedded in a model’s training data, potentially without the user’s explicit knowledge or consent.

The study highlights the risks for enterprises. Employees using generative AI tools to draft reports, emails or code may inadvertently expose proprietary information, which could end up embedded in AI training datasets.

This introduces significant compliance, confidentiality and competitive risks, particularly for organisations handling sensitive client data or intellectual property.

Key findings from the report include Meta AI sharing names and contact details with external partners, Claude disclosing email addresses and app interactions to third parties, and Grok (xAI) potentially sharing user-uploaded images.

Microsoft’s privacy policy suggests user prompts may be shared with advertising partners.

While OpenAI’s ChatGPT was noted in the report for clearer privacy policies, experts stress that all generative AI platforms require cautious and informed use when it comes to sensitive data.

“Many businesses remain unaware that routine use of AI tools could see internal data reused or shared in ways that undermine confidentiality and compliance obligations,” said Darius Belejevas, head of Incogni. “The lack of transparency and opt-out options creates real exposure for firms.”

The report calls for stronger safeguards, clearer privacy disclosures, and practical ways for organisations to prevent their data from being retained or reused by AI platforms.

For businesses adopting AI at scale, robust internal policies and supplier due diligence are now critical to managing data security and regulatory risk.
