Artificial intelligence (AI) is rapidly evolving, and while chatbots powered by AI can be convenient for answering questions, making recommendations, or summarizing documents, there’s an important factor to keep in mind: what you share might not always remain private. Many popular AI models, like OpenAI’s ChatGPT, Google’s Gemini, or Meta’s AI services, are constantly refined using real interactions, which may include the conversations people have with these bots.
For example, when you seek advice about personal issues such as health, finances, or sensitive topics, or if you’re uploading proprietary information like a company report, the content you share could potentially be stored and used for AI training purposes. And in some cases, this data could even be reviewed by human moderators to enhance the AI’s accuracy. So, the next time you ask your chatbot about an embarrassing medical issue or share confidential information, consider that your interaction may live beyond just that one conversation.
The AI models that fuel these chatbots have often been trained on massive data sets scraped from sources across the internet, including blogs, social media, and news articles. This scraping has been controversial, largely because of the lack of explicit consent from the people whose data was collected. In some cases, removing data once it has been used for training can be nearly impossible due to the opaque nature of the AI development process. Copyright issues have also arisen from this widespread data collection, as many AI models may inadvertently ingest copyrighted material without proper permission from the content creators. One notable example is the lawsuit Getty Images filed against Stability AI, the company behind the AI art generator Stable Diffusion, alleging that Stability AI unlawfully used millions of copyrighted images from Getty’s library to train its model without permission. Given all this, it is worth examining the privacy stances of these chatbots, especially the popular ones, and their opt-out policies.
Protecting Your Data: Opting Out of AI Training
Privacy Stances of Six Popular Chatbots: Opt-Out Policy Table
| Chatbot | Opt-Out Availability |
|---|---|
| Google Gemini | Yes; users can stop future chats from being recorded, but all chats are still stored for 72 hours regardless. |
| Meta AI | Yes, but only for users in the EU and UK; users elsewhere can submit a request form through Facebook. |
| Microsoft Copilot | No; personal users cannot opt out. |
| OpenAI ChatGPT | Yes; opt out via the Data Controls settings. |
| Grok | Yes; opt out via settings in a desktop browser. |
| Claude | Not needed; conversations are used for training only with explicit opt-in consent. |
Thankfully, there are ways to prevent your future interactions with some chatbots from being used for AI training. While it’s not always a guaranteed option with every service, many of the major platforms offer users the ability to opt out of having their data included in ongoing training processes. Let’s break down how you can take control of your data privacy across some of the most popular chatbot platforms.
1. Google Gemini
Google’s Gemini chatbot, which is designed to assist with tasks ranging from summarizing documents to answering queries, does store conversations by default to help train its AI models. Users who are 18 years or older have their conversations saved for a default period of 18 months, but this setting can be adjusted in the privacy controls.
Additionally, human reviewers at Google can access these conversations to improve the AI model’s accuracy. Google explicitly warns users not to share any confidential information with the chatbot that they wouldn’t want others to see, as it’s possible for a human reviewer to come across it during their evaluation process. Conversations stored for human review are saved separately from general chat logs.
Opting out: If you want to stop Google from using your Gemini conversations for AI training, you can go to the Gemini website and click on the “Activity” tab. From there, you can toggle the setting to turn off recording for future chats, or you can delete all previous conversations. However, it’s worth noting that conversations selected for human review will not be deleted and will continue to be stored separately. Furthermore, Google will continue to store all chats for 72 hours, even if you’ve opted out, in order to provide the service and manage user feedback.
For mobile users on iPhone or Android devices, Gemini’s help page details the process to turn off data collection through the app.
2. Meta AI
Meta (formerly Facebook) has integrated AI-powered chatbots into its suite of apps, including Facebook, Instagram, and WhatsApp. These chatbots are trained on information shared on Meta’s platforms, such as social media posts and photos along with their captions. However, Meta says that private messages exchanged with friends and family on WhatsApp or Messenger are not included in this training.
Meta’s AI models are also trained using publicly available information scraped from across the web, which raises further privacy concerns. If you’re a resident of the European Union or the United Kingdom, where privacy regulations are stricter, you have the right to object to your data being used for training AI systems. To do this, you can go to Meta’s Facebook privacy page, click on “Other Policies and Articles” on the left-hand side, and then find the section on generative AI. From there, you can fill out a form to exercise your right to object. After submitting the form, you should receive an email from Meta confirming that your request has been reviewed and honored, ensuring your data will not be used for training purposes going forward.
Unfortunately, for those living in the United States and other regions without robust data privacy laws, this option is unavailable. However, Meta does provide a form where users can request that data scraped by third parties not be used to train AI systems. It’s important to know that Meta will review these requests on a case-by-case basis, and submitting the form does not guarantee your data will be excluded from training. Additionally, the process is somewhat complicated, requiring you to provide evidence, such as a screenshot of the conversation that includes your personal information.
3. Microsoft Copilot
Microsoft’s Copilot chatbot integrates AI assistance into several of its services, including Word, Excel, and other Office apps. However, for personal users, there’s currently no way to opt out of having your interactions used to improve Copilot’s algorithms. The only thing you can do is delete your interaction history through your Microsoft account’s settings and privacy page. Once there, find the “Copilot interaction history” or “Copilot activity history” drop-down menu, where you can choose to delete your past chats. Note, however, that deleting your interaction history does not stop future conversations from being recorded and used for AI training.
4. OpenAI’s ChatGPT
OpenAI’s ChatGPT is one of the most widely used chatbot platforms, and it allows users to opt out of their conversations being used for training. To do this, you need to have an OpenAI account. Once logged in, go to the settings menu and navigate to the Data Controls section. Here, you can disable the setting to “Improve the model for everyone.” If you don’t have an OpenAI account, you can still find this option by clicking on the small question mark at the bottom right of the web page, which will direct you to the settings menu.
Even if you opt out, your conversations will still appear in your chat history for 30 days, but they won’t be used for training unless there’s a need to review them for abuse or other policy violations. This is an important distinction, as OpenAI may still access conversations temporarily to ensure compliance with its terms of service, especially if a conversation is flagged.
On the ChatGPT Android and iOS apps, the same data control options are available, allowing users to stop their chats from being used for AI training across different devices.
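One alternative worth knowing about, for developers and technically inclined users: OpenAI has stated that content sent through its API is not used to train its models by default, unlike conversations in the consumer ChatGPT apps. The minimal sketch below, using OpenAI’s official Python SDK, shows what such a request looks like; the model name and prompt are illustrative placeholders, not specific recommendations, and you would need your own API key.

```python
# Minimal sketch: sending a prompt through the OpenAI API rather than the
# ChatGPT web app. OpenAI states that API traffic is not used for model
# training by default, so no opt-out toggle is involved on this path.
# The model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute the model you use
    messages=[{"role": "user", "content": "Summarize the attached notes."}],
)
print(response.choices[0].message.content)
```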
5. Grok
Elon Musk’s AI chatbot, Grok, is integrated with his social media platform X (formerly Twitter). By default, Grok is set to train on user data from X, including posts, interactions, inputs, and results. This change wasn’t widely publicized and only became known after some X users noticed the new setting in July 2024.
To opt out of having your X data used for Grok’s training, you’ll need to access the platform through a desktop browser (the mobile app currently doesn’t support this feature). In the settings, navigate to “Privacy and Safety” and scroll down until you find the section labeled “Grok.” Here, you can uncheck the box to stop your data from being used for training. Additionally, you can delete any conversation history with Grok, though again, this feature is only available on the desktop version of the site.
6. Claude (Anthropic AI)
Anthropic, the company behind the Claude chatbot, has taken a different approach: it does not use personal data for training by default. Claude is not trained on any individual’s conversations unless the user gives explicit permission for a specific interaction to be used, either by rating a response with a thumbs-up or thumbs-down, or by contacting Anthropic via email to approve the use of their data for training purposes.
However, if a conversation is flagged for a safety review, it could be used to improve the AI’s ability to enforce rules and prevent harmful content. In such cases, data flagged for review may still be included in training processes to help the system identify and prevent problematic interactions in the future.
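The same developer-side caveat applies here: Anthropic has stated that it does not train on API inputs and outputs by default. As a rough sketch under that assumption, a query routed through Anthropic’s official Python SDK looks like the following; the model name and prompt are again illustrative placeholders.

```python
# Minimal sketch: querying Claude through Anthropic's API, which Anthropic
# states is not used for model training by default.
# The model name and prompt below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; substitute the model you use
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize the attached notes."}],
)
print(message.content[0].text)
```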
The Importance of Data Privacy
As the adoption of AI chatbots continues to expand, it’s crucial to understand how your data might be used. While many companies offer ways to opt out of AI training, few exclude your data by default, and the process of ensuring your conversations remain private can be cumbersome or opaque. The best way to safeguard sensitive information is to be mindful of what you share with these chatbots, and to regularly review the privacy settings and data control options provided by the platform you’re using.
Although data privacy regulations are becoming stricter in some regions, like the European Union and the United Kingdom, users in other parts of the world may not have the same level of control. In the absence of comprehensive data protection laws, it’s up to individuals to take proactive steps to safeguard their information when interacting with AI systems. The key takeaway: always think twice before sharing personal or confidential data with a chatbot, especially if you’re unsure whether the platform has adequate safeguards in place to protect your privacy.