Short answer: If you want to stay compliant while using AI tools, you should first assess whether you intend to enter any personal or sensitive data into the model. If so, you need to understand how AI models – such as ChatGPT or other large language models (LLMs) – process information. In addition, you should implement technical measures before deploying the model, and set up workflows that align with GDPR. With the right preparation, routines, and safeguards in place, AI can be used responsibly without breaching data protection rules.
A new era with AI – but what about compliance?
In recent years, many companies have begun using AI tools such as ChatGPT, Gemini, and Copilot for everything from writing and analysis to code generation. But while the technology itself has evolved rapidly, the understanding of how to regulate and handle it – especially when it comes to personal data and accountability – has not kept up.
In this article, we take a closer look at how you can avoid breaching GDPR, the AI Act, or your company’s own compliance framework when using AI in practice.
Where is the compliance risk?
AI models like ChatGPT, Copilot, or Claude do not automatically generate sensitive content. The risk typically arises when the user – often with good intentions – enters personal data, confidential documents, or other sensitive information into the model in order to solve a task.
In these situations, you may be violating GDPR if the data is processed unlawfully. On the other hand, sharing sensitive information is sometimes necessary to get a meaningful or useful result from the model – and that places the data controller in a difficult position.
You may breach GDPR if you:
- Enter personal or sensitive information, such as health data, national identification numbers, religion, or sexual orientation.
- Use an AI tool without a data processing agreement, or without knowing how data is stored and handled.
- Lack a legal basis for processing, such as consent or legitimate interest.
- Fail to inform the individual that their data is being processed in this way.
Real-life examples of AI-related GDPR breaches
Here are a few examples of how things can go wrong when using AI in a professional setting:
- Uploading sensitive documents to AI: For instance, sending an internal email thread or meeting minutes into ChatGPT for summarisation – without noticing that it contains sensitive data about a customer or employee.
- Using an AI service without a processing agreement: Free tools often lack a data processing agreement, meaning you lose control over how data is stored, processed, or shared.
- No legal basis: If personal data is being processed without a valid legal ground, such as explicit consent or legitimate interest, it’s a breach of GDPR.
- Lack of transparency: If the individual (the data subject) has not been informed that their data will be sent through an AI model, this violates the core GDPR principle of transparency.
To reduce these risks, it is important to implement the right technical and organisational measures, to document decisions, and to build GDPR-compliant workflows for how AI tools are used in the organisation.
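One concrete technical measure is a pre-flight check that scans text for obvious identifiers before anyone pastes it into an AI tool. The sketch below is illustrative only and assumes just two very simple patterns (email addresses and Danish CPR-style numbers); a real setup would rely on a dedicated data-discovery tool with far broader coverage.

```python
import re

# Illustrative patterns only - real detection needs far broader coverage
# (names, addresses, health terms, etc.) and ideally a dedicated tool.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "national ID (CPR-style)": re.compile(r"\b\d{6}-\d{4}\b"),
}

def preflight_check(text: str) -> list[str]:
    """Return warnings for sensitive patterns found in the text."""
    warnings = []
    for label, pattern in PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            warnings.append(f"{len(hits)} possible {label}(s) found")
    return warnings

draft = "Summarise this: Jane (jane.doe@example.com, CPR 120589-1234) called about her claim."
for warning in preflight_check(draft):
    print("Do not send to an external AI tool:", warning)
```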
How to use AI in a GDPR-compliant way
To ensure GDPR compliance when using AI services, there are five key actions your organisation should take:
1. Use only services with documented data protection
If you’re processing personal data through an AI service, the processing must be properly documented and compliant with GDPR:
- Data processing agreement (DPA): There must be a formal agreement in place governing how the AI provider processes your data on your behalf. If the service doesn’t offer a DPA or cannot guarantee data localisation and processing within EU legal frameworks, you should not share sensitive data.
- Data transfers outside the EU: If the service relies on subprocessors in countries like the US, there must be valid safeguards in place, such as Standard Contractual Clauses (SCCs).
- Information security: The AI provider must be able to document technical and organisational safeguards – including encryption and access control.
- Accountability and oversight: As the data controller, you must be able to demonstrate that you’ve chosen a provider that complies with GDPR and that their security practices are reviewed on an ongoing basis.
2. Understand the service’s data policy
Before sharing any data with an AI service, you should review how sensitive data is processed. Key questions include:
- Is data stored? If inputs are logged, it means they are not just processed temporarily, but retained on the service’s servers – often for debugging, analytics, or optimisation purposes.
- Is the data used for training or product improvement? Sensitive or confidential data could in some cases leak into results shown to other users.
- How long is data stored? And do you have the right to have it deleted? This is a core GDPR requirement.
- Who has access to the data? This includes developers or subprocessors. As data controller, you’re responsible for ensuring all data processors and subprocessors adhere to GDPR.
3. Use sandbox models or enterprise solutions
If you intend to use AI in production settings involving sensitive or business-critical data, consider:
- On-premise or self-hosted models (e.g. through Azure, AWS, or EU-based providers)
- Enterprise versions that offer data isolation and advanced controls (e.g. OpenAI’s ChatGPT Enterprise or Microsoft Copilot with DLP integration)
- AI tools that run locally and by default do not store data (see the sketch after this list)
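If you go the local route, prompts never have to leave your own infrastructure. Below is a minimal sketch that assumes a model is served locally through an Ollama-style HTTP endpoint on localhost; the endpoint, model name, and response format are assumptions and should be adjusted to whichever local runtime you actually deploy.

```python
import json
import urllib.request

# Assumption: a model is hosted locally (e.g. via Ollama) at this endpoint,
# so prompts are processed on your own infrastructure, not sent to a cloud service.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model and return the generated text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    # Ollama-style endpoints return the generated text under the "response" key.
    return body.get("response", "")

print(ask_local_model("List three GDPR principles relevant when using AI tools."))
```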
4. Apply data minimisation and anonymisation
The fewer (and less sensitive) data points you share with the AI, the lower the risk. Make use of data minimisation and anonymisation by:
- Removing names, national IDs and other direct identifiers
- Applying pseudonymisation before sharing data with AI tools (see the sketch after this list)
- Scanning and cleaning up sensitive data across your systems before using AI
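As a simple illustration of pseudonymisation, the sketch below swaps direct identifiers for placeholders before text is shared and keeps the mapping locally, so the AI’s output can be re-identified afterwards. The patterns and placeholder format are assumptions for illustration; production use would need far more thorough detection.

```python
import re

# Illustrative only - real pseudonymisation needs broader detection (e.g. name recognition).
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CPR_PATTERN = re.compile(r"\b\d{6}-\d{4}\b")

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace direct identifiers with placeholders; return cleaned text and the local mapping."""
    mapping: dict[str, str] = {}

    def swap(match: re.Match, prefix: str) -> str:
        value = match.group(0)
        if value not in mapping:
            mapping[value] = f"[{prefix}_{len(mapping) + 1}]"
        return mapping[value]

    text = EMAIL_PATTERN.sub(lambda m: swap(m, "EMAIL"), text)
    text = CPR_PATTERN.sub(lambda m: swap(m, "ID"), text)
    return text, mapping

cleaned, key = pseudonymise("Contact jane.doe@example.com (CPR 120589-1234) about the renewal.")
print(cleaned)  # safer to share with an AI tool
print(key)      # kept locally so responses can be mapped back to the original identifiers
```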
5. Document your legal basis and purpose
Remember: your organisation remains the data controller – even when a third-party AI model handles the actual processing.
- Record what the AI is used for (a simple record format is sketched after this list)
- Assess risks (e.g. with a DPIA) if personal data is involved
- Ensure a clear legal basis for processing (e.g. consent, legitimate interest, or contract)
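Documenting purpose and legal basis does not have to be heavyweight. The sketch below shows one possible structure for an internal record of AI-related processing; the field names are illustrative assumptions, not a prescribed GDPR format.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative structure - field names are assumptions; adapt them to your own
# records of processing activities.
@dataclass
class AIProcessingRecord:
    tool: str                    # e.g. "ChatGPT Enterprise"
    purpose: str                 # what the AI is used for
    data_categories: list[str]   # categories of personal data involved, if any
    legal_basis: str             # e.g. "consent", "legitimate interest", "contract"
    dpia_completed: bool         # whether a data protection impact assessment was done
    dpa_in_place: bool           # whether a data processing agreement is signed
    last_reviewed: date = field(default_factory=date.today)

record = AIProcessingRecord(
    tool="Internal summarisation assistant",
    purpose="Summarising anonymised customer feedback",
    data_categories=[],          # empty because input is anonymised first
    legal_basis="legitimate interest",
    dpia_completed=True,
    dpa_in_place=True,
)
print(record)
```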
FAQ on AI and compliance
1. Can we use ChatGPT to process personal data?
That depends on how it’s done. If you use ChatGPT to process personal data – especially sensitive data – you must ensure the service meets GDPR requirements and that a valid data processing agreement is in place. In most cases, it’s safer to avoid sharing personal data with publicly available AI services.
2. When is it unlawful to use AI with sensitive data?
It typically violates GDPR if you share sensitive information with an AI model without a legal basis, without a data processing agreement, or without informing the data subject. The same applies if you don’t know how the data is stored or used.
3. Can employees use AI freely in their work?
Not without clear guidelines. Without proper controls, employees may unintentionally share confidential or identifiable information with unapproved AI tools. This could breach both GDPR and internal company policies.
4. Do we need to inform users and customers if we use AI?
Yes – especially if AI is used to make decisions that affect them. GDPR requires transparency, and the AI Act includes information requirements when AI is used in interactions with individuals or customers.
5. How does data minimisation help us use AI more safely?
The less sensitive data your systems hold, the lower the risk of accidentally sharing it with an AI tool. GDPR tools can help you identify and clean up unnecessary data – allowing you to use AI with reduced risk and better compliance.
AI isn’t a GDPR risk – poor data practices are
AI models in themselves aren’t incompatible with GDPR, but the way we use them can be. That’s why organisations should consider whether they want to share sensitive information with an AI model at all – knowing that holding it back may limit what the technology can deliver. If you choose to share sensitive data with an AI service, you should have clear policies, documented procedures, and secure workflows in place.
One of the most tangible steps you can take is to reduce the amount of sensitive data stored in your systems. Once you know where sensitive information lives and have minimised it, using AI becomes significantly less risky from a GDPR perspective. Tools like DataMapper can support this process by locating and mapping sensitive content across systems. This provides the essential overview you need before integrating AI into your workflows – and enables you to use the technology with confidence.
Sebastian Allerelli
Founder & COO at Safe Online
Sebastian is the co-founder and COO of Safe Online, where he focuses on automating processes and developing innovative solutions within data protection and compliance. With a background from Copenhagen Business Academy and experience in identity and access management, he has a keen understanding of GDPR and data security. As a writer on Safe Online's Knowledge Hub, Sebastian shares his expertise through practical advice and in-depth analyses that help companies navigate the complex GDPR landscape. His posts combine technical insight with business understanding and provide concrete solutions for effective compliance.