
Municipalities, companies and educational institutions are rapidly adopting AI assistants for everything from citizen services to teaching support. But every time a chatbot "sees" personal data, the same requirements apply as for any other IT system under the EU General Data Protection Regulation (GDPR). A thoroughly drafted data processing agreement (DPA) between controller and processor, together with a set of technical and organizational measures, is therefore absolutely essential before the first prompt is typed.
Below you will find six concrete steps that data protection advisors and IT managers can follow to ensure that your AI assistant meets the GDPR requirements from day one.
The GDPR requires a clear legal basis for every processing operation. Public authorities typically start from Art. 6(1)(e) ("exercise of official authority / public interest") combined with authorization in relevant sector legislation, while private companies most often rely on consent or legitimate interest. Document the choice and describe it in the privacy policy and cookie banner.
Municipalities and other authorities cannot rely on Art. 6(1)(e) "exercise of official authority / public interest" alone. The Danish Data Protection Authority stresses that the processing must always rest on a supplementary legal basis in EU or Danish law that specifically authorizes the task the AI assistant supports, e.g. citizen service, teaching or case management.
Art. 28 requires you to have a written agreement that describes purpose, duration, categories of data, security measures and sub-processors. A good starting point is the Danish Data Protection Authority's template for a data processing agreement; customize the annexes for AI operations (model hosting, vector databases, GPU providers). Remember version control: update the DPA whenever you change models or add new functions.
Under Art. 5(1)(c) "data minimisation", you may only process data that is necessary. In practice, this means:
Mask or hash personal data in prompts so that the model never sees raw CPR numbers (Danish civil registration numbers).
Embed safety rules and instructions in the system prompt that constrain the text the AI model generates.
Monitor logs for attacks on or misuse of the solution.
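The first point above, masking identifiers before the prompt reaches the model, can be sketched as follows. This is a minimal illustration, not a complete PII filter: it assumes CPR numbers appear in the common six-digits-hyphen-four-digits form, and the salted-hash token scheme is a hypothetical design choice.

```python
import hashlib
import re

# Danish CPR numbers: six digits (DDMMYY), optional hyphen, four digits.
CPR_PATTERN = re.compile(r"\b\d{6}-?\d{4}\b")

def mask_cpr(prompt: str, salt: str = "rotate-me") -> str:
    """Replace CPR numbers with a salted hash token before the prompt
    is sent to the model, so raw identifiers are never transmitted or logged."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()[:8]
        return f"[CPR:{digest}]"
    return CPR_PATTERN.sub(_token, prompt)

masked = mask_cpr("Citizen 010203-1234 asks about parental leave.")
print(masked)
```

Using a keyed hash rather than plain deletion keeps the token stable within a conversation, so the assistant can still refer back to "the same citizen" without ever seeing the raw number.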
The GDPR requires that data be deleted or anonymised once the purpose has been fulfilled (Art. 5(1)(e)).
Set an automatic deletion policy for conversation logs (e.g. 90 days).
Document the deadlines in your record of processing activities (ROPA).
The supplier (processor) must not engage new sub-processors without your prior written approval (Art. 28(2)). Ensure:
A change-management flow that notifies you of new GPU data centers or SaaS services.
A register of all current sub-processors with contact information and data location.
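The register can be as simple as a structured list that both the DPA annex and the deployment pipeline read from. A minimal sketch, with entirely hypothetical entries and field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SubProcessor:
    name: str
    service: str        # e.g. "GPU hosting", "vector database"
    contact: str
    data_location: str  # e.g. "EU (Frankfurt)"
    approved_on: date   # date of the controller's written approval

# Hypothetical entries; maintain this alongside the DPA's sub-processor annex.
REGISTER = [
    SubProcessor("ExampleCloud", "GPU hosting", "dpo@examplecloud.eu",
                 "EU (Frankfurt)", date(2024, 1, 15)),
]

def is_approved(name: str) -> bool:
    """Deployment gate: refuse traffic to sub-processors not in the register."""
    return any(sp.name == name for sp in REGISTER)
```

Wiring a check like `is_approved` into the deployment pipeline turns the register from static documentation into an active control: an unapproved GPU provider simply cannot be rolled out.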
If the chatbot processes sensitive data or profiles users, you must carry out a Data Protection Impact Assessment (DPIA) before go-live (Art. 35). Use, for example, CNIL's open-source PIA tool to map the risks, and repeat the assessment at least once a year or after major model updates.
A GDPR-safe AI assistant requires more than encrypted traffic and a nice cookie banner. By
choosing the correct legal basis,
entering into a detailed DPA, and
building in data minimization, deletion and auditing,
you show both the Danish Data Protection Authority and your users that you take privacy seriously.
Take small, iterative steps: start with a DPIA workshop, update your DPA, then roll out the technical controls. Once the foundation is in place, the AI assistant can deliver value without becoming a compliance nightmare. If you want to save time, there are ready-made modules, e.g. Promte's Privacy Layer, which automates secure system-prompt instructions, deletion policies and sub-processor handling.
The result? A faster, more trustworthy chatbot that gives both users and regulatory authorities peace of mind.
EU-local data storage: all processing and storage takes place exclusively on servers in the EU, eliminating transatlantic transfers.
On-premises option: if you have special requirements (e.g. health or citizen-service data), Promte can be installed entirely in your own data center or as a hybrid solution.
Customized DPA & full data ownership: Promte offers standard DPAs and guarantees that the customer always owns both raw data and trained models.
Advanced content and security filters: real-time content filtering blocks illegal or sensitive content, while IAM/IAP-controlled servers enforce role-based access.
Certified infrastructure: the underlying cloud services are ISO 27001, SOC 2 and PCI DSS certified, which simplifies your own compliance documentation.
In short: Promte combines the flexibility of generative AI with an architecture designed for the strictest European data protection requirements, ready to be plugged directly into your existing GDPR management.