Lukas Huber
Contributor
Use AI without violating data protection? With the right tools and processes you remain compliant - GDPR, FADP and industry-specific regulations included.
Key Takeaways
- **Legal landscape**: GDPR (EU), FADP (Switzerland), Swiss-US Data Privacy Framework (since Sept. 2024), EU AI Act (from 2025)
- **3 compliance pillars**: 1) tool selection (EU/CH servers), 2) Data Processing Agreements (DPAs), 3) human-in-the-loop
- **Quick check**: document your data flow - where does your data end up? (OpenAI: USA, Claude: EU option, Gemini: multi-region)
- **Industry specifics**: healthcare (KVG), finance (FINMA), lawyers (professional secrecy) - heightened requirements
- **Costs**: GDPR audit from CHF 2'000; ongoing compliance tools CHF 100-500/month
AI vs. Data Protection: The Dilemma in the Era of the AI Act
Imagine this: an employee in your Swiss SME uses ChatGPT to analyse customer data. Email addresses, names, and purchase history are copied into the prompt. A seemingly harmless click, yet one that can have far-reaching consequences. This sensitive information ends up on servers, possibly in the US, without a valid Data Processing Agreement (DPA) and without your customers' explicit consent. What was already an issue a few years ago becomes an even greater compliance risk in 2026, as the regulatory landscape has tightened dramatically.
The reality in 2026 is that adhering to AI and data protection regulations can no longer be a separate, quarterly exercise. Instead, compliance must be embedded as an integral part of the entire workflow, from detecting to remediating cybersecurity incidents. Automation and AI itself must be managed in a way that allows demonstrable accountability to supervisory authorities and boards at all times. The European AI Act, which establishes a uniform legal framework for trustworthy AI, places strong emphasis on data protection and subjects high-risk systems to stringent risk management and monitoring obligations. In practice, this means that using AI without a clear understanding of where your data is processed, and by whom, can quickly put you in violation of the GDPR and the Swiss Federal Act on Data Protection (FADP).
The consequences are not merely theoretical: fines can amount to up to CHF 250,000 under the Swiss FADP or 4% of global annual turnover under the GDPR. Furthermore, reputational damage and loss of customer trust loom, which can jeopardise long-term business success. For global companies, especially in the financial sector, the complexity multiplies. A single data incident may require simultaneous reporting under DORA (Digital Operational Resilience Act), GDPR, and national frameworks, each with different formats and deadlines, as sources from 2026 highlight [1].
The good news:
With the right tools, processes, and a proactive compliance strategy, you can use AI in a data protection-compliant and responsible manner – even in highly regulated industries. Integrating compliance into daily operations is key to success in the AI-driven world of 2026.
The 3 Pillars of Compliance in the AI Era
To successfully navigate the challenges of AI usage and data protection, companies rely on three fundamental pillars. These form the framework for a robust, future-proof, and, above all, legally compliant AI strategy that meets the requirements of the AI Act and other regulations.
Pillar 1: Tool Selection – Where Does Your Data End Up?
Choosing the right AI tool is the first and often decisive step in ensuring data security and compliance. The geographical location of servers, the type of data processing, and the provider's legal framework play a central role. In 2026, with the AI Act coming into force and the increasing importance of data sovereignty, this choice is more complex than ever.
Renewing the data core towards adaptable AI data foundries is no longer optional for companies in 2026 but the foundation for meeting requirements [3]. The institutions that will succeed are those that understand that the data core is not just a storage location but an agile infrastructure optimised for AI applications while remaining data protection compliant. The choice of the right tool must reflect this vision.
| Option | Description | Advantages | Challenges/Prerequisites |
|---|---|---|---|
| A: US Tools with Data Privacy Framework (DPF) | Providers like OpenAI ChatGPT, Anthropic Claude, and Google Gemini that participate in the EU-US Data Privacy Framework. | Access to leading AI models, often a broad feature set, global reach. Anthropic, for example, offers EU data centres, and Google Gemini supports multi-region hosting. | It is essential to conclude a robust Data Processing Agreement (DPA), use an enterprise plan with enhanced data protection guarantees, and strictly avoid highly sensitive data. Despite the DPF, the legal situation remains complex and requires continuous review by the data controller. |
| B: EU-Hosted Tools | Providers like Aleph Alpha (Germany) or Mistral AI (France) that operate their infrastructure exclusively within the EU. | High GDPR compliance "out-of-the-box" as data processing takes place within the European legal area. This significantly simplifies adherence to local data protection regulations and offers additional legal certainty, especially concerning the AI Act. | Can be more expensive in some cases and may offer a smaller feature set or less mature models in certain application areas compared to global market leaders. However, the market is evolving rapidly. |
| C: Swiss Hosting | Solutions like Infomaniak EURIA (a Swiss LLM) or custom deployments of self-hosted LLMs operated on Swiss servers. | Offers the highest security and data sovereignty for Swiss companies. Ideal for industries with strict regulatory requirements such as healthcare, finance, and legal consulting, as the Swiss Federal Act on Data Protection (FADP) applies. | Can incur higher implementation and operating costs and may require more technical expertise for setting up and maintaining self-hosted LLMs. The selection of specific Swiss LLMs is still limited but steadily growing. |
Pillar 2: Process Design – Human-in-the-Loop
The fascination of Artificial Intelligence lies in its ability to automate and make decisions. However, especially when processing personal or business-critical data, it is essential in 2026 to involve humans as the final authority in the decision-making process. The AI Act underscores this necessity by setting specific requirements for human oversight of high-risk AI systems.
Integrating compliance into the detection and remediation process of cybersecurity incidents, as highlighted by current analyses for 2026, means that human oversight is not just a post-hoc check but an active component of the system. This is particularly true where AI makes far-reaching decisions with potentially serious consequences. The rule "AI suggests, human decides" is the gold standard here.
AI should never decide alone when it comes to:
- Personal Data: According to GDPR Art. 22, automated individual decisions that produce legal effects or similarly significantly affect individuals are only permissible under strict conditions and often require human intervention or the right to human review.
- Contractual Agreements, Terminations, or Rejections: AI can identify risks or create drafts, but the final decision always rests with humans to ensure fairness, transparency, and legal compliance.
- Creditworthiness Checks or Insurance Classifications: The impact on the individual is particularly high here. Human review is crucial to prevent discrimination and ensure compliance with financial market regulations (such as DORA for financial service providers).
Human intervention ensures that the results generated by AI can be critically questioned, checked for accuracy and fairness, and adapted to the specific context. This not only minimises legal risks but also fosters trust in AI systems.
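As an illustrative sketch of the "AI suggests, human decides" rule (not a prescribed implementation - the `AiSuggestion` type and its field names are our own, hypothetical choices), an AI proposal can be modelled as a record that downstream systems refuse to act on until a named human has approved it:

```python
from dataclasses import dataclass

@dataclass
class AiSuggestion:
    """An AI-generated proposal that must not take effect without human sign-off."""
    subject: str             # e.g. an anonymised customer or transaction reference
    proposal: str            # what the AI recommends
    status: str = "pending"  # pending -> approved / rejected
    reviewer: str = ""       # the named human who made the final call

def apply_decision(s: AiSuggestion, approve: bool, reviewer: str) -> AiSuggestion:
    """The human reviewer, not the AI, produces the binding decision."""
    s.status = "approved" if approve else "rejected"
    s.reviewer = reviewer
    return s

def is_actionable(s: AiSuggestion) -> bool:
    """Downstream systems act only on suggestions a named human has approved."""
    return s.status == "approved" and bool(s.reviewer)

# The AI flags a case; nothing is executed until a human signs off.
flagged = AiSuggestion("case-4711", "flag for review: anomaly score 0.97")
assert not is_actionable(flagged)   # still pending: no effect yet
apply_decision(flagged, approve=True, reviewer="j.meier")
assert is_actionable(flagged)       # only now may downstream systems act
```

The design point is that approval is a separate, logged step attributed to a person, which is exactly what auditors and GDPR Art. 22 reviews look for.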
Pillar 3: Documentation – The Burden of Proof as the Backbone of Compliance
In the complex world of AI and data protection, comprehensive and precise documentation is not just a bureaucratic obligation but the backbone of any compliance strategy. The AI Act and the GDPR demand transparent and traceable processing of data and the use of AI systems. For 2026, it is predicted that compliance can no longer be a separate, quarterly exercise but must be automatically generated as a by-product of incident handling [1]. This requires deep integration of documentation processes into operational workflows.
The ability to demonstrate accountability to supervisory authorities and boards directly depends on the quality and availability of your documentation. An "adaptable AI data foundry" can play an important role here by designing the data core to meet the requirements for transparency and traceability [3].
What you need to document in the context of AI usage:
- Record of Processing Activities (GDPR Art. 30): Expand this record to include all AI-assisted processing operations. Describe in detail which data the AI processes, for what purpose, which categories of data subjects are affected, where the data is stored, and what security measures are implemented.
- Data Processing Agreements (DPAs) with all AI Providers: Every external AI service provider processing personal data on your behalf requires a DPA. This must precisely regulate which data may be processed, how data security is ensured, which subcontractors are used, and how data will be handled after the contract ends. Ensure that EU Standard Contractual Clauses are included where necessary.
- Data Protection Impact Assessment (DPIA) for High-Risk Use (GDPR Art. 35): If the use of AI is likely to result in a high risk to the rights and freedoms of natural persons – for example, when processing sensitive data, large-scale monitoring, or automated decision-making – a DPIA is mandatory. This must assess potential risks and outline measures to mitigate them. The AI Act will further specify the requirements for such risk assessments for high-risk AI systems.
- Employee Training and Guidelines: Document when and how your employees have been trained in handling AI. Establish clear guidelines on what data can and cannot be entered into AI systems and how the "human-in-the-loop" process should be conducted. These training sessions and internal policies are crucial for minimising human errors and ensuring compliance in daily operations.
- Audit Logs and Traceability: Implement systems that log interactions with AI systems. These audit logs should show what data was entered, what results the AI produced, and when human decisions were made. This is particularly important for accountability and the ability to generate compliance as a by-product of incident handling.
Comprehensive documentation not only proves your compliance but also serves as a valuable resource for internal audits and the continuous improvement of your AI strategy.
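The audit-log item above can be sketched in a few lines. This is a minimal illustration with an in-memory list (a real system would write to append-only, access-controlled storage); storing prompts and outputs as SHA-256 hashes keeps personal data out of the log itself while still allowing verification against retained originals:

```python
import hashlib
import time

def log_ai_interaction(log: list, user: str, prompt: str,
                       output: str, human_decision: str) -> dict:
    """Append one auditable record of an AI interaction.

    Prompt and output are stored only as hashes, so the log contains no
    personal data but remains verifiable against the original texts.
    """
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "human_decision": human_decision,  # ties the record to human-in-the-loop
    }
    log.append(entry)
    return entry

audit_log: list = []
log_ai_interaction(audit_log, user="m.mueller",
                   prompt="Summarise report #123", output="Summary ...",
                   human_decision="approved by m.mueller")
```

Recording who entered what, what the AI produced, and who decided - without copying the personal data into yet another system - is one way to make compliance a by-product of daily operations rather than a separate exercise.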
Industry-Specific Requirements in Focus 2026
The general compliance pillars are supplemented in specific industries by additional, often stricter, regulations. In 2026, the AI Act, DORA, and national laws require a more precise adaptation of AI usage to the respective sector requirements.
Healthcare (KVG, HMG, AI Act)
The healthcare sector deals with arguably the most sensitive data. Patient data is subject to the highest protection requirements under the Health Insurance Act (KVG), the Medicines Act (HMG), and the GDPR/FADP. The AI Act will also impose specific requirements on AI systems classified as medical devices or considered high-risk AI systems in the healthcare sector [4].
- Requirement: Patient data must be processed in Switzerland or the EU on servers that meet the highest security standards. Transfer to third countries, even with DPF, must be critically reviewed for particularly sensitive data and is usually only possible with explicit consent and additional measures.
- Solution: Infomaniak EURIA or self-hosted LLMs on Swiss servers are the preferred choice here. These offer the greatest possible control over the data.
- Use Case: Summarising medical reports or assisting with diagnoses using AI models. It is crucial here that the AI merely generates suggestions ("AI writes"), but the final review, correction, and approval are always carried out by the doctor ("Doctor reviews"). Research shows "cautious optimism on foundation models in medical imaging balancing privacy and innovation" [4], underscoring the need for specialised, data protection-compliant solutions.
Compliance with these requirements is not only legally binding but also of paramount ethical importance to maintain patient trust.
Financial Service Providers (FINMA, DORA, AI Act)
The financial industry is highly regulated by the Swiss Financial Market Supervisory Authority (FINMA) and international regulations such as DORA (Digital Operational Resilience Act) and the AI Act. Transparency, traceability, risk controls, and incident reporting are paramount. For global financial groups, the complexity multiplies, as a single incident may require simultaneous reporting under DORA, GDPR, and national frameworks, each with different formats and deadlines [1].
- Requirement: Highest standards for data integrity, security, and traceability. AI systems must be designed so that their decision-making processes are transparent and auditable.
- Solution: The use of EU-hosted LLMs in combination with comprehensive audit logs and strict human approval processes is essential here. The ability to trace every step of AI-assisted processing is a must.
- Use Case: AI-assisted fraud detection. The AI identifies anomalies in transaction patterns and suggests potential fraud cases ("AI detects anomalies"). However, the final review and decision on measures always lie with the compliance department or a specialised analyst ("Compliance reviews"). This is crucial to avoid incorrect decisions and meet regulatory requirements for decision-making.
Integrating compliance into the detection and remediation workflow of cybersecurity operations is critical for financial service providers in 2026 to ensure accountability to regulators and boards.
Lawyers & Fiduciaries (Professional Secrecy, Client Confidentiality)
For lawyers and fiduciaries, professional secrecy and client confidentiality are of utmost importance. Violations can not only have legal consequences but also jeopardise their professional existence. The use of AI tools must therefore be extremely cautious and under strict adherence to these principles.
- Requirement: Absolute confidentiality and protection of client data. The data must not fall into the hands of third parties or be used for training AI models under any circumstances.
- Solution: Swiss hosting or EU hosting with strict data segregation and the assurance that data will not be used for model training. Self-hosted LLMs are often the safest option here.
- Use Case: Contract analysis. AI can scan large volumes of contracts to identify risk clauses, inconsistencies, or missing information ("AI flags risks"). However, the final legal assessment, advice, and decision on further action are always made by the lawyer or fiduciary ("Lawyer decides").
Choosing the right AI tool and implementing robust processes are crucial here to maintain client trust and meet professional standards.
Practical Checklist: Is Your AI Usage Compliant in 2026?
The rapid development of AI and the constantly changing legal landscape require continuous review and adaptation of your compliance strategy. Use this checklist to assess the current status of your AI usage and identify potential risks.
1. Document and Understand Data Flow
- What data does the AI process? Precisely record whether it involves names, emails, financial data, health information, or other sensitive data. Accurate classification is the first step towards risk assessment.
- Where does the data end up? Identify the geographical storage of the data (US, EU, Switzerland). This is crucial for assessing applicable data protection laws and the need for transfer mechanisms like the DPF or Standard Contractual Clauses.
- Who has access? Clarify which parties outside your company have access to the data (AI provider, their subcontractors, third parties). Every access point must be secured under data protection law.
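The three questions above can be captured in a simple machine-readable inventory. The following sketch uses hypothetical field names of our own choosing; the point is only to show how risky flows (data leaving the EU/CH without a DPA or transfer mechanism) can be flagged automatically:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One row of a data-flow record for an AI tool (in the spirit of GDPR Art. 30)."""
    tool: str
    data_categories: list    # e.g. ["names", "emails", "purchase history"]
    storage_region: str      # "US", "EU" or "CH"
    processors: list         # provider plus known subcontractors
    dpa_in_place: bool
    transfer_mechanism: str  # "DPF", "SCC" or "none"

def transfer_risks(flows: list) -> list:
    """Flag tools whose data leaves the EU/CH without both a DPA and a transfer mechanism."""
    return [f.tool for f in flows
            if f.storage_region not in ("EU", "CH")
            and (not f.dpa_in_place or f.transfer_mechanism == "none")]

flows = [
    DataFlow("ChatGPT Enterprise", ["names", "emails"], "US",
             ["OpenAI"], dpa_in_place=True, transfer_mechanism="DPF"),
    DataFlow("Free chatbot", ["purchase history"], "US",
             ["unknown"], dpa_in_place=False, transfer_mechanism="none"),
]
assert transfer_risks(flows) == ["Free chatbot"]
```

Even a small spreadsheet with these columns achieves the same goal; the structured form simply makes the review repeatable as tools are added or changed.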
2. Review and Conclude Data Processing Agreements (DPAs)
- Do you have a valid DPA with every AI provider? A DPA is mandatory as soon as an external service provider processes personal data on your behalf. Without a DPA, you are operating in a legal grey area with a high risk of fines.
- Does the DPA cover everything necessary? Check if the DPA includes provisions for data deletion, a list of the AI provider's subcontractors, and, where applicable, EU Standard Contractual Clauses to secure data transfer to third countries.
3. Train Employees and Establish Clear Guidelines
- Do all employees know what data does NOT belong in ChatGPT or similar tools? Regular and understandable training is essential to raise awareness of data protection risks.
- Are there clear, written guidelines? A short, one-page checklist with do's and don'ts (e.g., "Do not enter customer names, only anonymised data into public AI tools") can prevent misunderstandings and promote compliance.
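Such a do's-and-don'ts rule can also be backed by a small technical guardrail. The sketch below uses two illustrative regular expressions of our own (a production setup should use a dedicated PII-detection or NER tool, since regex alone misses many identifiers) to replace obvious personal data before a prompt leaves the company:

```python
import re

# Illustrative patterns only; real deployments need proper PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+41(?:\s?\d){9}"),  # Swiss numbers in +41 format
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer anna.muster@example.ch called from +41 79 123 45 67 about her invoice."
print(redact(prompt))  # identifiers replaced by [EMAIL] / [PHONE]
```

A guardrail like this, placed in front of any call to an external AI tool, turns the written guideline into a default rather than something each employee must remember.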
4. Ensure and Document Human-in-the-Loop
- Does the AI ever make decisions without human approval? If so, you must critically question this. In 2026 and under the AI Act, human oversight is mandatory for decisions with legal effect or significant impact on individuals.
- If fully automated decisions are unavoidable: conduct a Data Protection Impact Assessment (DPIA) and transparently inform affected customers about the automated decision-making and their rights (GDPR Art. 22).
Your next steps for future-proof AI compliance:
- Document the data flow for your AI tools in detail: Take the time to outline the path of each data category through your AI systems. Use existing tools like ChatGPT or Claude to create drafts for this documentation and then review and complete them internally.
- Review and update DPAs with all AI providers: Ensure that a current and legally valid Data Processing Agreement is in place for every external AI service provider. If missing, request it immediately and carefully review its content.
- Create and communicate employee guidelines: Develop a concise, one-page checklist or information sheet that provides clear instructions for the data protection-compliant use of AI tools. Communicate this internally and ensure all employees are trained.
- Risk assessment for high-risk areas: If your company operates in a high-risk sector (e.g., healthcare, finance) or uses AI systems with potentially significant impacts, conduct a Data Protection Impact Assessment (DPIA) or consult an experienced data protection advisor like Lukas Huber from lhubertreuhand.ch.
Further Resources
- FDPIC Switzerland - Federal Data Protection and Information Commissioner
- schnellstart.ai Compliance Consulting - Support with the integration and auditing of AI systems
- lhubertreuhand.ch - Expertise for GDPR-compliant fiduciary processes and data protection consulting
- TNW - Why 2026 will be the year of governed cybersecurity AI [1]
- Nature - Cautious optimism on foundation models in medical imaging balancing privacy and innovation [4]
About schnellstart.ai: We support Swiss SMEs in using the possibilities of Artificial Intelligence responsibly and legally – from strategic tool selection and the implementation of DPAs to conducting comprehensive data protection audits. Our goal is to help you seize the opportunities of AI safely.
Frequently Asked Questions
May I use ChatGPT for customer data?
Yes, BUT: 1) use an Enterprise plan (not Free/Plus, which offer no DPA option), 2) conclude a DPA with OpenAI, 3) enter no highly sensitive data (health, finance) without anonymisation, 4) inform your customers (if the AI makes decisions about them). For sensitive industries, prefer EU/CH alternatives (Claude EU, Infomaniak EURIA).
What is a DPA and do I need one?
DPA = Data Processing Agreement (German: Auftragsverarbeitungsvertrag, AVV). Mandatory under GDPR Art. 28 whenever a service provider (e.g. OpenAI, Google) processes personal data on your behalf. Content: data protection measures, list of subcontractors, deletion obligations. Most AI providers offer DPAs on their Enterprise plans - simply request one and sign it.
Are open-source LLMs automatically GDPR-compliant?
No! Self-hosted LLMs (Llama, Mistral) must also be operated in a GDPR-compliant way: EU/CH servers, secure infrastructure, access controls, logs. Advantage: full data control (nothing leaves your system). Disadvantage: you are responsible for security, updates, and backups.
What is the EU AI Act and does it affect Switzerland?
The EU AI Act (from 2025) regulates AI systems by risk level (minimal, limited, high, unacceptable). High-risk systems (e.g. credit scoring, candidate screening) face strict obligations. Switzerland is NOT directly affected, but if you work with EU customers or process EU data, you must be compliant.