Compliance · 5 January 2025 · 8 min read

    Data Protection and AI: What Swiss Companies Need to Know


    Lukas Nagel

    Contributor

    FADP-compliant AI usage: Legal foundations, practical tips and checklist for implementation.


    As 2026 approaches, a fundamental shift is underway in how Swiss companies must approach Artificial Intelligence. Simply identifying potential risks is no longer sufficient. Organisations must now be capable of demonstrating the trustworthiness of their AI systems without gaps and providing "compliance-grade evidence." This evolution, driven by new regulatory frameworks like the EU AI Act and NIS2, necessitates rigorous governance and compliance strategies that extend far beyond current business practices.

    The Paradigm Shift: From Risk Identification to Demonstrable Trustworthiness

    The digital landscape is evolving at breakneck speed, and so are the expectations of regulators. By 2026, Swiss SMEs must prioritise the demonstrable trustworthiness of their AI systems. The focus is shifting from mere risk identification to ensuring that AI systems can guarantee verifiable compliance and maintain data protection, particularly in highly sensitive sectors like finance and healthcare (Nature, 2026 [5]).

    This development isn't some abstract future scenario; it's a direct response to the increasing prevalence of AI and its associated potential dangers. Organisations that integrate strong governance into their AI strategies will be better positioned to innovate safely and prevent AI from becoming an internal threat (Financial Times, 2026 [2]). This means companies must act proactively, not only to meet requirements but also to bolster the trust of their customers and partners.

    Why 2026 is the Turning Point: A Look at Global Regulation

    The urgency surrounding 2026 stems from a series of international and sector-specific regulations that are coming into effect or will have far-reaching implications. The EU AI Act, considered the world's first comprehensive AI law, already mandates impact assessments for high-risk systems. In the US, states are following suit: Colorado will require annual assessments for high-risk AI operators from June 30, 2026, while California's CCPA continues to gain prominence in data protection enforcement (Bloomberg Law, 2026 [3]).

    These international developments have direct implications for Swiss companies, especially those with customers in the EU or collaborating with global partners. Switzerland, traditionally a leader in data protection, must ensure its businesses can keep pace with these global standards. The complexity multiplies for globally operating financial groups: a single data breach could necessitate simultaneous reporting under DORA, GDPR, and national frameworks, each with different formats and deadlines (TNW, 2026 [1]). This underscores the need for an integrated and forward-thinking compliance strategy.

    The legal framework for AI deployment in Swiss companies is complex and continually expanding with new international directives. A solid understanding of these foundations is essential to avoid fines and maintain stakeholder trust.

    The Existing Pillars:

    • Swiss Federal Act on Data Protection (FADP): The national foundation governing the handling of personal data. The revised FADP, effective since September 2023, has significantly tightened requirements for transparency and accountability.
    • GDPR (General Data Protection Regulation): Indispensable for Swiss companies processing data of EU citizens or offering services in the EU. The GDPR remains a central pillar for international data exchange, mandating comprehensive data subject rights and stringent data security requirements.
    • Sector-Specific Regulations: Mandatory in regulated sectors such as finance (e.g., FINMA circulars) or healthcare (e.g., HMG, KVG). These supplement general data protection laws with specific provisions for handling sensitive data and deploying new technologies.

    New Requirements from 2026:

    The year 2026 brings a series of new or intensified regulations that will shape the use of AI and data in Switzerland and internationally. These must be integrated into compliance strategies:

    • EU AI Act: Relevant for Swiss companies offering AI systems in the EU or whose use impacts EU citizens. Key focus areas from 2026: mandatory impact assessments for high-risk AI systems; classification of AI systems by risk level; transparency and documentation obligations.
    • NIS2 Directive: Affects companies in critical sectors connected to the EU and significantly strengthens cybersecurity requirements. Key focus areas from 2026: comprehensive risk management measures; incident reporting obligations; governance and supply chain requirements.
    • DORA (Digital Operational Resilience Act): Specific to the financial sector; relevant for Swiss financial institutions with EU business or cross-border activities. Key focus areas from 2026: strengthening digital operational resilience; risk management for ICT third-party providers; reporting obligations for ICT incidents.
    • US regulations (e.g., Colorado, CCPA): Relevant for Swiss companies with US customers or business relationships in the USA. Key focus areas from 2026: Colorado requires annual impact assessments for high-risk AI systems from June 30, 2026 (Bloomberg Law, 2026 [3]); the CCPA grants strict data protection rights to California consumers.

    The multitude of these regulations shows that an isolated approach is no longer sufficient. Companies must develop a holistic strategy that considers all relevant national and international requirements to manage complexity and minimise compliance risks.

    AI as an Insider Threat: A New Dimension of Risk Management

    While many companies view AI as a tool for efficiency and innovation, it also harbours a significant, often underestimated risk: AI can become a new type of insider threat (Financial Times, 2026 [2]). This doesn't necessarily stem from malicious intent but often from improper use, misconfigurations, or insufficient governance. When AI systems access and process sensitive company data without clear guidelines and controls, unintended data leaks, unauthorised data usage, or the generation of misinformation can result.

    An example of this involves generative AI models used by employees without adequate training or oversight. If confidential company data is input into such models, for instance, to generate text or perform analyses, there's a risk that this data could become part of the model's training dataset or be exposed through its output. Even if the employee's intention was benign, this can lead to a serious data breach that could cause significant damage to the company.

    The challenge lies in harnessing AI's innovative power without undermining internal security perimeters. This requires not only technical solutions but also a strong corporate culture that raises awareness of AI risks and establishes clear behavioural guidelines. Governance structures must define which AI applications can be used with which data, who has access, and how results are verified. Only then can we prevent the benefits of AI from becoming a significant security risk due to uncontrolled usage.
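One technical building block for such guidelines is an internal gateway that redacts obvious personal identifiers before a prompt ever leaves the company perimeter. The sketch below is a minimal illustration of the idea, not a complete solution: the two patterns shown (email addresses and Swiss AHV numbers) are assumptions for demonstration, and real deployments would need far broader coverage plus human review.

```python
import re

# Illustrative detection patterns; production systems need much broader coverage
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # Swiss AHV social security numbers follow the format 756.XXXX.XXXX.XX
    "AHV_NUMBER": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal identifiers with placeholders before the
    prompt is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Such a filter cannot catch free-text references to individuals, which is why it complements, rather than replaces, employee training and usage policies.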

    Practical Checklist: Fit for AI Compliance 2026

    To meet the new requirements and ensure the trustworthiness of AI systems, Swiss companies must adapt their compliance strategies. The following expanded checklist provides guidance for an FADP- and GDPR-compliant, future-proof AI implementation:

    1. Comprehensive Data Processing Documentation and Ensuring Demonstrability

      The days of rudimentary documentation are over. Companies must not only record what data is processed, when, where, and for what purpose, but also how AI systems are integrated into this process. This includes a precise description of the AI models used, their training data, algorithms, and parameters. For high-risk AI systems, producing "compliance-grade evidence" will be crucial from 2026 onwards (TNW, 2026 [1]). This means documentation must be detailed and traceable enough to serve as proof of compliance with all regulations in case of an audit. This also requires documenting decision trees and explainability approaches (Explainable AI) to make the functioning of complex models transparent.

    2. Concluding Data Processing Agreements (DPAs) and Establishing Governance Structures

      When external service providers or cloud providers are used for AI applications or data processing, legally sound Data Processing Agreements (DPAs) are essential. These must explicitly regulate the specific requirements for AI deployment, data protection, and data security. From 2026, these agreements must also incorporate the new requirements of NIS2 and DORA (for financial institutions), which impose stricter stipulations for supply chain and ICT third-party provider security. Furthermore, establishing clear internal governance structures is paramount. Who is responsible for the AI strategy? Who oversees regulatory compliance? How are risks assessed and mitigated? These questions must be clearly answered, and responsibilities assigned to ensure the secure and compliant use of AI.

    3. Ensuring Data Storage in CH/EU and Guaranteeing Data Portability

      The physical storage of data remains a critical factor. For sensitive data and within the context of GDPR, storage in Switzerland or the EU is often the preferred option to minimise third-party access from non-GDPR countries. Companies must ensure that AI training data and AI-generated data also meet these requirements. Another important aspect, particularly concerning data subject rights and GDPR, is data portability. Individuals have the right to access, rectify, erase, and port their data (Nature, 2026 [5]). This means data must be stored and processed in a way that allows it to be provided in a structured, common, and machine-readable format upon request. This also applies to data generated or processed by AI systems.
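A portability request ultimately reduces to collecting every record linked to one data subject and handing it over in a structured, machine-readable format. The sketch below illustrates this with JSON over a simple in-memory list; the `subject_id` field and the flat record layout are assumptions for demonstration, and a real system would query its actual data stores.

```python
import json

def export_subject_data(records: list[dict], subject_id: str) -> str:
    """Collect all records linked to one data subject and return them in a
    structured, machine-readable format, as data portability requires."""
    subject_records = [r for r in records if r.get("subject_id") == subject_id]
    return json.dumps(
        {"subject_id": subject_id, "records": subject_records},
        indent=2,
        ensure_ascii=False,
    )
```

The same export path should cover AI-generated data about the person, not only the data they originally provided.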

    4. Implementing Data Subject Rights and Enhancing Transparency

      Data subject rights are at the core of data protection. Companies must establish mechanisms that enable individuals to effectively exercise their rights to information, rectification, erasure, and objection. With the increased use of AI, new challenges arise, particularly regarding the right to an explanation of AI decisions. For automated decisions that have legal effects or significantly impact individuals, a comprehensible explanation for the AI-based decision must be provided. This requires detailed logging of AI processes and the ability to make the functioning of algorithms transparent. Compliance with these rights is not only a legal obligation but also a crucial factor for trust.
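The "detailed logging of AI processes" mentioned above can be sketched as a per-decision log entry that records, alongside the outcome, the human-readable factors needed to explain it later. The function and field names below are illustrative assumptions; the input digest merely shows one way to make entries verifiable without storing raw personal data in the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, inputs: dict,
                    decision: str, top_factors: list[str]) -> dict:
    """Build one log entry for an automated decision, including the
    explanation basis a data subject could later request."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "decision": decision,
        # Human-readable factors behind the decision, for later explanation
        "top_factors": top_factors,
        # Digest of the inputs: verifiable without storing raw personal data
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
```

Entries like this would typically be written to append-only storage so the decision trail cannot be silently altered.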

    5. Implementing Technical and Organisational Measures (TOMs) and Establishing AI-Specific Governance

      Robust Technical and Organisational Measures (TOMs) are the foundation of any data protection strategy. These include encryption, pseudonymisation, access controls, regular security audits, and emergency plans. In the context of AI, these TOMs must be specifically tailored to the peculiarities of AI systems. This includes, for example, measures against "prompt injection" in generative AI, ensuring the data integrity of training data, and securing AI models themselves against manipulation. Data controllers must demonstrate compliance through TOMs and maintain records (Nature, 2026 [5]). Furthermore, comprehensive AI-specific governance is required. This includes policies for the ethical use of AI, defining responsibilities for monitoring AI systems, regular risk assessments, and training employees on the secure use of AI. Only then can the risk of AI becoming an insider threat be minimised (Financial Times, 2026 [2]).
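Pseudonymisation, one of the TOMs named above, can be sketched with a keyed hash: the same identifier always maps to the same pseudonym, so records remain linkable for analysis or AI training, while reversal requires a key that is stored separately from the data. This is a minimal illustration of the technique, not a complete pseudonymisation concept.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).
    Records stay linkable across datasets, but mapping pseudonyms back to
    identifiers requires the key, which must be kept separately."""
    digest = hmac.new(secret_key, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability in examples
```

Note that pseudonymised data still counts as personal data under the FADP and GDPR as long as the key exists, so the other TOMs continue to apply to it.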

    6. Conducting Impact Assessments for High-Risk AI Systems

      Performing impact assessments will become mandatory for high-risk AI systems from 2026. The EU AI Act already mandates this, and states like Colorado will require annual assessments for operators of high-risk AI systems from June 30, 2026 (Bloomberg Law, 2026 [3]). For Swiss companies using or developing such systems, this means systematically analysing the potential impact of their AI applications on fundamental rights, security, and other protected interests. This includes identifying risks, assessing their probability and severity, and defining mitigation measures. Such assessments are not just a regulatory requirement but an essential tool for proactively ensuring the trustworthiness and security of AI systems.
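The core of such an assessment, scoring each identified risk by probability and severity and deriving a mitigation requirement, can be sketched as a classic risk matrix. The 1-5 scales and the thresholds below are illustrative assumptions; any real assessment would use the scales defined in the company's own risk methodology.

```python
def risk_score(probability: int, severity: int) -> tuple[int, str]:
    """Score one risk on a probability x severity matrix (both on 1-5 scales)
    and map the result to an illustrative mitigation requirement."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("probability and severity must be on a 1-5 scale")
    score = probability * severity
    if score >= 15:
        level = "high: mitigation required before deployment"
    elif score >= 8:
        level = "medium: mitigation plan and ongoing monitoring"
    else:
        level = "low: document and review periodically"
    return score, level
```

Recording the score, the chosen mitigation, and the reassessment date for every risk produces exactly the kind of traceable evidence the new frameworks ask for.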

    The Role of Governance: Shaping AI Safely and Responsibly

    Implementing all these measures requires a strong governance structure. Governance in the context of AI is the sum of all processes, policies, and responsibilities that ensure AI systems are developed and deployed ethically, legally, and securely. It's about creating transparency, establishing accountability, and maintaining control over complex algorithms. Companies that build robust AI governance will not only overcome regulatory hurdles but also strengthen customer trust and secure their long-term innovation capabilities. It's an investment in the company's future.

    For many Swiss SMEs, this may initially seem overwhelming. However, the key lies in proactive engagement and phased implementation. The time until 2026 must be used to build the necessary structures and processes. Lukas Huber, an expert in data protection and AI compliance, emphasises: "Those who set the course now and establish solid AI governance will not only minimise risks but also gain a competitive advantage."

    The requirements for the data protection-compliant use of AI will significantly increase and become more complex by 2026. Swiss companies face the task of adapting their strategies and adopting a proactive stance. Strong governance, comprehensive documentation, and consideration of all relevant national and international regulations are the path to a secure and successful AI future. schnellstart.ai supports you in mastering these challenges and making your AI implementation future-proof.


    Frequently Asked Questions

    What does FADP compliance mean for AI usage?

    FADP compliance means that all AI-supported data processing complies with the Swiss Federal Act on Data Protection. This includes: data storage in Switzerland or the EU, transparent processing purposes, data processing agreements (DPAs), technical protection measures, and upholding data subject rights.

    May I use American AI tools such as ChatGPT in my Swiss company?

    Yes, but with restrictions. You may not enter personal data or confidential business data into public AI tools without a corresponding DPA and a data protection impact assessment. For business-critical applications, we recommend Swiss or EU-based AI solutions with local data storage.

    What is a data processing agreement (DPA), and do I need one for AI tools?

    A DPA is a contract between you (the controller) and the AI provider (the processor) that governs the data processing. You need a DPA as soon as the AI provider processes personal data on your behalf. Most professional AI services offer standardised DPAs.

    What penalties apply to data protection violations involving AI?

    The revised Swiss FADP provides for fines of up to CHF 250,000 for intentional violations. For EU customers, the GDPR applies, with penalties of up to 4% of worldwide annual turnover. More important than the fines: reputational damage and loss of customer trust.

    How do I implement technical and organisational measures (TOMs) for AI?

    TOMs for AI include: encrypted data transmission and storage, access controls and authentication, regular backups, audit logs of all AI processing, data deletion concepts, employee training, and incident response plans. We support you with the implementation.
