Technology · 28 March 2026 · 9 min read

    EU Bans Deepfake Software: What Swiss SMEs Need to Know Now


    Lukas Huber

    Founder & AI Strategist

    The EU bans deepfake software for non-consensual pornographic content. What Swiss SMEs need to know now and what the implications are.


    The digital world is becoming more complex, and with it, the demands on Swiss SMEs. A recent decision by the EU Parliament is causing a stir: the ban on software that creates pornographic deepfakes without consent. While this might sound distant to many, the reality is different: digital boundaries are blurring, and what happens in the EU often has direct or indirect consequences for Switzerland.

    Specifically, this concerns tools that generate deceptively realistic nude images and videos – often without the knowledge or consent of the individuals depicted. The EU Parliament has taken a firm stance here, voting for a ban on such "nudifier apps." According to DW.com (2026), this is a crucial step in the fight against digital violence. The question isn't whether this affects Swiss SMEs, but how. It's about reputation, compliance, and the ethical use of AI – topics that every responsible company must address.

    Anyone thinking this is a problem for large tech corporations or the entertainment industry is mistaken. Smaller businesses using AI for marketing, content creation, or internal communication also face new challenges. The line between permissible and prohibited AI use is becoming finer, and legal risks are increasing. It's time to take a closer look.

    📊 Facts at a Glance:

    • Fact: The EU Parliament has voted to ban programs that allow users to create pornographic deepfakes without consent. (Source: DW.com, 2026)
• Fact: In the US, 46 states have enacted laws against publishing intimate images, including deepfakes, without consent. (Source: New York Post, 2026)
    • Fact: The EU is tightening rules for age verification and demanding more responsibility from online platforms for hosted content. (Source: Lexology, 2026)
    • Fact: A high-profile case in Germany, where an actress accused her ex-husband of posting AI-generated pornography of her online, has led to calls for stricter laws. (Source: Reuters, 2026)

    How can Swiss SMEs ensure they are not using illegal deepfake tools?

    The answer is clear: through transparency, clear guidelines, and a thorough review of the AI tools being used. It's not just about avoiding explicitly banned software, but also about understanding the origin and functionality of all generative AI tools you employ within your company. Many SMEs might unknowingly use freeware or trial versions available on the market without checking the exact terms of use or the ethical implications.

    A structured approach, like the one we apply in requirement analysis for AI agents in regulated environments, is essential here. Requirements must be clearly defined, classified, and prioritised. For compliance requirements, such as those arising from the nDSG or FINMA, there is no room for negotiation. These are "must-haves," and this also applies to adherence to international standards that can indirectly affect Switzerland. For instance, if you create content for the EU market, you must comply with EU regulations, even if you are a Swiss company.

    💡 Tip: Conduct a Tool Audit

    Perform an internal audit of all generative AI tools used within your company. Document which tools are used by whom and for what purposes. Review the license terms, the origin of the models, and the data protection policies. Pay attention to Swiss hosting and GDPR compliance, especially when processing personal data. Document this process carefully.
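Such an audit can start as something as simple as a structured inventory. The sketch below shows one way to model it in Python; the record fields and the example tool names (`image-gen-x`, `voice-clone-y`) are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the internal AI tool inventory (illustrative fields)."""
    name: str
    used_by: list              # teams or roles using the tool
    purpose: str
    licence_reviewed: bool = False
    swiss_or_eu_hosting: bool = False
    processes_personal_data: bool = False

def audit_findings(inventory):
    """Flag tools needing follow-up: unreviewed licence terms, or
    personal data processed without Swiss/EU hosting."""
    flagged = []
    for tool in inventory:
        if not tool.licence_reviewed:
            flagged.append((tool.name, "licence terms not reviewed"))
        if tool.processes_personal_data and not tool.swiss_or_eu_hosting:
            flagged.append((tool.name, "personal data outside Swiss/EU hosting"))
    return flagged

inventory = [
    AIToolRecord("image-gen-x", ["Marketing"], "ad visuals",
                 licence_reviewed=True, swiss_or_eu_hosting=True),
    AIToolRecord("voice-clone-y", ["Sales"], "demo narration",
                 processes_personal_data=True),
]
findings = audit_findings(inventory)
```

Even a lightweight inventory like this makes the audit repeatable and gives you a document trail to show in the event of an incident.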

    It is crucial to know your digital content supply chain. Where do the images, videos, or texts generated by AI come from? Were they created with the consent of the individuals depicted? Were training data used that are ethically and legally sound? These questions may seem complex at first glance, but the risks of non-compliance – reputational damage, hefty fines, and legal disputes – are real and can jeopardise the existence of an SME. The legal costs alone for a case in Germany, where an actress sued her ex-husband over AI-generated pornography (Reuters, 2026), likely far exceed what preventive measures would have cost.

    Technology is evolving rapidly. What is not considered a deepfake today could be interpreted as such tomorrow, especially as the lines between reality and synthetic content increasingly blur. Therefore, continuous awareness and training for employees are necessary. Everyone working with AI tools must understand the risks and know how to act in compliance with the law.

The integration of AI solutions into daily work should always be done with a clear focus on governance and compliance. A demo bot I built on a RAG architecture, for example, can form the basis for secure and controlled content creation when operated with the right guidelines and hosting standards (e.g., on Infomaniak in Geneva). Open frameworks like LangChain or LlamaIndex offer the flexibility to maintain control over data and generated content, rather than relying on black-box solutions.

    What are the implications of the EU ban for Swiss companies using AI for content creation?

Even though Switzerland is not directly bound by EU law, the implications for Swiss SMEs, particularly in the creative and media sectors, are noticeable and potentially far-reaching. The digital space knows no national borders. If you create content that could be consumed in the EU – whether through your website, social media, or collaborations with EU partners – you must adhere to the rules applicable there. The marketplace principle applies: where your content has an effect, the laws of that place apply.

    The ban in the EU sends a strong signal and sets a precedent. It is highly probable that similar legislation will be discussed and implemented in Switzerland as well, to keep pace with international standards and protect the population from digital violence. Already, 46 US states have enacted laws against the publication of intimate images without consent (New York Post, 2026). Switzerland will likely find it difficult to escape this trend, not least due to its close economic and cultural ties with Europe.

    🚀 Practical Example: Marketing Agency in Transition

    A Swiss marketing agency, specialising in video production for SMEs, previously used AI tools for the rapid creation of animated characters and voice cloning for commercials. Following the EU decision, the agency introduced a strict internal policy: every AI-generated element must be transparently labelled, and explicit consent must be obtained in writing from the individuals depicted or whose voices are cloned. Furthermore, investment was made in Swiss hosting for all AI-generated assets to secure data sovereignty. This increased effort by 5-10% in the short term but reduced legal risk by an estimated 80% and strengthened customer trust.

    For SMEs creating images or videos for marketing purposes, advertising, or other content, this means increased awareness. It is no longer sufficient to focus solely on the technical quality of AI-generated content. The ethical origin of the data, the transparency of generation, and the respect for personality rights are becoming central factors. An SME showing negligence here risks not only legal consequences but also massive reputational damage, which can spread quickly in today's interconnected world and is difficult to repair.

    The EU is also tightening rules for age verification and demanding more responsibility from online platforms for hosted content (Lexology, 2026). This indirectly affects Swiss companies operating platforms or online services that host user-generated content. The need to moderate content and check for illegal deepfakes is increasing. Those who fail to take precautions can quickly find themselves in a difficult situation if, for example, a user spreads a deepfake through your platform.

    The implications go beyond a mere ban. It shapes new societal and consumer expectations regarding the responsible use of AI. Swiss companies that proactively meet these expectations can gain a competitive advantage by positioning themselves as trustworthy and ethically acting partners.

    Swiss SMEs must develop a proactive strategy based on risk analysis, governance, and continuous adaptation. It is not enough to merely react to new laws. The speed of AI development requires forward-thinking planning that includes "quick wins" for immediate risk mitigation, as well as medium-term projects for process optimisation and long-term transformation of corporate culture.

| Aspect | Approach 1: Reaction and Rectification | Approach 2: Proactive Prevention and Governance |
|---|---|---|
| Costs | High potential fines, legal fees, crisis-management costs, and reputational damage (often CHF 100,000+ for a single incident). | Lower investment in processes, training, and technologies (e.g., CHF 10,000 - 30,000 for an initial audit and guidelines). |
| Reputational risk | Very high: rapid spread of negative news, loss of trust from customers and partners, difficulties in recruitment. | Low: positioning as a responsible company, strengthening customer loyalty and brand image. |
| Legal certainty | Uncertain: constant threat of lawsuits and regulatory investigations, high uncertainty in interpreting new laws. | High: clear guidelines, documented measures, minimised risk of legal disputes and fines. |
| Operational effort | Unplanned, high effort during incidents, disruption of business operations, resources tied up in crisis management. | Planned, calculable effort for implementing and maintaining processes, less disruption to daily business. |
    The first step is a comprehensive risk analysis. Identify where generative AI is used or could be used within your company. What data is used? Who has access to the tools? What content is generated and where is it published? This analysis should also consider the compliance requirements of the nDSG and, where applicable, FINMA (for financial service providers).

    Based on this analysis, you must develop clear internal guidelines for the use of AI tools. These guidelines must be binding for all employees and regularly reviewed and updated. A key aspect is training: everyone working with AI must understand the risks and know the rules. This also includes awareness of the possibility that AI-generated content could be misused by third parties.

    ⚠️ Warning: Don't rely on "common sense"

    Do not assume that employees "already know" what is permissible and what is not. The complexity of AI and the grey areas surrounding deepfakes require explicit instructions and regular training. A lack of clearly communicated rules is a compliance risk that can become costly in the event of an incident. Courts show little leniency in cases of digital violence.

Technologically, Swiss SMEs can rely on proven approaches. I often recommend a RAG (Retrieval-Augmented Generation) architecture built with open-source frameworks like LangChain or LlamaIndex. Combined with a vector database hosted in Switzerland, for example on Infomaniak, and LLM APIs from trusted providers (Infomaniak AI or OpenAI Enterprise), content generation becomes controlled and traceable. This minimises the risk of the AI generating unwanted or even illegal content, because access to the underlying data is clearly defined and restricted.
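The core of the RAG pattern can be sketched in a few lines: retrieve from a fixed, approved knowledge base, then build a prompt that constrains the model to that context. This is a deliberately minimal illustration, not a production setup; in practice the naive word-overlap retriever would be a Swiss-hosted vector database and the prompt would be sent to an LLM API, e.g. via LangChain. The example documents and wording are invented.

```python
import re

# A fixed, approved knowledge base: the model may only draw on this.
APPROVED_DOCS = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday to Friday, 9:00-17:00.",
]

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank approved documents by naive word overlap with the query
    (a stand-in for vector similarity search)."""
    scored = sorted(docs, key=lambda d: len(tokens(query) & tokens(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Constrain the model to the retrieved, approved context only."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer only from the context below. If the answer is not in "
        f"the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("When can I get a refund?", APPROVED_DOCS)
```

The key governance property is visible even in this toy version: the generated answer is grounded in a curated data base you control, so you can always trace which sources a response was built from.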

    Another crucial point is documentation. Record which AI tools you use, what the guidelines are, who has been trained, and how you monitor compliance. In the event of an incident, you can then demonstrate that you have taken all reasonable measures to prevent the dissemination of illegal content. This serves as an important shield against legal accusations.
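An append-only, timestamped log is one simple way to keep such records machine-readable. The sketch below writes one JSON line per compliance event; the file name, field names, and example values are assumptions for illustration.

```python
import json
import datetime

def log_compliance_event(path, tool, action, actor):
    """Append one audit-trail entry as a timestamped JSON line.
    Appending (mode "a") keeps the log append-only by convention."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "action": action,   # e.g. "licence reviewed", "employee trained"
        "actor": actor,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_compliance_event("ai_compliance_log.jsonl", "image-gen-x",
                     "licence reviewed", "compliance officer")
```

Because each entry carries a timestamp and an actor, the log doubles as evidence that reviews and trainings actually took place, and when.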

    ✅ Recommendation: Utilise External Expertise

    External expertise can be invaluable, especially when implementing AI governance and compliance measures. A specialised partner can assist you in identifying risks, developing tailored guidelines, and training your employees. This saves internal resources and ensures you are up-to-date with the latest technology and legislation. Think about roadmap development: start with quick wins to close immediate gaps, then plan for medium-term process optimisation and long-term transformation.

    Investing in preventive measures is not an expense, but an investment in the future and security of your company. The costs of implementing robust governance structures are typically significantly lower than the potential fines and reputational damage that a single deepfake incident can cause.

    Conclusion

    The EU ban on deepfake software is a wake-up call for Swiss SMEs. It makes it clear that the ethical and legally sound use of AI is not an option, but a necessity. Those who act now not only protect their own reputation and finances but also position themselves as future-proof and trustworthy players in the digital market.

    Understand the Risks: Deepfakes are a real threat with potentially high legal and reputational costs. Ignoring them is not a strategy.

    Act Proactively: Create clear guidelines, train your employees, and regularly audit your AI tools. Opt for transparent and controllable architectures.

    Ensure Your Compliance: Make sure your AI usage complies not only with the nDSG but also with indirectly relevant EU regulations, especially if you operate in the European market.

    Unsure how to assess the risks in your company and take the right measures? At schnellstart.ai, we help Swiss SMEs implement AI safely and efficiently. Contact us for a no-obligation initial consultation to discuss your specific challenges and develop a tailored strategy. Talk to us.

