
Lukas Huber
Founder & AI Strategist
Drache Kokosnuss vs. OpenAI: What the lawsuit means for Swiss SMEs, and what you need to know about AI copyright infringement.
The little dragon Coconut ("Der kleine Drache Kokosnuss"), a character familiar from German children's books, is at the centre of a legal dispute that could have far-reaching consequences for any Swiss SME working with artificial intelligence. Penguin Random House, the publisher behind the popular series, has filed a lawsuit against OpenAI in Germany, alleging that ChatGPT reproduced texts and images from the Dragon Coconut series without authorisation. What is being heard in court in Munich is not an isolated case; it is a clear signal to the entire economy that the era of unregulated AI use is coming to an end.
For Switzerland, where 99.7% of companies are classified as SMEs, this development is of crucial importance. It compels us to re-evaluate the interface between innovation, copyright, and corporate responsibility. Will your texts, images, or software code soon be used by an AI model without your knowledge or compensation? Or are you already unknowingly using AI-generated content that carries legal risks?
The lawsuit is a wake-up call. It underscores that the legal framework for AI is still evolving and that companies must act proactively to protect themselves from unpleasant surprises while responsibly harnessing the opportunities of AI.
📊 Facts at a glance:
- SME share in Switzerland: 99.7% of Swiss companies are SMEs. (Source: swisspeers Blog, 2026)
- OpenAI's Advertising Success: OpenAI's US advertising pilot achieved over $100 million in annualized revenue in six weeks. (Source: Reuters, 2026)
- GEMA Lawsuit: The GEMA lawsuit against OpenAI was heard at the Munich District Court. (Source: BR.de, 2026)
- Penguin Random House Lawsuit: The lawsuit filed by Penguin Random House against OpenAI was lodged with the Munich District Court on March 27. (Source: Süddeutsche Zeitung, 2026)
How can Swiss SMEs protect their copyrights when AI models use their content?
Active protection and clear licensing are crucial. The Swiss Copyright Act (URG) protects literary and artistic works provided they have an individual character, which covers texts, images, software code, and much else that is of central importance to SMEs. The challenge in the age of generative AI is that models like ChatGPT are trained on vast amounts of data, which often include copyrighted works without the explicit consent of the rights holders.
Many AI models scour the internet for information and use it to identify patterns and generate new content. It is often not transparent which specific works are included in the training data. For a Swiss publishing house, an advertising agency, or a software development company, this means that their own creative and intellectual achievements could potentially be "learned" and reproduced by AI systems without permission or compensation. This not only poses a financial risk but also a loss of control over one's own intellectual property.
To protect themselves, Swiss SMEs must take proactive measures. An effective strategy begins with clearly labelling and licensing your own content. Consider which licensing models you want to apply: Creative Commons if you wish to allow a certain degree of use, or more restrictive general terms and conditions (GTCs, the Swiss "AGB") that explicitly prohibit use by AI models or permit it only under specific conditions. These terms should be prominently displayed on your website and clearly communicated during contract negotiations.
Technical protective measures such as watermarks or metadata can provide an initial hurdle, but they are often insufficient against advanced AI models. The focus should therefore be more on legal safeguards and monitoring. This includes regularly checking whether and how your content is being used online. There are specialised tools that can help detect the unauthorised use of your works.
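One concrete technical measure worth knowing: a site's robots.txt can ask known AI crawlers not to ingest its content. This is a request, not an enforcement mechanism, and it only covers crawlers that honour it; the user-agent names below are those the major operators have publicly documented (e.g. OpenAI's GPTBot). A minimal sketch:

```
# robots.txt — opt out of AI training crawlers (advisory only)
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: CCBot             # Common Crawl, widely used in training datasets
Disallow: /

User-agent: Google-Extended   # control token for Google AI training use
Disallow: /
```

Because compliance is voluntary, this belongs alongside, not instead of, the legal safeguards described above.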
💡 Tip: Checklist for copyright protection in the AI age
- Update your AGBs: Add clauses that regulate or prohibit the use of your content for AI model training.
- Review licensing models: Consider whether Creative Commons or proprietary licenses are suitable for your content.
- Embed metadata: Include copyright information in your digital assets.
- Communicate terms of use clearly: Make it clear on your website and in your products what use is permitted.
- Seek legal advice: A lawyer specialising in copyright law can help you develop a tailored protection strategy.
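The "embed metadata" step above can be taken further with a simple provenance record: a cryptographic fingerprint of each asset stored alongside its copyright and licensing terms, so that later reuse can be proven against a dated register. A minimal stdlib sketch; the asset bytes, owner, and licence text are placeholders, not a real registry format:

```python
import hashlib
import json
from datetime import date


def register_asset(asset_bytes: bytes, owner: str, licence: str) -> dict:
    """Create a simple provenance record: a SHA-256 fingerprint of the
    asset plus the copyright and licensing terms that apply to it."""
    return {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "owner": owner,
        "licence": licence,
        "registered": date.today().isoformat(),
    }


# Hypothetical asset and terms for illustration
asset = b"Original marketing copy of Example AG"
record = register_asset(
    asset,
    owner="Example AG",
    licence="All rights reserved; use for AI training prohibited",
)
print(json.dumps(record, indent=2))
```

Such a register does not prevent scraping, but it gives you dated, verifiable evidence of authorship if a dispute arises.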
As part of our AI readiness analyses, we repeatedly find that many SMEs are not yet sufficiently addressing this dimension of copyright protection in the context of AI. However, the legal and ethical component is a central pillar for a sustainable AI strategy. Sound legal advice is essential here to avoid unnecessary risks.
What legal consequences do Swiss companies face when using AI-generated content based on copyrighted material?
The risks range from injunctions to substantial damages claims. If your Swiss company uses AI-generated content based on copyrighted material that has not been properly licensed, you are treading on thin ice. The current lawsuit by Penguin Random House against OpenAI highlights the problem: the accusation is that ChatGPT unlawfully reproduced works from the Dragon Coconut series. If this is confirmed, OpenAI could be ordered to cease and desist and pay damages.
For you as a Swiss SME managing director, it is crucial to understand that not only the developer of the AI but also the user of the AI-generated content can be held liable. For example, if you use AI tools to create marketing texts, graphics, or code, and this content unlawfully reproduces third-party copyrighted material, you yourself could become the target of a lawsuit. The question of liability is complex, and case law in Switzerland is not yet fully developed in this area. Nevertheless, the risk is real.
The consequences can be diverse. First, injunctions can force you to remove the problematic content immediately. In addition, claims for damages can be substantial, depending on the scope and nature of the infringement. Such legal disputes are not only financially burdensome; they also damage your company's reputation, and a loss of reputation can be more severe in the long run than a fine, as it erodes the trust of customers and partners.
The duty of care when using AI-generated content should therefore not be underestimated. It is your responsibility to conduct due diligence and verify the origin of the training data used by the AI and the originality of the generated content. This is, admittedly, not an easy task. Verifying the origin of training data is technically extremely demanding and requires a deep understanding of data pipelines and MLOps frameworks. Many generative AI models are black boxes whose internal workings and training data are not transparently disclosed.
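Full provenance verification of training data is out of reach for most SMEs, but a first-pass originality screen of AI output is not. A minimal sketch, assuming you hold a corpus of your own or third-party protected texts to compare against; `difflib`'s similarity ratio is a crude proxy for textual overlap, not legal proof, and a flagged hit should go to manual and legal review:

```python
import difflib


def max_overlap_ratio(generated: str, protected_corpus: list[str]) -> float:
    """Return the highest similarity ratio (0.0–1.0) between an
    AI-generated text and any document in a protected corpus."""
    return max(
        difflib.SequenceMatcher(None, generated, doc).ratio()
        for doc in protected_corpus
    )


# Hypothetical corpus and AI output for illustration
corpus = [
    "The little dragon set off on a great adventure across the island.",
    "Quarterly revenue grew strongly in the Swiss retail segment.",
]
suspect = "The little dragon set off on a great adventure across the island."

ratio = max_overlap_ratio(suspect, corpus)
# Flag anything above a conservative threshold for manual legal review
needs_review = ratio > 0.8
```

A character-level ratio misses paraphrased copying; dedicated plagiarism tools go further, but even this cheap check catches verbatim reproduction before publication.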
| Aspect | Use of Standard AI Models (without origin verification) | Use of AI Models with Verified or Own Training Data / Own AI Development |
|---|---|---|
| Legal Risk (Copyright) | High. Unclear data origin, potential for unauthorised reproduction of third-party content. Risk of lawsuits and damages. | Low to medium. Control over training data, licensing can be ensured. Risk minimised, but not zero. |
| Costs | Low usage costs for standard tools, but potentially high costs in case of legal disputes. | Higher initial investment for data verification, licensing, or own development, but lower legal risk costs in the long term. |
| Flexibility | High flexibility in using broad, general AI models. | Lower flexibility, as the focus is on controlled data and specific use cases. |
| Compliance & Governance | Difficult to ensure compliance standards, especially regarding GDPR and URG. | Significantly improved compliance and governance through transparency and data control. |
| Effort | Low effort for initial use, high effort in case of legal disputes. | High effort for conception, data management, and development/adaptation, lower effort for legal defence. |
⚠️ Warning: Blind trust in AI outputs can be costly
Do not blindly rely on AI-generated content. Any material you publish or use commercially should be checked for originality and copyright compliance. This applies to texts, images, music, and code. Negligence can lead not only to financial losses but also to significant reputational damage.
This is particularly relevant for Swiss companies operating in e-commerce, marketing, or software development. The need for a clear AI strategy that also considers these aspects is becoming increasingly evident. It's about seizing opportunities without taking unnecessary risks.
What does the lawsuit against OpenAI mean for the future development and use of AI tools in Switzerland?
It will force greater transparency, clearer licensing models, and a reassessment of risks. The lawsuit against OpenAI is more than an isolated case; it is a precedent that will influence the entire AI industry. Even though the proceedings are taking place in Germany, such developments send a clear signal across borders, including to Switzerland. They mark a turning point at which the legal and ethical responsibility of AI developers and users comes into sharper focus.
One of the most significant effects will be increased regulatory pressure. The EU AI Act is already an example of how legislators are attempting to regulate AI applications. Switzerland will not be able to escape these developments. It is foreseeable that discussions about specific AI laws, addressing liability issues, transparency requirements, and copyright protection, will intensify in Switzerland as well. This could mean that AI providers will be obliged in the future to disclose the origin of their training data or to establish licensing models that adequately compensate rights holders.
For Swiss SMEs, this has several implications. Firstly, the costs of licensed AI models may increase. If AI providers have to pay license fees to rights holders, these costs will likely be passed on to users. Secondly, companies must adapt their own AI strategy. Our strategic analyses show that regulatory developments like this lawsuit have a direct impact on the 'Legal' factors in the PESTEL framework and the 'Threats' in a SWOT analysis. It is crucial to identify these external factors early and integrate them into your own planning.
A stronger focus on "Responsible AI" and AI ethics will be unavoidable. Companies will be encouraged to use AI tools responsibly, not only for legal reasons but also for reasons of reputation and trust. This includes developing internal guidelines for AI use, training employees, and regularly reviewing the AI systems used.
💡 Practical Example: Swiss publisher focuses on data origin
A medium-sized Swiss specialist publisher, using AI to generate articles and summaries, has proactively introduced an internal policy. Every AI-generated text segment is manually checked for plagiarism, and the training data used is examined for licensing where possible. If uncertainties arise, they revert to their own, licensed, or public domain data sources. This significantly minimises the risk of copyright infringement and strengthens readers' trust in the quality and integrity of the content.
The lawsuit could also foster the development of open-source AI models that are trained from the outset on transparent and legally sound datasets. Such models could represent an attractive alternative for SMEs that value compliance and control. The goal is to identify opportunities and minimise risks through sound strategic planning.
Recommendation: Invest in a robust AI strategy
The current legal uncertainty surrounding AI copyrights underscores the need for a well-thought-out and future-proof AI strategy for your SME. Such a strategy considers not only technological possibilities but also legal, ethical, and organisational aspects. Only then can you fully leverage the potential of AI while protecting your company from unnecessary risks.
As Lukas Huber, with my IPSO diploma in AI Business, I see daily how important it is to understand these complex interrelationships. The integration of AI must be strategic, based on a solid analysis of internal and external factors, as dictated by the PESTEL and SWOT frameworks.
Ultimately, the lawsuit against OpenAI will not mean the end of generative AI, but rather will force a maturation of the entire ecosystem. This is an opportunity for Swiss SMEs to position themselves as pioneers in responsible AI use.
The Dragon Coconut lawsuit against OpenAI is an unmistakable signal: the legal framework for artificial intelligence is becoming increasingly stringent and demanding. For Swiss SMEs, this means that a passive approach is no longer sustainable. Those who want to leverage the opportunities of AI must actively engage with the associated copyright issues and liability risks.
It's not about demonising AI, but about implementing it with care and foresight. The ability to verify data origins, establish internal guidelines, and continuously adapt one's AI strategy will become a crucial competitive advantage. The future belongs to those companies that combine innovation with responsibility.
✅ Actively protect your copyrights through clear licensing and regular monitoring.
✅ Understand the liability risks when using AI-generated content and verify its origin.
✅ Adapt your AI strategy to evolving legal and ethical requirements.
Would you like to ensure that your company uses AI potential responsibly and in compliance with the law? Contact us for a comprehensive AI readiness analysis and develop a tailored strategy for your SME. Visit us at schnellstart.ai/en/contact.