
Lukas Huber
Founder & AI Strategist
Children's book publisher sues OpenAI in Germany over AI use. The 'Drache Kokosnuss vs. ChatGPT' case has far-reaching implications for SMEs.
A children's book publisher is suing OpenAI in Germany – a headline that might seem far removed from the daily operations of a Swiss SME manager at first glance. However, the "Coconut the Dragon vs. ChatGPT" case is more than just a footnote in the tech world. It's a wake-up call that could shake the foundations of AI adoption in businesses and have direct implications for your efficiency and compliance.
Imagine investing in AI tools to streamline your processes and save over 12 hours of administrative work weekly. Suddenly, you're facing legal battles because the AI you're using was trained unlawfully. The case brought by Penguin Random House, which accuses OpenAI of using texts and images from the popular "Coconut the Dragon" series without permission, clearly demonstrates: Careless use of artificial intelligence carries significant, incalculable risks. This affects not only large publishers but every Swiss SME that employs AI.
📊 Facts at a Glance:
- Fact: By 2027, over 75% of all companies worldwide are expected to be using AI-powered applications, often without a full understanding of the legal implications (Gartner, 2023).
- Fact: The global market for AI software is projected to reach CHF 1.8 trillion by 2030, further increasing pressure on legal frameworks and compliance (Statista, 2024).
What is "Coconut the Dragon vs. ChatGPT": Children's Book Publisher Sues OpenAI in Germany?
The lawsuit filed by children's book publisher Penguin Random House against OpenAI in Germany is a legal precedent questioning the use of copyrighted material for training AI models. It concerns allegations that ChatGPT has reproduced content from the popular "Coconut the Dragon" series without authorisation. This isn't the first case of its kind; GEMA in Germany has already raised similar concerns. At its core is the question of whether and how AI systems may be trained on data that has not been explicitly licensed for this purpose.
For operators of AI models like OpenAI, this means they may be held accountable for the origin of their training data and its lawful use. This could have far-reaching consequences for the entire AI industry, as many of the large language models (LLMs) available today have been trained on vast, publicly accessible datasets whose copyright status is often unclear. The lawsuit demands transparency and a re-evaluation of the principles by which AI models learn and generate content.
This development highlights a fundamental challenge: technological progress meets existing law. Current legal frameworks are often not designed for the complexity of AI applications. This creates a grey area in which both AI developers and users operate. The ruling in this case could establish significant guidelines for the future of AI development and use.
⚠️ Warning: Unresolved Copyright Issues with AI
Do not assume that the AI you are using will always generate legally sound content or is based on correctly licensed data. Especially with generative AI, there's a risk that outputs may inadvertently reproduce copyrighted material. This can lead to costly warning letters or lawsuits, even if you did not intend to infringe copyright.
How Do Swiss SMEs Benefit from "Coconut the Dragon vs. ChatGPT": Children's Book Publisher Sues OpenAI in Germany?
Swiss SMEs indirectly benefit from this lawsuit through increased legal certainty and the necessity to develop a well-thought-out AI strategy that incorporates compliance and governance from the outset. Even though the case itself presents a risk, it forces a much-needed clarification of the legal frameworks for AI. In the long run, this creates a more reliable foundation for using AI tools and minimises the risk of unpleasant surprises.
For you as an SME manager, this doesn't mean burying your head in the sand and avoiding AI. On the contrary, it's an opportunity to proactively address the legal and ethical aspects now. A sound AI strategy that covers data origin, data protection (DSG compliance), and output control protects your company and allows you to safely leverage the enormous efficiency potential of AI. Automating repetitive tasks alone can free up to 20% of your employees' working time.
The lawsuit is pushing AI solution providers to be more transparent about their training data and to develop models that are less susceptible to copyright infringement. This means that safer and legally cleaner AI offerings may come to market in the future, from which you can directly benefit. The development of MLOps frameworks and the ability to fine-tune LLMs on your own licensed data, for example, already offer practical solutions today for securely processing proprietary and sensitive data.
💡 Practical Example: Secure AI Integration in Swiss Mechanical Engineering
A medium-sized Swiss mechanical engineering SME with 80 employees faced the challenge of digitising and analysing manual inspection logs and quality reports. Instead of relying on a public, generic LLM, the company opted for a tailored solution. A local Small Language Model was trained on its own anonymised data from the past five years. This model was made accessible via an internal web application (Gradio/Streamlit). The results were impressive: Analysing 1000 inspection reports, which previously required 40 hours of manual work, is now completed in under 2 hours. At the same time, all data is DSG-compliant and copyright-free, as only company-owned content was used.
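As a minimal sketch, the internal analysis step from this example might look as follows. The defect categories, keyword rules, and report texts here are purely illustrative; in the actual setup described above, a locally hosted language model would do the classification, and a Gradio or Streamlit front end would expose it to staff:

```python
import re

# Hypothetical defect categories with keyword patterns; a production system
# would replace this rule-based stand-in with a locally hosted language model.
DEFECT_PATTERNS = {
    "tolerance": re.compile(r"out of tolerance|toleranz", re.IGNORECASE),
    "surface": re.compile(r"scratch|corrosion|surface defect", re.IGNORECASE),
    "assembly": re.compile(r"misaligned|loose fitting", re.IGNORECASE),
}

def triage_report(text: str) -> list[str]:
    """Return the defect categories mentioned in one inspection report."""
    return [name for name, pat in DEFECT_PATTERNS.items() if pat.search(text)]

# Illustrative inspection reports, anonymised and company-owned.
reports = [
    "Shaft diameter out of tolerance on unit 4711.",
    "Minor scratch on housing; otherwise within spec.",
    "All checks passed.",
]

# Only reports that mention at least one defect category are flagged for review.
flagged = {i: triage_report(r) for i, r in enumerate(reports) if triage_report(r)}
print(flagged)
```

Because everything runs locally on company-owned data, no report text ever leaves the premises, which is precisely what keeps the setup DSG-compliant.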
What are the Risks in "Coconut the Dragon vs. ChatGPT": Children's Book Publisher Sues OpenAI in Germany?
The primary risk for Swiss SMEs lies in the unintentional use of AI systems that generate content with copyright concerns or were trained on unlawfully acquired data, leading to legal disputes, reputational damage, and high financial costs. The lawsuit makes it clear that legal responsibility does not solely lie with the AI developer but also with the users. If your SME uses an AI tool that generates content infringing copyright, you could potentially be held jointly responsible.
Another risk is the "hallucination" of AI models: the confident output of factually incorrect or fabricated information. If such false information seeps into your business processes, such as customer communication, marketing materials, or even product development, it can lead to serious misjudgments and loss of credibility. Such a situation can severely damage customer relationships and, in the worst case, cause financial losses in the five- to six-figure CHF range.
Compliance with the Swiss Federal Act on Data Protection (DSG) is also a critical point. Many AI tools, especially those hosted in the US, do not offer sufficient guarantees for the protection of sensitive company or customer data. We use the PESTEL framework to assess the legal and technological risks here comprehensively. Without a clear strategy for data processing and storage, you run the risk of violating the DSG, which can result in substantial fines.
| Aspect | Ad-hoc AI Use (Risky) | Strategic AI Use (Secure) |
|---|---|---|
| Data Protection (DSG) | Unverified use of external tools; data transfer abroad without control; high risk of DSG violations. | Evaluated tools with Swiss hosting; clear data policies; internal training; minimised DSG risk. |
| Copyright & Liability | Uncontrolled content generation; risk of reproducing protected material; direct liability in case of misuse. | Content verification processes; use of models with clear licensing; contractual safeguards with providers. |
| Efficiency Gains | Short-term gains, but high risk due to erroneous or legally questionable results; uncertain long-term. | Sustainable, secured process optimisation; reliability of results; long-term, measurable ROI. |
| Reputation | Danger of negative headlines and loss of trust with customers and partners due to compliance violations. | Strengthening the company image as an innovative and responsible player. |
| Costs | Low entry costs, but high potential for unexpected costs due to fines, lawsuits, and remediation needs. | Clearly calculable investments in strategy and secure tools; avoidance of follow-up costs. |
💡 Tip: Prompt Engineering for Compliance
Train your employees in so-called Prompt Engineering. Clear and precise instructions to AI models can help avoid unwanted or legally problematic content. Add prompts such as "Generate content only based on facts that were publicly available before [date] and are not copyrighted" or "Avoid any references to specific brands or protected works." This reduces the risk of copyright infringement and improves the quality of the results.
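One way to make such guardrails repeatable, rather than relying on each employee typing them correctly, is a small wrapper that prepends a fixed compliance policy to every request before it reaches the model. The policy wording below is illustrative, not a guarantee against infringement:

```python
# Illustrative compliance preamble; adapt the wording to your own policies.
COMPLIANCE_PREAMBLE = (
    "Generate content only based on publicly available facts. "
    "Avoid any references to specific brands or protected works. "
    "If you are unsure about the copyright status of a source, say so."
)

def build_compliant_prompt(user_request: str) -> str:
    """Prepend the fixed compliance policy to a user's request."""
    return f"{COMPLIANCE_PREAMBLE}\n\nTask: {user_request.strip()}"

prompt = build_compliant_prompt("Write a product description for our new CNC mill.")
print(prompt)
```

The wrapper's output would then be sent to whichever model your company has approved; centralising the policy in code means it can be updated once and audited, instead of living in each employee's head.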
The dispute over Coconut the Dragon is a clear signal: The era of naive experimentation with AI is over. Those who do not act now and develop a solid strategy risk not only legal problems but also significant competitive disadvantages. The integration of AI must be approached with the same diligence and strategic planning as any other critical business decision.
🚀 Recommendation: Start with an AI Readiness Analysis
Before fully integrating AI tools into your SME, conduct a comprehensive AI Readiness Analysis. This 5-Pillar Analysis assesses your Strategy & Vision, Data & Infrastructure, Skills & Culture, Processes & Organisation, and Ethics & Compliance. It identifies potential risks and opportunities specific to your company and provides a clear roadmap for secure and effective AI implementation. This ensures your AI initiatives are built on a solid foundation and you can benefit from its advantages without taking unnecessary risks.
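The five pillars above can be turned into a simple self-assessment. The scores and the threshold in this sketch are purely illustrative; a real readiness analysis weighs the pillars qualitatively rather than by a single number:

```python
# Illustrative self-assessment: score each pillar from 0 (absent) to 5 (mature).
pillars = {
    "Strategy & Vision": 3,
    "Data & Infrastructure": 2,
    "Skills & Culture": 4,
    "Processes & Organisation": 3,
    "Ethics & Compliance": 1,
}

def readiness_gaps(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """List the pillars scoring below the threshold, weakest first."""
    gaps = [(score, name) for name, score in scores.items() if score < threshold]
    return [name for score, name in sorted(gaps)]

# The weakest pillars are where the implementation roadmap should start.
print(readiness_gaps(pillars))
```

In this hypothetical scoring, Ethics & Compliance comes out weakest, which matches the thrust of this article: compliance is the pillar most SMEs neglect first.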
As Lukas Huber, who works with Swiss SMEs daily, I see the immense potential of AI, but also how crucial a structured approach is. The strategic integration of AI, considering governance and compliance, is not a luxury but a necessity. It's about seizing opportunities while minimising risks. With a well-founded analysis, a clear strategy, and the right expertise, you can ensure your AI journey is successful and legally secure.
Conclusion
The legal dispute over Coconut the Dragon vs. ChatGPT is a wake-up call for every Swiss SME that uses or plans to use AI. It underscores the urgency of developing a robust AI strategy that considers legal aspects such as copyright and data protection (DSG) from the outset. Those who act proactively now secure competitive advantages and protect their company from incalculable risks.
✅ Implement AI with a clear strategy and governance.
✅ Ensure Swiss hosting and DSG compliance for your data.
✅ Train your employees in the safe and responsible use of AI tools.
Would you like to learn more about how your SME can implement AI safely and effectively? Get in touch with us for a no-obligation consultation and let's discuss your specific challenges: Contact schnellstart.ai