
Lukas Huber
Founder & AI Strategist
OpenAI is seeking to avoid liability for damages caused by its AI. Discover the implications for Swiss SMEs and who bears the risk when AI makes mistakes.
Key Takeaways
- ▸OpenAI rejects liability for damages caused by AI.
- ▸Swiss SMEs must reassess the risks of deploying AI.
- ▸The question of who is responsible when AI makes mistakes remains unresolved.
OpenAI, the developer behind ChatGPT, aims to avoid liability for damages caused by its AI systems. While this news from the US might seem distant at first glance, it has direct implications for every Swiss SME looking to implement or considering Artificial Intelligence.
Suddenly, the central question arises: who bears the risk if an AI system makes a mistake that disrupts a business process or even causes financial damage? For Swiss SMEs, which often operate with limited resources and strict compliance requirements, this uncertainty is no small matter. It's about more than technological implementation; it's about the company's livelihood and its customers' trust.
OpenAI's proposal to legally anchor a limitation of liability for AI developers shifts the risk perspective significantly. Companies relying on such tools must rethink their strategy. Blind trust in standard solutions becomes an expensive experiment. Those who don't look closely now risk more than just an investment – they risk losing control over their processes and their legal protection.
📊 Key Facts at a Glance:
- 45% of Swiss SMEs see AI as an advantage for their business operations. (Source: kmu.admin.ch, 2025)
- Average Handling Time (AHT) in call centres is expected to be reduced by 15% through AI. (Source: Industry estimate, 2026)
- Search time for information is expected to be reduced by 50%. (Source: Industry estimate, 2026)
- 40% of standard inquiries are expected to be handled automatically. (Source: Industry estimate, 2026)
What specific liability risks do Swiss SMEs face when using AI systems?
The primary risk lies with you, the implementing SME. If a developer like OpenAI seeks to limit its liability, the bulk of the responsibility falls on the user of the AI system. This affects a range of areas crucial for Swiss SMEs.
A central issue is the so-called "hallucination" of AI models. Generative AIs sometimes produce plausible but factually incorrect information. Imagine your AI-powered customer service chatbot providing a customer with incorrect warranty information or a faulty product description. The resulting financial damage, loss of reputation, or legal disputes fall under your responsibility. You are responsible for verifying the accuracy of information provided by the AI before it is shared with third parties.
Another significant risk lies in data protection and security. If you input sensitive customer data or internal business secrets into an AI system, you must ensure this data is adequately protected. The new Swiss Data Protection Act (nDSG) has significantly tightened requirements here. A data breach, caused by a vulnerability in the AI model or improper data handling by the system, can lead to hefty fines and a massive loss of trust. The responsibility for protecting this data lies with you as the data controller, not with the AI tool developer.
One must also consider the possibility of discrimination and bias. AI models learn from the data they are trained on. If this data is biased, the AI will reflect that bias. For instance, an AI-powered applicant screening tool might unconsciously disadvantage certain demographic groups. Or an AI system for creditworthiness assessment might perpetuate discriminatory patterns based on historical data. Such practices not only lead to ethical problems but can also have legal consequences, particularly in the context of the Swiss Equality Act.
Furthermore, operational risks should not be underestimated. An AI system integrated into your core processes can lead to operational disruptions, production downtime, or service bottlenecks if it malfunctions. If AI-driven process optimisation in logistics suddenly calculates incorrect routes or processes orders erroneously, the financial and logistical consequences will be directly felt by your SME. Restoring systems and rectifying damage not only consumes resources but can also have a lasting impact on delivery capability and, consequently, customer relationships.
⚠️ Warning: Don't blindly rely on T&Cs!
Standard terms and conditions from major AI providers are often designed to maximise the developer's limitation of liability. Scrutinise these clauses very carefully. Blanket acceptance can prove costly in the event of damage. Individual adjustments or choosing a provider that assumes more responsibility is often the wiser strategy. This is particularly true for SMEs operating in regulated industries or handling sensitive data.
How can Swiss SMEs protect themselves against potential damages from AI if developers like OpenAI intend to limit liability?
Careful planning, clear contracts, and a deep understanding of the technology are the best protection. It's not enough to simply subscribe to an AI solution and hope for the best. Swiss SMEs must act proactively and develop a comprehensive risk mitigation strategy.
The first step is a thorough analysis of your own framework. Before even considering implementation, you need to understand what data you have, which processes will be affected, and what compliance requirements apply to your business. My 6-step framework for AI business opportunities begins precisely here: with a detailed internal analysis. Only by understanding your starting point can you identify and evaluate the right use cases. The foundation must be solid.
Choose your AI partners carefully. The provider's terms and conditions must be scrutinised in detail. Ask for detailed information on training data, security mechanisms, and error correction procedures. If a provider significantly limits its liability, this should be a red flag. In such cases, consider whether a bespoke solution developed by specialised AI freelancers or local partners might be the better choice. You often have more influence over contract design and liability issues here. One example is having Swiss AI freelancers professionalise a demo bot, as we have done successfully in client projects: the system stays functional while being made production-ready, with responsibility clearly assigned.
Internally, implementing a robust requirements catalogue is essential. This catalogue must include not only business requirements like "reduce AHT by 15%" or "reduce search time by 50%" but also detailed technical and, crucially, legal requirements. Each requirement should be prioritised, for example, using the MoSCoW method. In the regulated banking sector, where FINMA and nDSG set clear rules, MoSCoW perfectly separates must-haves from nice-to-haves. "Must-have" requirements must encompass all compliance specifications. Without these, you don't go live.
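A requirements catalogue with MoSCoW priorities can be sketched in a few lines. The following is a hypothetical illustration, assuming a simple "must/should/could/won't" labelling; the example requirements and the go-live rule are assumptions for demonstration, not part of any specific framework:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    description: str
    priority: str  # one of: "must", "should", "could", "wont"

# Illustrative catalogue mixing business, technical, and legal requirements.
catalogue = [
    Requirement("Personal data stays on Swiss/EU hosting (nDSG)", "must"),
    Requirement("Human review of all customer-facing AI answers", "must"),
    Requirement("Reduce average handling time (AHT) by 15%", "should"),
    Requirement("Reduce search time for information by 50%", "should"),
    Requirement("Multilingual chatbot interface", "could"),
]

def go_live_blockers(reqs):
    """Deployment is blocked until every 'must' requirement is met."""
    return [r.description for r in reqs if r.priority == "must"]

print(go_live_blockers(catalogue))
```

The point of the "must" tier is exactly the rule stated above: compliance specifications are non-negotiable, and nothing goes live while any of them is open.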
Furthermore, establish clear internal guidelines for handling AI systems. Who is responsible for verifying AI outputs? What processes are necessary to identify and correct errors? Human-in-the-loop approaches, where human experts monitor and validate AI decisions, are often indispensable. This is particularly true for critical business processes or the handling of sensitive data. Such a governance structure not only minimises risk but also builds trust among employees and customers.
| Feature | Standard SaaS AI (e.g., OpenAI) | Customised AI Solution (locally developed) |
|---|---|---|
| Liability in case of damage | Primarily with the user (SME), as providers often heavily limit liability. | Can be contractually clearly regulated between the SME and the local developer (e.g., Swiss AI freelancer). |
| Control over data | Data processing by third-party providers; hosting often outside Switzerland/EU. | Full control over data storage (Swiss hosting possible) and processing. |
| Customisability | Limited customisation options for specific company processes and data. | High customisability for individual needs, processes, and compliance requirements. |
| Cost model | Often subscription-based, scalable by usage; initially lower costs. | Higher initial investment, but potentially more efficient and secure in the long run. |
| Implementation time | Fast, as it's a ready-to-use standard solution. | Longer, as development and customisation are required; but precisely tailored. |
| Transparency & Explainability | Black-box nature; little insight into functionality. | Higher transparency, as development process and algorithms are known. |
💡 Practical Example: Huber Treuhand GmbH
Huber Treuhand GmbH faced a growth dilemma: increasing client numbers, but internal knowledge transfer hindered scaling. Onboarding new junior staff consumed too much capacity from senior experts – a classic "onboarding dilemma." Lukas Huber and his team developed an AI-supported approach for knowledge transfer and automation of routine tasks. Instead of opting for a black-box solution, the AI was designed to access the trust company's specific knowledge base and provide transparent explanations. This relieves senior experts and enables scaling, while control over content and liability clearly remain with the company. This demonstrates how thoughtful implementation minimises risks.
What regulatory developments in Switzerland and the EU are relevant for Swiss SMEs regarding AI liability?
Switzerland is catching up; the EU is forging ahead. Even though Switzerland is not a member of the European Union, regulatory developments there have a significant impact on Swiss SMEs. This is particularly true of the EU AI Act, considered the world's first comprehensive legal framework for Artificial Intelligence.
The EU AI Act follows a risk-based approach. High-risk systems – for example, in the areas of medicine, justice, or critical infrastructure – are subject to strict requirements regarding data quality, human oversight, transparency, and cybersecurity. For Swiss SMEs exporting products or services to the EU or collaborating with EU companies, this means they must comply with these requirements, even if their headquarters are in Switzerland. Non-compliance can lead to market access barriers and hefty fines. FINMA, for instance, has already announced it will closely monitor developments, which is directly relevant for banks and financial service providers in Switzerland.
In Switzerland itself, there is no specific AI law yet. However, discussions are intense. It is foreseeable that Switzerland, similar to the Data Protection Act, will develop its own regulation, closely aligned with the EU. The Federal Council has already published various initiatives and recommendations for AI regulation. Until then, Swiss SMEs must rely on existing laws. The new Data Protection Act (nDSG) plays a central role here. It requires companies working with personal data to exercise a high degree of diligence and transparency, which directly impacts the use of AI.
In addition, general liability laws, the Product Liability Act, and the Unfair Competition Act (UWG) are relevant. If an AI generates misleading advertising, for example, or if a product is defective due to an AI error, these laws may apply. The difficulty often lies in proving the causality of an AI error and clearly assigning responsibilities. This makes the current situation so confusing and risky for SMEs.
💡 Tip: Proactive Compliance Analysis
Don't wait for a Swiss AI law to come into effect. Conduct a proactive compliance analysis to assess how your current or planned AI applications align with the nDSG, the EU AI Act (if relevant), and other existing laws. Prioritise identified measures using methods like MoSCoW. Ensure all "must-have" requirements regarding data protection, security, and transparency are met before deploying AI into production. This creates a solid foundation for future regulation.
✅ Recommendation: Invest in AI Literacy for Your Executives
The best safeguard against unknown risks is knowledge. Educate your management and board on the fundamentals of AI, its potential, and, most importantly, its risks. An IPSO certificate in AI Business, which I personally hold, provides precisely this practical understanding. Only when your executives understand the technology can they make informed decisions about implementation, governance, and risk management. This is not a technical task but a strategic necessity for any modern SME.
Conclusion: The Liability Question as a Wake-Up Call for Swiss SMEs
OpenAI's intention to limit liability for AI damages is more than just a footnote from Silicon Valley. It's a clear wake-up call for every Swiss SME that uses or plans to use AI. The days of relying on the software developer's liability are over. The responsibility for the safe and compliant use of AI systems increasingly lies with the user.
This doesn't mean you should forgo the potential of AI. Quite the opposite: the efficiency gains mentioned in the facts box are real and crucial for the competitiveness of SMEs. However, it does mean you need to approach the implementation and operation of AI solutions with new diligence and a sharpened risk awareness. Proactive action, informed decisions, and a clear strategy are more important now than ever.
Here are three key takeaways for you as a Swiss SME executive:
- ✅ Understand Liability Risks: Recognise that the primary liability lies with you if AI systems make errors, cause data breaches, or produce discriminatory results. A thorough review of T&Cs is essential.
- ✅ Proactively Secure Yourself: Opt for customised solutions where appropriate, and implement robust internal guidelines, human oversight, and a detailed requirements catalogue. A systematic approach, like our 6-step framework, minimises risks.
- ✅ Keep an Eye on Regulatory Developments: Pay attention to the nDSG and developments surrounding the EU AI Act. A proactive compliance analysis will protect your company from future legal pitfalls.
I am happy to assist you in clarifying these complex issues and developing a secure, efficient AI strategy for your company. Get in touch for a no-obligation initial consultation.
Frequently Asked Questions
Why does OpenAI not want to be liable for damages caused by AI?
OpenAI has put forward a legislative proposal intended to exempt it from liability for damages caused by its AI systems.
What does this mean for Swiss SMEs?
Swiss SMEs that use AI must address the question of who bears the risk when the AI makes mistakes and causes damage.
Who is responsible when an AI makes a mistake?
Responsibility for errors made by AI systems is a central and still unresolved question, and OpenAI's move makes it even more pressing.