
Lukas Huber
Founder & AI Strategist
Anthropic accidentally leaks source code. What does this human error mean for Swiss SMEs and their data security?
A big name, an embarrassing incident: American AI company Anthropic, known for its high security standards, accidentally published a substantial part of its secret source code. Over 500,000 lines of code and almost 2,000 files ended up online unintentionally. According to NZZ Wirtschaft, the cause was human error, an incident that tarnishes the image of a supposed security pioneer.
This incident, however remote it may seem from day-to-day business at first glance, is more than just a footnote in the tech world. It serves as a clear warning signal for any Swiss SME dealing with the implementation of Artificial Intelligence or already relying on third-party AI solutions. What happens if similar errors occur with one of your service providers? Or, more critically: if they happen within your own company?
The reality is: even the largest and most security-conscious players are not immune to human error. For Swiss SMEs, this means that a healthy dose of scepticism and a robust in-house security strategy are indispensable. It's not just about what the AI provider promises, but about what you can do yourself to protect your data and your business.
📊 Facts at a glance:
- Code Leak: Over 500,000 lines of code and almost 2,000 files were accidentally published by Anthropic. (Source: t3n, 2026)
- AI Usage: Software development dominates the AI usage landscape, with debugging and technical problem-solving each accounting for around 6% of API traffic. (Source: Anthropic, 2025)
- Media Attention: NZZ reports on the incident, highlighting the importance of robust security protocols for all technology companies, including Swiss SMEs. (Source: NZZ Wirtschaft, 2026)
- Regulatory Pressure: The NIS-2 Directive is pressuring thousands of companies to meet deadlines to avoid penalties, underscoring the growing regulatory landscape for cybersecurity and software development. (Source: csoonline.com, 2026)
What concrete measures can my Swiss SME take to prevent similar unintentional code releases?
Focusing on internal processes, training, and technical safeguards is crucial. The Anthropic incident is a clear example that even highly sophisticated companies are vulnerable to human error. For Swiss SMEs, often operating with limited IT resources, prevention is therefore doubly important.
First, a culture of accountability must be established. This means that every employee working with sensitive code or data must understand the potential risks and adhere to the relevant protocols. Regular training on data security, the safe use of AI tools, and your company's specific policies is essential. As courses such as the IPSO certificate in AI Business emphasise, the human component is often the weakest link, yet also the strongest when properly trained and sensitised.
On the technical side, several measures help. Implement strict access controls based on the principle of least privilege: not everyone needs access to everything. Code reviews by at least two people before any release can catch errors early. Use version control systems like Git, but ensure they are configured correctly (for example, appropriate .gitignore rules and private-by-default repositories) to prevent accidental publications. Automated tools for static code analysis and secrets detection in code add a valuable extra layer of security.
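The secrets-detection step can be sketched as a minimal scanner that a pre-commit hook could call. The patterns and function names below are illustrative assumptions, not a complete ruleset; in practice a dedicated tool such as gitleaks or trufflehog covers far more cases.

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of rules.
# These match a few common API-key and private-key formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                      # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), # PEM private key header
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, matched pattern) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, pattern.pattern))
    return findings
```

A pre-commit hook would run `scan_file` over the staged files (e.g., the output of `git diff --cached --name-only`) and abort the commit on any finding.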
Another point is data classification. Not all data is equally sensitive. Classify your data and code according to their confidentiality; this lets you apply protective measures precisely and focus effort where it matters most. A demo bot I developed in my spare time was designed from the outset with data isolation and a modular structure in mind, so that a compromise of one part cannot lead to a comprehensive disclosure.
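Such a classification scheme can be made explicit in code rather than left to convention. The levels, field of application, and policy threshold below are illustrative assumptions for an SME, not an official taxonomy:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Illustrative confidentiality levels, ordered from least to most sensitive."""
    PUBLIC = 0        # marketing texts, published documents
    INTERNAL = 1      # internal wikis, non-personal code
    CONFIDENTIAL = 2  # contracts, proprietary source code
    STRICT = 3        # customer and personal data (FADP/GDPR scope)

# Hypothetical policy: which levels may be sent to external AI tools at all.
MAX_LEVEL_FOR_EXTERNAL_TOOLS = Sensitivity.INTERNAL

def may_use_external_ai(level: Sensitivity) -> bool:
    """Only data at or below the policy threshold may leave the company."""
    return level <= MAX_LEVEL_FOR_EXTERNAL_TOOLS
```

Because the levels are ordered, a single comparison enforces the policy, and tightening it later means changing one constant rather than auditing every call site.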
💡 Tip: Checklist for secure AI implementation in SMEs
- Access Control: Who has access to AI models, data, and code? Only what is necessary.
- Training: Regular sensitisation of employees on data security and handling AI tools.
- Code Reviews: Every code change, especially for AI models, must be reviewed by a second person.
- Hosting: Host sensitive data and AI models in a secure Swiss environment (e.g., Infomaniak Geneva).
- Emergency Plan: What to do in case of a data breach or unintentional publication?
How does this incident affect the trustworthiness of AI providers for Swiss companies that rely on security?
The incident necessitates a more critical examination of providers and a stronger emphasis on transparency and auditability. Trust in AI providers is of utmost importance for Swiss SMEs, especially in regulated industries. A leak like Anthropic's shakes this trust. It shows that even a company positioning itself as particularly security-conscious is not infallible.
For SMEs, this means they need to look even more closely when selecting their AI partners. Don't blindly rely on marketing claims. Ask for specific security certifications, audit reports, and internal processes for error handling. How are code changes managed? What protocols are in place for handling sensitive data? Where is the data hosted? For Swiss companies, Swiss hosting with GDPR compliance is a non-negotiable criterion.
A structured vendor comparison is essential here. One must be able to understand and evaluate the technical details. In my work, I have often seen that the choice of technology stack – be it a RAG framework like LangChain, a Vector DB like Supabase, or Infomaniak AI's LLM API – significantly impacts security. A tailor-made solution based on open-source components and hosted on Swiss servers often offers more control and transparency than a black-box solution from a large provider.
| Criterion | Standard Cloud AI Integration (e.g., large US providers) | Customised Swiss RAG Solution (e.g., with schnellstart.ai) |
|---|---|---|
| Data Sovereignty & Hosting | Often hosting outside of Switzerland (USA, EU), subject to the US CLOUD Act. | Exclusive Swiss hosting (e.g., Infomaniak Geneva), 100% GDPR-compliant. Full control over data location. |
| Transparency & Control | Black-box approach, little insight into internal processes, algorithms, and security mechanisms. | Open-source components (e.g., LangChain, Supabase), transparent architecture, code auditability. |
| Customisation & Scalability | Limited customisation options, often "one-size-fits-all" solutions. | Highly customisable to specific SME needs, scalable for future growth and new use cases. |
| Legal Compliance | Complexity in GDPR compliance due to international data flows. | Easier to ensure GDPR compliance through local operation and dedicated infrastructure. |
| Cost Structure | Often subscription-based, costs can skyrocket with high usage. | Potentially higher initial investment, but more controllable operating costs in the long run and no vendor lock-in. |
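The customised RAG approach in the table can be illustrated with a minimal, dependency-free sketch: the document store stays local, and only the retrieved passages would be sent to the LLM. The keyword-overlap scoring and the sample documents are deliberate simplifications I am assuming for illustration; a production setup would use vector embeddings (e.g., via LangChain with Supabase/pgvector) and a Swiss-hosted LLM endpoint.

```python
# Minimal RAG sketch: retrieval runs locally; only the matched snippets
# would ever be included in the prompt sent to the LLM.
DOCUMENTS = {
    "vat_rates.txt": "The standard Swiss VAT rate is 8.1 percent as of 2024.",
    "deadlines.txt": "Tax returns in Thurgau are generally due by 31 March.",
    "office.txt": "Our office is closed on public holidays.",
}

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real systems score vector embeddings."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), text)
        for text in DOCUMENTS.values()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:top_k] if score > 0]

def build_prompt(query: str) -> str:
    """Assemble the prompt: only the retrieved context leaves the local store."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key architectural point is that the full knowledge base never leaves your infrastructure; the LLM only ever sees the few snippets relevant to the current question.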
⚠️ Warning: Not all promises hold true!
Don't just rely on marketing claims from AI providers. Many promise the highest security, but the Anthropic case shows that human errors can happen anywhere. Critically review contracts and security concepts. Ask for independent audit reports and demand clear commitments regarding GDPR compliance and data hosting in Switzerland. An insufficient contract can have costly consequences in case of an incident.
What legal and ethical implications arise for Swiss SMEs when they use AI software from third-party providers that exhibit such security vulnerabilities?
SMEs bear a shared responsibility for data breaches, even if caused by third parties, and must proactively manage compliance risks. The Swiss Federal Act on Data Protection (FADP) and the European NIS-2 Directive (which also affects Swiss companies with cross-border relevance) make it clear: the responsibility for protecting sensitive data does not end at the contractual boundary with a third-party provider. An SME using AI software is jointly responsible if data is leaked or compromised through this software.
Legal consequences can include fines and claims for damages. The NIS-2 Directive is pressuring thousands of companies to meet deadlines to avoid penalties. This means that not only the AI providers but also their customers are obligated to review their cybersecurity and that of their supply chains. A compliance team or a data protection officer within the SME, even if it is a part-time role, must actively shape and monitor AI governance policies. Frameworks such as the DSFA (Datenschutz-Folgenabschätzung, the data protection impact assessment required under the FADP and GDPR) and RACI (Responsible, Accountable, Consulted, Informed) are valuable tools here.
Ethical implications are equally serious. A data breach can irrevocably destroy the trust of customers and partners. It's about your company's reputation and credibility. Imagine customer data from a fiduciary firm such as Huber Treuhand GmbH being made public through a third-party leak. The financial damage would be one thing; the loss of trust another, far more serious issue. Therefore, it is important to view AI governance not just as a tedious obligation, but as a strategic competitive advantage.
Implementing a robust AI governance system within the SME is not just a recommendation, but a necessity. An AI Governance Board or Ethics Committee is needed to oversee strategic direction, establish policies for fairness, transparency, and data protection, and approve critical AI decisions. Representatives from IT, Compliance, and Management should be part of it. Only then can it be ensured that AI usage is not only efficient but also responsible and compliant with the law.
💡 Practical Example: AI Governance at Huber Treuhand GmbH
Huber Treuhand GmbH, a Thurgau-based SME with 8 employees and over 320 mandates, plans to introduce an "AI Tax Mentor" to support its tax advisory services. Lukas Huber, the founder, has already developed a demo bot. To minimise the risks of a leak like Anthropic's, Huber Treuhand GmbH has taken the following steps:
- Internal Policies: Creation of clear rules for handling customer data by the AI Tax Mentor, including pseudonymisation and anonymisation where possible.
- Provider Assessment: Instead of opting for a standard cloud solution, a Swiss AI freelancer was commissioned to professionalise the demo bot using a RAG framework based on open-source technologies (LangChain, Supabase) and Infomaniak AI (Claude/GPT-4). Hosting is exclusively on Infomaniak servers in Geneva.
- Roles & Responsibilities: A small "AI Ethics Team" consisting of Lukas Huber, the Data Protection Officer, and the IT Manager has been established. This team monitors compliance with FADP regulations and the fair use of AI.
- Continuous Training: All employees receive regular training in data security and the specific functions of the AI Tax Mentor to minimise human error sources.
This approach allows Huber Treuhand GmbH to leverage the benefits of AI while ensuring data security and compliance according to Swiss standards.
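The pseudonymisation mentioned in the internal policies above can be sketched as a keyed hash: identifying fields are replaced by stable tokens before any data reaches the AI layer, and the key never leaves the company. The field names and key handling are illustrative assumptions, not Huber Treuhand's actual setup.

```python
import hashlib
import hmac

# The secret key must stay inside the company (e.g., in a local key store);
# without it, the pseudonyms cannot be linked back to real clients.
PSEUDO_KEY = b"replace-with-a-real-secret-from-your-key-store"

IDENTIFYING_FIELDS = {"name", "email", "ahv_number"}  # illustrative field list

def pseudonymise(record: dict) -> dict:
    """Replace identifying fields with stable, keyed pseudonyms (HMAC-SHA256)."""
    out = {}
    for field, value in record.items():
        if field in IDENTIFYING_FIELDS:
            digest = hmac.new(PSEUDO_KEY, str(value).encode(), hashlib.sha256)
            out[field] = f"pseu_{digest.hexdigest()[:16]}"
        else:
            out[field] = value
    return out
```

Because the same input always yields the same token, records stay linkable across the AI Tax Mentor's sessions without ever exposing the client's identity to the model.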
✅ Recommendation: Proactive AI Security Strategy
Don't wait for an incident to happen. A proactive security strategy is no longer an option for Swiss SMEs using AI, but a necessity. Integrate security from the outset into your AI implementation strategy. This means:
- Security by Design: Plan security measures during the conceptual phase of your AI projects.
- Regular Audits: Have your AI systems and processes externally audited to identify vulnerabilities.
- Incident Management: Develop a detailed plan for data breaches or security incidents.
- Partner Selection: Choose AI providers carefully and ensure their security standards meet or exceed your own.
The Anthropic incident unequivocally shows us: human errors are unavoidable. But the impact of these errors is not. For Swiss SMEs, this means maintaining control over their data and their AI systems wherever possible. Opt for transparency, Swiss hosting, and a clear governance structure.
Digitalisation with AI offers enormous opportunities for efficiency and time savings. However, these opportunities must not come at the expense of security. Those who act proactively now not only protect their data and reputation but also strengthen the trust of their customers and secure their company's future viability in an increasingly data-driven world.
Key takeaways for your SME:
- ✅ Manage the Human Factor: Invest in training and clear processes to minimise sources of human error.
- ✅ Critically Assess Providers: Don't rely on promises; demand transparency and carefully examine security concepts and hosting locations.
- ✅ Strengthen Your Own Governance: Implement a robust AI governance system that covers legal, ethical, and technical aspects to avoid shared responsibility in case of leaks.
Would you like to elevate the security of your AI implementation to Swiss standards and minimise compliance risks? We would be happy to advise you on your options without obligation. Contact us for an initial consultation.