
Lukas Huber
Founder & AI Strategist
MyLovely.ai data leak exposes 106,000 users. What Swiss SMEs must learn about AI security.
Key Takeaways
- MyLovely.ai data leak: 106,000 user credentials surfaced on the dark web.
- The affected platform offers AI "girlfriends."
- An alarming signal for Swiss SMEs in their handling of AI.
106,000 user accounts have surfaced on the dark web. We're not talking about an obscure financial platform, but MyLovely.ai, an AI platform offering "AI girlfriends." What might seem at first glance like a niche problem for a specific user group reveals itself, upon closer inspection, as an alarming signal for any Swiss SME working with Artificial Intelligence today or tomorrow.
This incident, which was publicised by Heise in April 2026, drastically highlights the potential pitfalls of AI usage. It's not just about users' personal data, but about fundamental trust in the security of digital platforms. Especially in Switzerland, where data protection holds a high priority thanks to the revised FADP (Federal Act on Data Protection), we must ask ourselves: How secure are our companies' sensitive information when fed into AI systems that don't always meet the highest security standards?
The euphoria surrounding AI is palpable. Almost half (45%) of Swiss SMEs now see AI as a clear advantage for their business. However, this enthusiasm must not blind us to the risks. While more and more companies are using AI for automation, data security often lags behind. This is a dangerous imbalance that we, as the Swiss economy, cannot afford to ignore.
📊 Key Facts at a Glance:
- 45% of Swiss SMEs consider AI an advantage for their business operations. (Source: kmu.admin.ch, 2026)
- 34% of companies use AI to automate specific work steps – up from 23% in 2024. (Source: DeepCloud, 2026)
- Only one-third of companies using AI have a clear data protection policy for handling data. (Source: AXA, 2025)
- The most common areas of AI application for Swiss SMEs are translation (52%) and correspondence (47%). (Source: DeepCloud, 2026)
What specific risks does using AI platforms pose to my SME's data?
The risks are manifold and extend far beyond a simple data leak. It's a fallacy to believe that just because you're not using an "AI girlfriend," you're safe from such incidents. Every AI platform you feed company data into – whether for document translation, correspondence optimisation, or more complex analyses – is a potential source of a leak.
The primary risk is direct data exfiltration. As with MyLovely.ai, criminals can exploit vulnerabilities to intercept sensitive information. For an SME, this could mean customer data, trade secrets, financial information, or even intellectual property falling into the wrong hands. Imagine your latest product plans, summarised by AI, or confidential customer communications ending up on a dark web platform. The damage would then be not only financial but also immense in terms of reputation. Such a leak can irrevocably destroy the trust of customers and partners.
Another, often underestimated, risk is the unintentional data disclosure by the AI model itself. Many generic AI models learn from the data they process. If your confidential company information contributes to improving such a model, there's a risk that this information might reappear in future queries from other users. The phenomenon of "data contamination" is real. Your internal strategy papers could thus unintentionally become part of the training dataset and later appear in the AI's responses to third parties. This is a massive breach of trade secrets and confidentiality.
Not to be forgotten are the compliance risks. The revised Swiss Federal Act on Data Protection (FADP) is clear: Companies are responsible for protecting personal data, even when they transmit it to third parties, including AI providers. A data leak at an AI platform you use is your problem. Fines can be substantial, not to mention the bureaucratic effort and reputational damage. Many SMEs are unaware of what data they are actually sending to which AI services and where this data is ultimately stored or processed. This lack of transparency is a ticking time bomb. Furthermore, data might be transferred to countries that do not offer an adequate level of data protection, which, without appropriate safeguards, constitutes a clear violation of the FADP.
⚠️ Warning: Blind Trust is Costly
Do not rely on the general data protection promises of AI providers. Many standard AI services are not designed to handle sensitive company data. Without explicit contractual assurances and technical measures, you risk your data being used for AI training or stored in insecure infrastructures. This can lead to a serious data leak and hefty fines.
How can I ensure my company data is protected even when using AI tools?
Data protection with AI tools requires a proactive and multi-layered approach. Simply accepting the terms of use is not enough. You need to take active steps to protect your data. The first step is careful selection of the AI provider. Examine their security standards, data protection policies, and data storage locations. A Swiss hosting provider subject to the FADP clearly has an advantage here. Explicitly ask whether your data will be used for model training and if there's an option to opt out.
Secondly, clear contractual agreements are essential. A simple "End User License Agreement" is usually not sufficient to meet FADP requirements. You need a Data Processing Agreement (DPA) that details how the provider may handle your data, what security measures they implement, and what rights you have as the data controller. This contract must also explicitly state compliance with Swiss data protection law and clarify liability issues in the event of a data leak. Many SMEs shy away from the effort, but skimping here is a false economy.
Thirdly, focus on data minimisation and pseudonymisation. Only transmit the absolutely necessary data to the AI platform. Where possible, anonymise or pseudonymise sensitive information before feeding it into the AI. This means personal or business-critical identifiers are removed or replaced with placeholders. For example, you can replace customer names and addresses with generic IDs if the AI is only intended to perform statistical analyses. This significantly reduces the risk even if a data leak does occur, since leaked data can then no longer be attributed to individuals, or only with difficulty.
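As a rough illustration of this pseudonymisation step, the sketch below replaces e-mail addresses and known customer names with stable placeholder IDs before text ever leaves the company. The name list, the e-mail pattern, and the placeholder scheme are illustrative assumptions, not a production-ready solution; a real deployment would draw identifiers from your CRM and use a vetted de-identification tool.

```python
import hashlib
import re

# Illustrative only: in practice, known customer names would come from your CRM.
KNOWN_NAMES = ["Anna Meier", "Beat Keller"]

def placeholder(value: str, kind: str) -> str:
    """Derive a stable, non-reversible placeholder ID from the original value."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]
    return f"[{kind}-{digest}]"

def pseudonymise(text: str) -> str:
    """Replace e-mail addresses and known customer names with placeholder IDs."""
    # E-mail addresses: a deliberately simple pattern, sufficient for this sketch.
    text = re.sub(
        r"[\w.+-]+@[\w-]+\.[\w.]+",
        lambda m: placeholder(m.group(0), "EMAIL"),
        text,
    )
    for name in KNOWN_NAMES:
        text = text.replace(name, placeholder(name, "CUST"))
    return text

msg = "Anna Meier (anna.meier@example.ch) complained about the invoice."
print(pseudonymise(msg))
```

Because the placeholders are derived deterministically, the same customer always maps to the same ID, so the AI can still detect patterns across feedback items without ever seeing a real name.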
Fourthly, implement internal policies and train your employees. Even the best technical security is useless if employees carelessly enter sensitive data into public AI tools. Create clear instructions on which AI tools may be used for which type of data and which are prohibited. Inform your teams about the risks and the importance of data protection. Regular training minimises human errors, which are often the biggest vulnerability.
| Feature | Standard Cloud AI (e.g., Major US Providers) | Specialised/Protected AI Solution (e.g., Swiss Hosting) |
|---|---|---|
| Data Storage | Often globally distributed, primarily in the US or EU. Limited location choice. | Focus on Swiss data centres, subject to Swiss law. |
| Data Usage for Model Training | By default, often used for model improvement; opt-out options can be complex or missing. | Explicit assurance that data will not be used for model training, or strict anonymisation. |
| Legal Framework | Primarily subject to the law of the hosting country (e.g., CLOUD Act in the US). | Subject to the Swiss Federal Act on Data Protection (FADP), stricter rules for data security. |
| Data Processing Agreement (DPA) | Standard DPAs are often generic and not always tailored to specific Swiss requirements. | Individual or standardised DPAs that explicitly consider the FADP and offer higher guarantees. |
| Support and Transparency | Often standardised support; transparency regarding data flows can be limited. | More personal support, higher transparency regarding data processing and security. |
| Costs | May appear cheaper at first glance, but with hidden compliance costs. | Often higher initial costs, but lower long-term risk costs and greater legal certainty. |
What measures must my SME take to protect itself from data leaks similar to MyLovely.ai?
A comprehensive protection concept requires technical, organisational, and legal precautions that go beyond mere trust in the provider. It's not enough to hope for the best; you must prepare for the worst. The first step is establishing clear data governance for AI. Only one-third of companies using AI have such a policy. This is an alarming figure that urgently needs to be rectified. Data governance defines which data may flow into which AI systems and how, who is responsible, and how potential incidents are handled. This also includes classifying your data by sensitivity so that different security levels can be applied accordingly.
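One way to make such a classification operational is a simple policy gate that checks a document's sensitivity class against the AI services cleared for it. The class names, service names, and the mapping below are purely hypothetical assumptions for illustration; your own governance policy defines the actual categories.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1        # e.g., published marketing material
    INTERNAL = 2      # e.g., internal memos without personal data
    CONFIDENTIAL = 3  # e.g., customer data, trade secrets

# Hypothetical mapping: which sensitivity classes each AI service may receive.
ALLOWED = {
    "public-cloud-ai": {Sensitivity.PUBLIC},
    "swiss-hosted-ai": {Sensitivity.PUBLIC, Sensitivity.INTERNAL},
    "on-premise-ai": {Sensitivity.PUBLIC, Sensitivity.INTERNAL,
                      Sensitivity.CONFIDENTIAL},
}

def may_send(service: str, level: Sensitivity) -> bool:
    """Return True only if the service is cleared for this sensitivity class."""
    return level in ALLOWED.get(service, set())

print(may_send("public-cloud-ai", Sensitivity.CONFIDENTIAL))  # False: blocked
print(may_send("on-premise-ai", Sensitivity.CONFIDENTIAL))    # True: allowed
```

Even a lightweight gate like this, wired into the tools employees actually use, turns an abstract governance policy into an enforceable rule rather than a document nobody reads.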
Secondly, conduct a risk assessment for every AI application. Before implementing a new AI solution, analyse the potential data protection risks. What kind of data is processed? How sensitive is this data? What would be the impact of a data leak? This assessment helps you make informed decisions and define the necessary protective measures. It's comparable to the diligence you would exercise when selecting a new business partner.
💡 Tip: Checklist for Choosing an AI Provider
- Is the provider based in Switzerland and do they operate servers there?
- Is there a clear Data Processing Agreement (DPA) that considers the FADP?
- Is your data used for training the AI model? If so, is there an opt-out option?
- What certifications (ISO 27001, SOC 2) does the provider hold?
- How transparent is the provider regarding its security architecture and incident management processes?
- Does the provider offer data anonymisation or pseudonymisation features?
Thirdly, implement robust incident response management. What happens if a data leak does occur? A clearly defined plan for emergencies is crucial. This plan should specify who needs to be informed (authorities, affected parties), what steps will be taken to contain the leak, and how communication will be handled. A swift and transparent response to an incident can significantly minimise damage and preserve the trust of your stakeholders. Many SMEs have such plans for traditional IT systems but often forget to extend them to AI applications.
Fourthly, seek expert advice. If you lack the necessary in-house expertise, consult external data protection officers or specialised consultants. They can help you design your AI strategy to be FADP-compliant, conduct risk assessments, and negotiate the right contracts. It's an investment that pays off many times over in the event of an incident. Lukas Huber, as founder of schnellstart.ai and an experienced practitioner with an IPSO certification in AI Business, sees significant room for improvement here for many Swiss SMEs.
💡 Practical Example: Anonymisation Before AI Deployment
A medium-sized Swiss financial services SME wanted to use AI to analyse customer feedback to improve service processes. Instead of feeding raw data with names, addresses, and account numbers directly into the cloud AI, the company developed an internal process. All personal data was replaced with generic IDs beforehand, and sensitive financial details were aggregated or removed. Only the anonymised text snippets of the feedback were sent for AI analysis. This allowed the SME to leverage the benefits of AI without the risk of a customer data leak and while complying with FADP requirements.
Fifthly, continuous monitoring and adaptation. The AI landscape is evolving at breakneck speed. What is secure today may present a vulnerability tomorrow. Stay informed about new threats and technologies. Regularly review your security measures and adjust them as needed. This is an ongoing process, not a one-time task.
🎯 Recommendation: Establishing an Internal AI Competence Centre
For SMEs looking to use AI strategically, setting up a small, internal competence centre or at least designating a responsible person is advisable. This person or group should regularly engage with topics of AI security, data protection, and compliance. They serve as a point of contact for employees, evaluate new tools, and ensure adherence to internal and external policies. This creates a central point of contact and increases awareness throughout the company.
Conclusion: AI is an Opportunity, but Only with Foresight and Diligence
The incident at MyLovely.ai is a clear warning: The use of AI platforms carries serious risks for data security that Swiss SMEs cannot afford to ignore. The benefits of AI are undeniable, but they come with the responsibility to comprehensively protect our company data and that of our customers. Those who blindly feed data into any AI tool today are playing with fire and risk not only financial penalties but also the loss of trust and reputation.
It's time for Swiss SMEs to align their AI strategy not just with efficiency gains, but primarily with data security and compliance. Only then can we responsibly harness the opportunities of Artificial Intelligence and equip our companies for the digital future.
✅ Create Transparency: Know exactly what data you are sending to which AI services and where it is being processed.
✅ Review Contracts: Secure yourself with FADP-compliant Data Processing Agreements with your AI providers.
✅ Raise Employee Awareness: Regularly train your teams on the secure use of AI and data.
Need support with the data protection-compliant implementation of AI in your SME? Contact us for a no-obligation initial consultation.
Frequently Asked Questions
What happened at MyLovely.ai?
MyLovely.ai suffered a data leak in which 106,000 user credentials surfaced on the dark web.
Why is this data leak relevant to Swiss SMEs?
The incident highlights the risks of using AI platforms and is an important signal for any Swiss SME that uses or plans to use AI.
What kind of platform is MyLovely.ai?
MyLovely.ai is an AI platform offering "AI girlfriends."