Lukas Huber
Founder & AI Strategist
ChatGPT for legal advice for Swiss SMEs: Opportunities and limitations. Discover where AI helps and where its boundaries lie.
The idea of having legal support at the push of a button sounds appealing. Many Swiss SMEs see potential in Artificial Intelligence that goes beyond simple translations or correspondence. However, a chatbot, no matter how eloquently phrased, is not a lawyer. That's the sober reality.
The euphoria surrounding tools like ChatGPT is immense, with 52% of Swiss SMEs already using AI for translations and another 47% for correspondence. This shows broad acceptance and practical utility in daily business. But when it comes to legal matters, the limits are quickly reached. The question isn't whether AI *can* help, but precisely *where* this help ends and risk begins.
Especially in Switzerland, with its specific legal system and the strict data protection regulations of the revised Data Protection Act (revDSG), companies need to know exactly when they can trust an algorithm – and when consulting a specialised legal professional is indispensable. A well-founded assessment here is not a luxury, but a necessity.
📊 Key Facts at a Glance:
- 52% of Swiss SMEs use AI for translation and 47% for correspondence. (Source: DeepCloud, 2026)
- A market study surveyed 123 SMEs and five large corporations on the use of AI technologies. (Source: FH HWZ, 2026)
- The use of existing AI tools like ChatGPT and Copilot is recommended as a way to minimise risk for SMEs. (Source: SATW, 2026)
- AI chatbots can assist in automating routine tasks in the legal field and providing initial legal information. (Source: Tavily Summary, 2026)
Which Legal Use Cases Are Suitable for AI Tools like ChatGPT in Swiss SMEs?
AI tools are useful assistants for repetitive, information-based tasks, but not for actual legal advice. Swiss SMEs can effectively use ChatGPT and similar systems to create efficiency in certain preliminary legal work. This primarily concerns tasks based on analysing large volumes of text or requiring the generation of standard texts.
Consider information preparation. An SME managing director often needs a quick overview of a specific legal situation. ChatGPT can serve as an intelligent search and summarisation service here. It can identify relevant sections from the revDSG or other Swiss laws, provided the relevant texts are available to it, and summarise them in understandable language. This saves valuable time that would otherwise be spent on manual research.
Another area is standard formulations or contract drafts. For simple, non-contentious documents like Non-Disclosure Agreements (NDAs), basic service agreements, or General Terms and Conditions (GTCs), AI can provide an initial draft. It is crucial here that the AI has been trained on specific, already reviewed templates or can use them as a reference (a technique known as Retrieval-Augmented Generation, or RAG). The AI can then insert parameters such as party names, addresses, or specific service descriptions. This significantly eases initial document creation and reduces the fine-tuning effort for lawyers.
Analysing contracts for specific clauses is also a suitable field. For instance, if a company needs to search all contracts for a specific termination clause or a limitation of liability, AI can perform this task in fractions of a second. The AI scans the documents and highlights the relevant passages. This is particularly helpful during due diligence processes or when preparing for compliance audits. It's about extracting data, not evaluating it legally.
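The clause-scanning step described above can be sketched in a few lines of Python. This is a minimal illustration using simple keyword matching over a hypothetical mini-corpus; a real deployment would use a proper document pipeline, and all file names and clauses here are invented for the example.

```python
import re

# Hypothetical mini-corpus of contract excerpts (illustration only).
contracts = {
    "supplier_a.txt": "Either party may terminate this agreement with 90 days' written notice.",
    "supplier_b.txt": "The total liability of either party shall not exceed CHF 50,000.",
    "nda_partner.txt": "This NDA remains in force for five years after disclosure.",
}

def find_clauses(documents: dict[str, str], pattern: str) -> list[tuple[str, str]]:
    """Return (filename, sentence) pairs whose text matches the given pattern."""
    regex = re.compile(pattern, re.IGNORECASE)
    hits = []
    for name, text in documents.items():
        # Naive sentence split on terminal punctuation followed by whitespace.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if regex.search(sentence):
                hits.append((name, sentence.strip()))
    return hits

# Flag every liability-related clause across the corpus.
for name, clause in find_clauses(contracts, r"liab\w*"):
    print(f"{name}: {clause}")
```

The point of the sketch is the division of labour: the machine extracts candidate passages in seconds, while the legal evaluation of each hit remains with a human.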
Furthermore, AI tools can be helpful in translating legal texts. Given Switzerland's multilingualism and international business relationships, this is a frequent use case. Even though a legally flawless translation often still requires human review, AI can provide a solid foundation that is then refined by a specialised translator or legal professional. The efficiency gains in these areas are real and measurable. They relieve employees of repetitive tasks, allowing them to focus on more complex, value-adding activities.
| Legal Task | Suitability for AI Tools (e.g., ChatGPT) | Suitability for Human Legal Counsel | Rationale |
|---|---|---|---|
| Researching legal articles & case law (initial overview) | ✅ High (if trained on current, relevant Swiss data) | ✅ High (for in-depth analysis and context) | AI can quickly aggregate facts; lawyers interpret and evaluate for the specific case. |
| Drafting standard contracts (e.g., NDA, simple GTCs) | ✅ Medium (as a first draft, requires review) | ✅ High (for tailor-made, legally sound contracts) | AI quickly generates from templates; lawyers adapt to specific business needs and risks. |
| Analysing contracts for specific clauses (e.g., termination periods) | ✅ High (for quick identification) | ✅ High (for evaluating clauses in the overall context) | AI is faster at scanning; lawyers assess the legal implications and risks. |
| Translation of legal texts | ✅ High (as a starting point, often requires fine-tuning) | ✅ High (for precise, legally correct translations) | AI provides quick raw translations; specialised translators/lawyers ensure terminological accuracy and legal compliance. |
| Evaluation of complex legal cases (e.g., mergers, litigation) | ❌ Low (potentially misleading and dangerous) | ✅ Essential | Requires human judgment, experience, strategic thinking, and the ability to navigate grey areas. AI lacks context and the empathetic component. |
| Legal advice on specific, individual problems | ❌ Low (lack of liability, contextual understanding) | ✅ Essential | AI cannot provide personal advice, assume liability, or assess individual risks. |
Practical Example: "Helvetia AG" and the NDA
Helvetia AG, an SME with 40 employees in mechanical engineering, faced the challenge of quickly creating a Non-Disclosure Agreement (NDA) for a new cooperation. Instead of waiting for an external lawyer, the legal department (consisting of one person) used an AI tool. They fed the AI with the core parameters of the cooperation and an NDA template already used internally, which complied with Swiss legal standards. The AI generated an initial draft within minutes. This draft was then reviewed and adapted by the internal legal professional before being presented to the cooperation partner. This saved Helvetia AG approximately 8 hours of work and accelerated the project's start by several days. The final contract was, of course, still reviewed and approved by a human.
How Can Swiss SMEs Ensure that the Use of AI in the Legal Field Complies with Data Protection Regulations (e.g., revDSG)?
Compliance with the revDSG is non-negotiable when using AI in the legal field and requires a proactive strategy. The revised Data Protection Act (revDSG) imposes high demands on the handling of personal data, especially when it comes to particularly sensitive data, as is often the case in legal contexts. Blind trust in external AI services is playing with fire here.
The first step is a comprehensive Data Protection Impact Assessment (DPIA), similar to what is required for high-risk applications under the EU AI Act and mandatory in certain cases under the revDSG. A structured DPIA, typically carried out in a fixed sequence of steps, helps to identify and assess potential risks early on. This involves analysing what type of data is processed, who has access, and what the consequences of a data breach would be. The MoSCoW method can help prioritise requirements, clearly separating mandatory ("Must") and optional ("Should", "Could", "Won't") requirements. Data protection is a "Must".
Data storage is crucial. For Swiss SMEs, this ideally means using Swiss infrastructure. Services that operate their servers in Switzerland offer greater legal certainty and trustworthiness regarding the revDSG. However, many large AI models process inputs on servers abroad, often in the USA. This is problematic for sensitive Swiss data, as the data could then be subject to the US CLOUD Act. One solution is to use models that run either locally (on-premise) or on dedicated Swiss cloud infrastructure where data sovereignty is maintained.
Another essential point is the anonymisation or pseudonymisation of data. When using AI tools for document analysis or text generation, identifiable personal data must not be transmitted to the AI unprotected. Customer-specific requests should never be used directly for AI training unless you have explicit consent or the data is fully anonymised. Techniques like Retrieval Augmented Generation (RAG) can help here, where the AI only accesses internal, controlled, and anonymised knowledge databases instead of operating "freely" on the internet or with external training data.
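To make the pseudonymisation step concrete, here is a minimal Python sketch that replaces e-mail addresses and names from an internally maintained list with placeholder tokens before text is sent to an external AI, keeping a mapping so the originals can be restored afterwards. The function name, regex, and sample data are illustrative assumptions; production systems should use dedicated anonymisation or NER tooling rather than hand-rolled patterns.

```python
import re

def pseudonymise(text: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace e-mail addresses and known personal names with placeholder
    tokens. Returns the cleaned text plus a token-to-original mapping."""
    mapping: dict[str, str] = {}

    def email_sub(match: re.Match) -> str:
        token = f"<EMAIL_{len(mapping) + 1}>"
        mapping[token] = match.group(0)
        return token

    # Replace e-mail addresses first (simple illustrative pattern).
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", email_sub, text)

    # Replace names from an internal, maintained list.
    person_count = 0
    for name in known_names:
        if name in text:
            person_count += 1
            token = f"<PERSON_{person_count}>"
            mapping[token] = name
            text = text.replace(name, token)
    return text, mapping

clean, mapping = pseudonymise(
    "Contract between Anna Muster (anna.muster@example.ch) and the supplier.",
    known_names=["Anna Muster"],
)
print(clean)  # Contract between <PERSON_1> (<EMAIL_1>) and the supplier.
```

Only the tokenised text leaves the company; the mapping stays on internal systems, so the AI provider never sees the identifiable data.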
Defining clear responsibilities within AI governance is also essential. Who is responsible for the data entered into the AI? Who reviews the results? An AI governance board or ethics committee, as I often see in larger companies, can set strategic guidelines and approve critical AI decisions. For SMEs, this might sound excessive, but the principles remain the same: clear rules on who can do what and who bears responsibility are fundamental. The Data Protection Officer plays a central role here in monitoring compliance.
Recommendation: Four Steps for revDSG-Compliant AI Use
- Conduct a DPIA: Before using AI in the legal field, systematically assess data protection risks.
- Prioritise Swiss Hosting: Wherever possible, use AI solutions that process data on Swiss servers.
- Anonymise Data: Ensure that no identifiable personal data reaches the AI unless explicitly permitted and secured.
- Establish Clear Internal Policies: Define who can use which AI tools for which legal purposes and what review processes are necessary.
What Risks Do AI-Powered Legal Advisory Tools Pose to Swiss SMEs, and How Can They Be Mitigated?
The biggest risks are hallucinations, lack of context, and absence of liability – these can only be mitigated through human oversight and clear processes. AI tools like ChatGPT learn from vast amounts of data available on the internet. However, this data is not always current, correct, or relevant to the Swiss legal landscape. This leads to several critical problems.
Perhaps the best-known phenomenon is "hallucination." AI models are trained to generate plausible answers. If they cannot find relevant information, they simply invent facts, legal articles, or precedents that sound convincing but are entirely false. In the legal field, such misinformation can have catastrophic consequences. An SME relying on a legal basis invented by AI risks not only financial losses but also reputational damage and legal repercussions. There is no "second opinion" from the AI to correct errors.
Another problem is the lack of context and the nuances of Swiss legislation. The Swiss legal system is specific and complex. An AI primarily trained on Anglo-American or EU law will struggle to correctly interpret the subtleties of the Swiss Code of Obligations, labour law, or specific industry regulations. The AI does not understand the "spirit" of a law; it merely processes patterns in texts. It cannot judge how a court would decide in a specific case or provide strategic recommendations based on current case law and the SME's individual situation.
The absence of liability is a serious drawback. If a lawyer provides incorrect information, they are liable for it. An AI tool or its developers generally do not assume this liability. An SME relying on faulty AI advice is left to bear the consequences alone in case of damage. There is no insurance to cover the "AI error," and no contact person to take responsibility for the outcomes. This is a fundamental difference from professional legal advice.
How can these risks be mitigated? The key lies in combining AI efficiency with human expertise. AI should always be understood as a tool and never as a substitute for human judgment. Every legal document generated by AI or any advice provided must be reviewed and approved by a qualified legal professional. This is not an option, but a mandatory requirement.
SMEs should also establish internal policies for the use of AI in the legal field. Who is allowed to use which tools? For which tasks? Which review steps are mandatory? A risk matrix that assesses the probability of occurrence and the impact of AI errors can help identify critical applications. An error in internal research has different implications than an error in an external contract. Frameworks like the NIST AI Risk Management Framework offer valuable guidance for systematically managing risks.
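The risk matrix mentioned above can be sketched as a simple likelihood-times-impact scoring. The thresholds, use cases, and labels below are illustrative assumptions, not a prescribed methodology; frameworks such as the NIST AI RMF offer more rigorous guidance.

```python
# Hypothetical AI use cases scored on likelihood and impact of an
# AI error (1 = low, 2 = medium, 3 = high). Illustration only.
use_cases = {
    "Internal legal research summary": (2, 1),
    "AI-drafted NDA sent to a partner": (2, 2),
    "AI answer used as final legal advice": (2, 3),
}

def risk_level(likelihood: int, impact: int) -> str:
    """Classify a use case by the product of likelihood and impact."""
    score = likelihood * impact
    if score >= 6:
        return "critical: human review mandatory, consider prohibiting"
    if score >= 3:
        return "elevated: mandatory legal review before use"
    return "low: spot checks sufficient"

for name, (likelihood, impact) in use_cases.items():
    print(f"{name}: {risk_level(likelihood, impact)}")
```

Even this crude scoring makes the article's point visible: an error in internal research lands in a different risk class than an error in an outbound contract.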
Lukas Huber, founder of schnellstart.ai, repeatedly emphasises the need for transparency. Companies must know how the AI works, what data it uses, and what its potential weaknesses are. Only then can they make informed decisions and maintain control. The use of AI in the legal field requires a high degree of sensitivity and responsibility.
⚠ Warning: The Illusion of Omniscience
Never rely on AI for final legal advice or for signing contracts without human review. AI tools are excellent at aggregating information and generating plausible texts. However, they are incapable of understanding the specific, often grey nuances of a legal case, evaluating the strategic implications for your business, or interpreting current case law in its full complexity. The consequences of incorrect AI advice can be existential. A lawyer is liable for their mistakes – an AI is not.
Tip: Effective Prompting for Legal AI Inquiries
To get the best results from ChatGPT or similar tools, formulate your queries precisely:
- Be Specific: Instead of "Explain the law to me," ask "Summarise the key points of Article 335 of the Code of Obligations regarding termination of employment."
- Provide Context: Always mention the Swiss context, e.g., "regarding the revDSG in Switzerland."
- Request Sources: Ask the AI to back up its information with legal articles or references (e.g., "Cite the relevant articles of the Civil Code (ZGB).").
- Critically Review Results: Always treat the AI's response as a first draft, not as final legal advice. Always verify the information through official sources or legal counsel.
The use of existing AI tools like ChatGPT is recommended by the SATW as a way to minimise risk for SMEs, but always with the implicit understanding that these tools serve as support and not as an independent authority for complex decisions. It's about leveraging AI's potential without relinquishing control or exposing yourself to unnecessary risks.
In summary, AI in the legal field is a double-edged sword for Swiss SMEs. It offers immense efficiency potential for routine tasks and information gathering. At the same time, it carries significant risks if used without human oversight and legal expertise. The key to success lies in intelligent, risk-aware application that leverages AI's strengths while compensating for its weaknesses with human expertise.
Digitalisation is not stopping at the legal sector. Those who want to seize the opportunities must know the rules of the game and master the risks. This means investing in knowledge, defining clear processes, and maintaining human control as an indispensable filter.
✅ AI as an Efficiency Booster: Use AI for research, standard documents, and text analysis to save time and relieve employees.
✅ Data Protection as a Foundation: Ensure revDSG compliance through DPIAs, Swiss hosting, and consistent data anonymisation.
✅ Human Expertise as Control: Every AI-generated legal information must be reviewed and approved by a qualified legal professional. Using AI in critical legal matters without human oversight is negligent.
Would you like to learn more about how to implement AI safely and efficiently in your Swiss SME without incurring legal risks? Contact us for a non-binding initial consultation. We will support you in developing the right strategies and implementing the appropriate tools. Get in touch.
