Technology · 29 March 2026 · 8 min read

    Intelligence Data for AI? What the Pentagon Plans with LLMs – What Does This Mean for Swiss SMEs?

    Lukas Huber

    Founder & AI Strategist

    The Pentagon is training AI with military data. What does this mean for Swiss SMEs, their data, and their competitiveness in a globalised world?

    The news sounds like something out of a spy thriller: The Pentagon is reportedly planning to train AI models with classified military data. A highly sensitive development that extends far beyond military applications. As the US realigns its defence strategy, Swiss SMEs face a crucial question: What do such global developments mean for our local economy, our data, and our competitiveness?

    It's a paradoxical situation. Precisely at a time when the acceptance of Artificial Intelligence in Switzerland is clearly on the rise – 45% of Swiss SMEs now consider AI an advantage for their business operations, an increase of 10 percentage points year-on-year – new, complex questions about data sovereignty and security are emerging. The opportunity that 60% of Swiss SMEs see in AI must be linked to a clear strategy for data sovereignty and compliance. Only then can the potential risks arising from the use of global AI models be effectively minimised.

    The Pentagon's plans are a wake-up call. They highlight that data used to train AI models is of immense importance – not only for military intelligence but also for the integrity and security of every single company relying on AI. For Swiss SMEs, often characterised by precision, confidentiality, and quality, it is crucial to fully understand the implications of these developments and act proactively.

    📊 Facts at a Glance:

    • 45% of Swiss SMEs now consider AI an advantage for their business operations, up from 35% in 2024. (Source: kmu.admin.ch, 2025)
    • 60% of Swiss SMEs see AI as an opportunity for their business. (Source: kmu.admin.ch, 2025)
    • The Pentagon plans to train AI models with classified military data. (Source: Golem.de, 2026)
    • The rapid adoption of commercial AI tools by the Pentagon could impair military personnel's ability to distinguish fact from fiction. (Source: LetsDataScience.com (based on defenseone.com), 2026)

    What specific security risks do Swiss SMEs face when using AI models potentially trained on sensitive data?

    The risks are manifold, ranging from unintentional data exposure to manipulated information. When global AI models, potentially trained on sensitive, perhaps even classified, data, find their way into commercial applications, significant uncertainties arise for Swiss SMEs. It cannot be ruled out that such models contain subtle biases or even "memories" of their training data, which could then resurface in generated content or analyses.

    A primary security risk is so-called "data leakage" through inference. Even if an SME does not directly input its own sensitive data into a public AI model, the model could, through clever queries (prompt engineering), reveal information it learned during its training from similar, but originally classified, sources. This affects not only military secrets but potentially also trade secrets, customer lists, or financial data, if such information was included in the training datasets or processed in similar contexts. The model's ability to distinguish between fact and fiction, as reports about the Pentagon suggest, is a central vulnerability here.
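The inference-leakage risk described above can be reduced with a pre-flight check on outgoing prompts. The following is a minimal sketch, assuming a simple regex screen for e-mail addresses, Swiss IBANs, and AHV numbers; the patterns and the `send_to_external_llm` wrapper are illustrative, and a production setup would use a vetted data-loss-prevention tool instead of hand-rolled regexes.

```python
import re

# Illustrative patterns only -- a real deployment would use a proper
# DLP library with patterns vetted for the company's own data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\bCH\d{2}[ ]?(\d{4}[ ]?){4}\d\b"),   # Swiss IBAN
    "ahv": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),       # Swiss AHV/AVS number
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def send_to_external_llm(prompt: str) -> str:
    """Refuse to forward any prompt that contains sensitive data."""
    hits = check_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked, contains sensitive data: {hits}")
    # ... here the vetted prompt would be forwarded to the provider ...
    return "forwarded"
```

Such a gate does not eliminate inference risk, but it prevents the most common failure mode: an employee pasting raw customer records into an external chat window.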

    Furthermore, there is the danger of "hallucinations" or the spread of misinformation. If a model was trained on biased or unreliable data, it can generate false or misleading answers that appear plausible at first glance. For a Swiss SME using AI for market analysis, customer communication, or decision-making, this could lead to costly errors or significant reputational damage. The origin and quality of the training data are crucial for the reliability of AI outputs. It is a fallacy to believe that a model that "knows everything" always tells the truth.

    ⚠️ Warning: Uncontrolled Data Exposure

    Never use sensitive company data (customer information, financial data, trade secrets) in public, unregulated AI models. Even if providers assure you they won't use your data for training, control over data processing and potential third-party access is often unclear. A single careless prompt can be enough to reveal confidential information, thus violating the Swiss Federal Act on Data Protection (FADP).

    Another often underestimated risk is dependency on external providers. If an SME relies heavily on a global AI model operated by a foreign provider, it relinquishes a portion of its data sovereignty and control over the technology used. Changes in terms of service, data processing policies, or even geopolitical tensions could directly impact business operations. The AI supply chain, from training data to algorithm, must be transparent and traceable to manage such risks. Trust in technology requires trust in its origin and operation.

    How can Swiss SMEs leverage the benefits of AI without compromising data sovereignty and confidentiality?

    The key lies in a conscious strategy focused on local control, transparency, and robust data protection practices. Swiss SMEs do not have to forgo the advantages of AI simply because global models pose risks. There are viable ways to achieve efficiency gains and innovation through AI without relinquishing control over their own, often highly sensitive, business data.

    One of the most effective strategies is the use of private or on-premise hosted AI solutions. Instead of relying on large, publicly accessible Large Language Models (LLMs), SMEs can use smaller, specialised models trained on their own data. These models then run either on their own infrastructure or in a dedicated, secure Swiss cloud environment. This ensures that data remains physically in Switzerland, is subject to the Swiss Federal Act on Data Protection (FADP), and is not subject to the access rights of foreign authorities. Control over the data and infrastructure remains entirely with the company.

    Comparison: Public Cloud LLMs (e.g., GPT-4) vs. Private / On-Premise LLMs (e.g., Llama 3 on own infrastructure)

    • Data Sovereignty: Public cloud: low to non-existent; data leaves Switzerland and is subject to foreign laws. Private/on-premise: full control; data remains in Switzerland and is subject to the FADP.
    • Security: Public cloud: dependent on the provider; risk of data leakage through inference or third-party access. Private/on-premise: high security through own infrastructure, encryption, and access control.
    • Costs: Public cloud: often low initial costs, but ongoing per-usage costs that are difficult to calculate at high volumes. Private/on-premise: higher initial investment (hardware, setup), but lower variable costs per usage.
    • Flexibility: Public cloud: broad functionality, but little customisation to specific company data or processes. Private/on-premise: high customisation to specific use cases and the SME's own training data.
    • Compliance: Public cloud: challenging, as often not FADP-compliant and no guarantee that Swiss law applies. Private/on-premise: easier, thanks to full control over data processing and adherence to the FADP.
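In practice, the private/on-premise option can be as simple as a model served behind a local HTTP API. The sketch below assumes an Ollama-compatible endpoint on localhost and the model name `llama3`; both are assumptions for illustration, not a prescribed stack, and the endpoint and model should be adapted to the company's own setup.

```python
import json
import urllib.request

# Assumed: a locally hosted model behind an Ollama-compatible API.
# Endpoint and model name are illustrative; adapt to your own setup.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Build the request payload for a local, non-streaming generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the locally hosted model; data never leaves the host."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint resolves to the company's own infrastructure, no prompt or document ever crosses the network perimeter, which is the core of the sovereignty argument in the comparison above.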

    Another important step is the consistent anonymisation and pseudonymisation of data before it even enters an AI model. This applies even to private models. Personally Identifiable Information (PII) should be removed or replaced with placeholders as early as possible. This minimises the risk that sensitive data, even in the unlikely event of a security incident, can be linked to individuals. This requires careful data architecture and processes designed for data protection from the outset.
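The anonymisation step can be sketched as follows. This minimal example handles only e-mail addresses via regex; real PII removal would also need named-entity recognition for names, addresses, and identifiers. The `pseudonymise` helper and its placeholder scheme are illustrative, but they show the key design point: the placeholder-to-original mapping is returned separately so it can be stored under strict access control, keeping the pseudonymisation reversible only for authorised staff.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace e-mail addresses with stable placeholders.

    Returns the cleaned text plus the placeholder->original mapping,
    which must be stored separately under strict access control.
    """
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        original = match.group(0)
        # Reuse the same placeholder for repeated occurrences.
        for placeholder, value in mapping.items():
            if value == original:
                return placeholder
        placeholder = f"<PERSON_{len(mapping) + 1}>"
        mapping[placeholder] = original
        return placeholder

    return EMAIL.sub(repl, text), mapping
```

Only the cleaned text enters the AI model; the mapping never does.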

    💡 Tip: Data Governance as a Foundation

    Before implementing AI, define a clear data governance strategy. Specify which data may be collected, stored, processed, and deleted. Determine responsibilities and establish processes for data classification. Only with a solid data governance foundation can you safely and FADP-compliantly leverage the benefits of AI. This protects not only your data but also your company from legal and reputational risks.
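A classification rule from such a governance strategy can be encoded directly, so that it is enforced in code rather than only documented. The classes and keyword lists below are hypothetical placeholders; a real policy would be derived from the company's own data governance framework, not hard-coded keywords.

```python
# Illustrative data classes and rules; a real policy would be defined
# by the company's data governance strategy, not hard-coded keywords.
CONFIDENTIAL_KEYWORDS = {"salary", "iban", "medical", "customer list"}
INTERNAL_KEYWORDS = {"offer", "project plan", "meeting notes"}

def classify(document_text: str) -> str:
    """Assign the most restrictive matching class to a document."""
    text = document_text.lower()
    if any(k in text for k in CONFIDENTIAL_KEYWORDS):
        return "confidential"
    if any(k in text for k in INTERNAL_KEYWORDS):
        return "internal"
    return "public"

def allowed_in_public_llm(document_text: str) -> bool:
    """Only 'public' documents may be sent to an external model."""
    return classify(document_text) == "public"
```

Embedding the policy in a single function like this also makes it auditable: the rules that decide what may leave the company live in one reviewable place.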

    Furthermore, SMEs should scrutinise AI service providers and products carefully. Inquire about the hosting location, security certifications, data processing policy, and the possibility of training or customising your own models. A Swiss hosting partner committed to local laws and whose data centres are located in Switzerland offers significant additional security. Transparency about the origin of training data and the functioning of the algorithm is also crucial. Lukas Huber, founder of schnellstart.ai, repeatedly emphasises that focusing on Swiss solutions and local expertise is the best way to maintain data sovereignty.

    What regulatory or ethical considerations are relevant for Swiss SMEs when implementing AI solutions potentially based on global or military datasets?

    Compliance with the Swiss Federal Act on Data Protection (FADP) and adherence to ethical principles are non-negotiable. Regardless of the origin of the initial training data, AI solutions used in Switzerland must meet local legal and ethical requirements. The FADP is the central anchor point here.

    The revised FADP, which came into effect in September 2023, imposes high demands on the protection of personal data. For Swiss SMEs, this means they must be able to account at all times for what data they process, for what purpose, and on what legal basis. When AI models are used that are potentially based on global or military datasets, and whose origin and processing methods are unclear, compliance with the FADP becomes extremely difficult, if not impossible. It becomes particularly critical if these models process or generate personal data that cannot be clearly traced back to a clean and transparent data origin. A Data Protection Impact Assessment (DPIA) is essential for high-risk AI applications.

    ✅ Recommendation: Focus on Swissness

    When selecting AI solutions and partners, prioritise providers with Swiss hosting and a clear commitment to the Swiss Federal Act on Data Protection (FADP). This not only minimises regulatory risks but also strengthens the trust of your customers and partners in your data processing practices. Pay attention to transparent data processing agreements and auditable processes.

    Ethical considerations are equally important. AI systems trained on opaque or potentially biased data can perpetuate discrimination or produce unfair outcomes. For an SME using AI in recruitment, credit scoring, or customer outreach, this would have not only legal but also significant ethical and reputational consequences. The responsibility for the decisions an AI makes ultimately lies with the company that uses it. A "black box" AI, whose functioning and data basis are not transparent, is therefore ethically problematic.

    💡 Practical Example: Secure AI Implementation at "TechConnect AG"

    TechConnect AG, a medium-sized Swiss engineering firm with 80 employees, faced the challenge of optimising its internal processes through AI without compromising sensitive customer data. Instead of opting for a public LLM, TechConnect decided to implement a specialised, locally hosted language model. This model was trained exclusively on anonymised internal documents and technical articles. All data remains on servers in a Swiss data centre. The AI now assists engineers in creating quotes and technical specifications by suggesting text modules and searching internal knowledge bases. Customer or project data is never directly entered into the model. Through this strategy, TechConnect was able to increase efficiency by 15% while retaining full control over its valuable company data, which was also positively received by clients.

    The question of AI governance is therefore becoming increasingly pressing. Swiss SMEs must develop internal policies for handling AI that ensure transparency, traceability, and human oversight. This includes training employees in the responsible use of AI, defining clear responsibilities, and implementing mechanisms for reviewing and correcting AI decisions. It's about understanding AI as a tool that must be controlled and monitored, rather than as an autonomous black box. Regulatory requirements are likely to increase rather than decrease in the future, and those who act proactively today will be better positioned tomorrow.

    The Pentagon's plans are a clear signal that data in the digital age is a strategic resource of utmost importance. For Swiss SMEs, this means leveraging AI opportunities with a clear awareness of the risks and a robust strategy for data security and sovereignty. Those who protect their data, protect their business model.

    Conclusion: Seize AI Opportunities, Secure Data Sovereignty

    The Pentagon's ambitions to train AI models with classified data underscore the critical importance of data in the AI era. For Swiss SMEs, this serves as a reminder to proactively secure data sovereignty and compliance when implementing AI solutions. The benefits of AI are immense, but they must not come at the expense of security and confidentiality.

    Focus on Local and Private AI Solutions: Opt for on-premise or Swiss cloud solutions to maintain control over your data and comply with the Swiss Federal Act on Data Protection (FADP).

    Establish Strict Data Governance: Implement processes for anonymisation, pseudonymisation, and classification of data before it enters AI models. Transparency about data origin is crucial.

    Embrace Responsibility and Ethics: Train your employees in the use of AI and ensure that human oversight and ethical guidelines are guaranteed for all AI applications.

    Would you like to learn more about how your SME can implement AI safely and in compliance with regulations? Contact us for a no-obligation consultation to discuss your specific requirements and find the right solutions for your business. Visit schnellstart.ai/en/contact.
