Why Swedish Legaltech Startups Are Avoiding Chinese AI Models
Many big law firms hesitate to use Chinese AI models due to client concerns over data privacy and security. Transparency is key when adopting AI in legal workflows.

Why Some Big Law Firms Are Hesitant About Chinese AI Models
Max Junestrand, CEO and co-founder of Swedish legaltech startup Legora, recently shared insights on the cautious stance many large law firms take toward Chinese large language models (LLMs). Speaking on the Tech.eu podcast, he explained that most of Legora’s legal partners prefer not to use Chinese AI models because of client concerns, particularly when those clients are governments or major financial institutions.
Junestrand noted: “Most big law firms that we work with would not like to leverage Chinese models yet. They want to be able to very clearly explain to their clients which in turn might be governments, large financial institutions. AI is moving really fast. They are not really used to moving at that pace and so, to throw a Chinese model into the mix, just gets a bit tricky.”
Security and Privacy Concerns Around Chinese AI
Earlier this year, China’s DeepSeek LLM made headlines by topping app download charts and triggering a sell-off in US tech stocks. Other Chinese AI competitors include Alibaba’s Qwen models. Despite this strong market presence, concerns around data security and privacy have made some legal professionals wary of adopting these solutions.
How Legora Uses AI in Legal Practice
Legora’s platform helps lawyers research and review legal documents by leveraging LLM technology. The startup has integrated AI deeply into its own operations. According to Junestrand, AI writes about 70% of Legora’s code, and the company uses AI tools across marketing, legal, and sales teams.
With over 250 law firm clients, Legora recently closed an $80 million Series B funding round, bringing its valuation to $675 million. This growth reflects the increasing adoption of AI tools in legal workflows, albeit with careful consideration of model origins and client expectations.
Practical Takeaways for Legal Professionals
- When choosing AI tools, consider the source of the models and potential client concerns about data privacy and security.
- Transparency about the AI systems you use can build trust, especially with sensitive clients like governments and financial institutions.
- Integrating AI into legal research and document review can improve efficiency, but aligning adoption with client comfort levels is key.
For legal professionals looking to expand their AI knowledge and skills, targeted AI courses tailored to legal jobs can provide practical insights into effective implementation and help keep you informed on current tools and trends.