The summit brought together world leaders, tech executives, and experts to discuss AI's future, with French President Emmanuel Macron advocating for a balance between innovation and regulation. As the U.S. under the Trump administration pushes for minimal oversight to foster innovation, and China flexes its AI muscles, Europe is reassessing its approach to ensure it doesn't fall behind in this rapidly advancing field.
Yet the question of the day is not really “innovation vs. ethics,” as commonly framed. A more accurate question is how to build “innovation plus trust.” Much like the dawn of the internet—where an initial free-for-all gave rise to trusted giants like Google—AI’s near-term challenge is building user trust. Ethics remains crucial in the long run, but until Europe creates AI systems people genuinely trust and rely on, the conversation around ethical guardrails is premature and could actually do more harm than good.
This insight should guide and focus Europe’s strategies at this crucial moment.
The Global Regulatory Divergence
The EU’s regulatory framework, while ethically robust, risks cementing its position as an AI user rather than a developer. Contrast this with the U.S., where the new administration has doubled down on AI competitiveness, loosening regulatory guardrails to prioritize innovation. This divergence isn’t just philosophical—it’s financial. Developing AI requires immense compute power, and the U.S. tech sector’s deep pockets give it a structural edge.
But here’s the twist: The emergence of cost-efficient AI models like DeepSeek’s R1—which reportedly operates at a fraction of traditional costs—is reshaping the playing field. Smaller chip firms such as Cerebras and Groq are thriving by supporting these leaner models, proving that innovation isn’t exclusive to Big Tech. For Europe, this signals an opportunity to catch up rapidly, if it can position itself as the region that develops AI with transparent data practices, greater user control over outputs, and more reliable safety standards, differentiating itself from global competitors who focus primarily on speed and scale.
The Economics of Democratisation
DeepSeek’s disruption highlights a pivotal trend—AI’s democratisation. By leveraging open-source models and alternative chips, startups are achieving parity with established players. This mirrors the early internet era, where competition and accessibility drove progress. Europe can learn from this:
- Foster Competition: The 1996 Telecom Act’s focus on universal access offers a blueprint; Europe should prioritise policies that prevent AI monopolies and incentivise open-source collaboration.
- Build Strategic Partnerships: DeepSeek’s traction with Chinese chip makers like Moore Threads shows the power of local ecosystems. Europe should nurture similar alliances between startups and hardware innovators.
- Rethink Cost Structures: If AI’s true costs make it commercially unviable (as with many U.S. hyperscale-dependent models), Europe must bet on efficient alternatives. DeepSeek’s R1—trained without cutting-edge GPUs—proves this is possible.
Alongside these strategies, embedding trust in AI platforms becomes a powerful competitive differentiator. Just as Google won user loyalty in the early days of the internet not by being first but by being the trusted source for information, trustworthy AI systems will eventually win, shifting the market toward providers who earn user confidence.
Regulation as a Catalyst, Not a Barrier
The EU’s strict rules could paradoxically become an advantage. By mandating transparency, Europe can position itself as the global standard-bearer for trustworthy AI—a selling point for markets wary of unchecked innovation. But this requires nuance:
- Avoid Overreach: Heavy-handed compliance demands could stifle startups. Instead, align regulations with infrastructure support—tax breaks for GPU clusters, grants for open-source projects.
- Embrace Watchdog Tech: Tools like algorithmic auditors can enforce existing laws without throttling progress. Think of it as “regulation by design.”
- Prioritise Trust: Trust is not built just by having the best capability (although that is a key factor), but by not letting commercial imperatives run roughshod over common decency and fairness.
The Path Forward
Europe’s future in AI hinges on three actions:
- Invest in Compute Sovereignty: Partner with nimble chip firms to build cost-effective infrastructure, reducing reliance on U.S. or Chinese providers.
- Double Down on Talent: Scale AI literacy programs and attract researchers through incentives—much like Germany’s push for semiconductor expertise.
- Build Trusted AI as a Brand: Market GDPR-like compliance as a competitive edge for European startups targeting global enterprises.

Even as Europe advances on the ethical front, it must not lose sight of the trust dimension, which, in these formative stages, can determine the winners and losers. Without user confidence, even the most ethically impeccable AI systems will struggle to gain traction. By contrast, a trusted AI ecosystem that proves its value in the marketplace will be well-positioned to address ethical challenges at scale.
The rise of DeepSeek isn’t just a challenge—it’s a wake-up call. By blending principled regulation with strategic infrastructure investments, Europe can shift from being a regulatory watchdog to a global AI contender.
The goal isn’t to mimic Silicon Valley or Shenzhen, but to carve a third way: where trust and innovation aren’t trade-offs, but mutually reinforcing pillars.
Adit Abhyankar is the CEO and Co-Founder of Breakthrough.