Reasoning Models, European AI Strategy, and the Future of Intelligence (Panel)

Jun 8, 2025

The Brief: Last year, Panoramai's stage introduced small language models and their use cases. The trend is now clear: every player is releasing its own mini and/or open-source models, some already serving critical transactions inside organisations. Alongside this, an exponential growth in coding capability is underway and could create a double opportunity for Europe to catch up with the USA and China. What are the key ingredients to build Europe's future software champions?

Session: Track 4 - Thinking Models
Date: June 4, 2025
Moderator: Sarah Luvisotto

Opening and Panel Introduction

Sarah Luvisotto, fresh from Lisbon and moderating this final panel of the day, energizes the room before introducing an exceptional lineup. The panel brings together Manuel Gustavo Isaac, an in-house philosopher at the Geneva Science and Diplomacy Anticipator (GESDA) who examines what it means to be human in the AI context; Pierre-Carl Langlais, co-founder of Pleias, offering deep insights into AI development challenges; and Philippe Van Caenegem, partner at Evident, focusing on organizational AI integration strategies.

Sarah sets ambitious expectations for the discussion ahead, hoping attendees will leave energized with new concepts around agentification, small language models, and potential ingredients for the next European AI champion.

The Shift from Language Models to Reasoning Systems

Pierre-Carl opens with a fundamental reframe of current AI development, emphasizing reasoning models over traditional language models. He describes a pivotal transformation in how the industry views these systems.

He explains that for a long time, the focus was on creating language models to predict the next word, but now the field is recognizing these systems as reasoning and logic machines. Through training on vast text corpora, these models have absorbed heuristics and logical relationships - capabilities originally sought through symbolic AI approaches.

The emergence of reasoning models represents more than incremental progress. Pierre-Carl notes they're not just conversational chatbots but could become replacements for many rule-based machine learning methods commonly used in industry. This shift brings three critical advantages: interpretability through reasoning traces, improved accuracy via reinforcement learning, and natural fit for agentic applications.

He emphasizes the technical foundations, explaining that these models are trained on mostly synthetic data because they focus on tasks like navigating between pages - daily activities that aren't documented anywhere, requiring simulation. Combined with reinforcement learning frameworks borrowed from gaming, these models learn not just to write, but to act and navigate toward objectives.

At Pleias, they're training remarkably small models at GPT-2 scale with 300 million parameters that demonstrate surprising effectiveness in this new paradigm. Their recent model for retrieval-augmented generation showcases multi-step workflows: assessing user queries, determining if additional sources are needed, preparing responses, and deciding whether to provide answers or politely decline.

European AI Scenarios: The Good, Bad, and Ugly

Philippe presents three strategic scenarios for AI's future impact, particularly from a European perspective, developed through strategic foresight methodologies.

The Good - Renaissance 3.0: In this optimistic scenario, humanity overcomes global challenges thanks to intelligent tools that provide insights into human nature. AI helps restructure resources toward energy solutions and education at unprecedented scale and cost. Drawing on anthropologist David Graeber's concept of "bullshit jobs" - by Philippe's estimate, around 60% of economic activity that could disappear without meaningful impact - this scenario sees AI eliminating meaningless work and freeing humans for more purposeful activities.

The Bad - Over-Optimization: This scenario warns of efficiency without purpose, where systems optimize everything into metrics. Philippe invokes Drucker's observation that there is nothing so useless as doing efficiently that which should not be done at all: the systems end up optimizing processes that should never have existed in the first place. The result is a humanity managed by systems that optimize away creativity, critical thinking, and meaningful choice.

The Ugly - Systemic Collapse: Multi-agent systems competing globally create an arms race where nobody wants to be the least capable agent. This leads to infrastructure sabotage and short-circuiting of essential systems including energy and water supply.

For Europe specifically, Philippe outlines contrasting paths. In the positive scenario, Europe finds its role and identity, embraces regulation, and achieves equilibrium between technological adoption speed and societal adaptation speed. Europe becomes the global leader in balanced AI governance, creating stability and economic prosperity.

The negative European scenario positions the continent as well-meaning laggards where over-regulation stifles innovation and economic competitiveness. In the worst case, there's total technological divergence between Europe and other regions, leaving European consumers defenseless in automated commercial interactions.

Philosophical Frameworks for AI Governance

Manuel Gustavo introduces GESDA's comprehensive approach to anticipating technological breakthroughs. The Science Breakthrough Radar covers over 40 emerging scientific topics, identifying 300+ breakthroughs across 5-25 year timeframes, informed by 2000+ leading scientists globally.

GESDA's anticipation work centers on three fundamental philosophical questions about human identity, societal cooperation, and planetary sustainability. These questions have crystallized into the "Planetarized Humanity" initiative, recognizing that anticipated breakthroughs will fundamentally reshape human relational conditions.

Manuel emphasizes the need to anticipate not only scientific and technological changes but also evolving assumptions about human nature itself. Without this broader anticipation, aligning technology with human values and societal needs becomes extremely difficult.

He stresses the global scope required, aiming to provide leaders across diplomatic, policy-making, philanthropic, business, and citizen communities with appropriate conceptual frameworks and practical tools for managing emerging disruptions.

Regulation as Innovation Enabler

The panel addresses whether European regulation supports or stifles AI innovation. Philippe advocates for regulation close to actual practitioners who experiment and innovate, emphasizing the need to experiment rather than stifle innovation while maintaining oversight of those actively developing systems.

Pierre-Carl, drawing from Pleias's experience as one of 20 providers audited by the European Commission for the AI Act, strongly supports the regulatory framework. He argues that regulation is essential for market development, as many sectors won't act without clear frameworks, whether from the European Commission or through internal industry rules like banking and telecommunications.

He identifies the "Brussels Effect" as a strategic advantage, where European rules can influence global standards. However, he notes critical challenges around the rapid pace of AI transformation and the Act's origins in traditional machine learning rather than large language models.

Manuel adds that regulation can serve as a business enabler by creating trust with users and clients, though implementation complexity creates imbalances across the value chain. He emphasizes that governance frameworks themselves need agility to adapt to AI's dynamic evolution.

Trust, Transparency, and the Data Imperative

Manuel Gustavo provides crucial philosophical distinctions around trust in AI systems. He explains the difference between epistemic trust based on expected truthfulness and expertise, which requires transparency in human-AI interaction, and moral trust based on expected benevolence, which presupposes moral agency.

The moral notion of trust can actually prevent transparency because « it rests on something that some philosophers have liked to call an unquestioning attitude from the truster toward the trustee ». He gives the example of trusting your kid: you do not ask them to be fully transparent to you; you simply, if the word is not too connoted, trust them blindly. When users apply moral trust concepts to AI systems designed for epistemic trust, this creates significant conceptual misalignment between the stakeholders who design, deploy, and use these systems.

Pierre-Carl emphasizes transparency as fundamental to market development, noting that many sectors need frameworks or accidents will occur and confidence will erode. He advocates for democratic oversight of AI systems, especially as they evolve toward agentic applications managing critical infrastructure.

He stresses that when algorithms make decisions, we need to know exactly how those decisions were made, and emphasizes the need for better visibility into reasoning traces, particularly for systems handling network management, production, and manufacturing processes.

The European AI Opportunity

Addressing concerns about European competitiveness, Pierre-Carl draws parallels to software industry evolution, noting how the industry initially assumed everything would remain proprietary before discovering the need for auditable, visible systems that could be examined and built upon.

The same dynamics drive current interest in small models and open-weight approaches. He argues Europe could succeed in this space but needs active investment in new systems and approaches, not just regulation. Currently, similar development efforts are primarily happening in China, raising questions about soft power at a geopolitical scale.

Philippe positions Switzerland's trustworthy brand as a unique advantage, while Pierre-Carl warns against the European tendency to focus only on applications rather than training technology. He emphasizes that large companies need to own the underlying technology, not just apply existing models.

The panel identifies a psychological block in European investment communities that assumes Europe can only develop applications rather than conduct fundamental AI research. Breaking this mindset becomes crucial as the focus shifts toward industrial deployment rather than conversational applications.

Practical Guidance for Decision Makers

In closing rapid-fire advice for business leaders, the panel offers concrete guidance:

Philippe emphasizes the fundamental importance of data strategy: « It's super boring. It's data. Get your data flywheel right. It's just so basic. This whole debate about AGI, the dangers, the big models: it doesn't matter. Just get the data right, get the data flywheel going, and you will create tons of value; gradually all the problems will be solved. It's all about the data, nothing else. »

Pierre-Carl extends this focus, urging organisations to actually look at their data: many never examine what sits inside their master datasets, what users have been querying, or what they have actually indexed. He extends the same advice to the model itself: look at the data it was trained on and ask what the sources are.

Manuel Gustavo provides the human-centered counterpoint: happy to follow up, not to disagree but to add. The technical aspect will certainly cover a chunk of the challenges ahead, but do not forget the societal and anthropological dimensions of using and deploying the systems you are about to run. Keep that perspective too.

The panel concludes having delivered both technical depth and strategic vision, providing the Panoramai audience with concrete frameworks for navigating AI's transformative impact on European business and society.

More info on:

Philippe Van Caenegem, Partner at Evident

Pierre-Carl Langlais, co-founder of Pleias

Manuel Gustavo Isaac, Senior Program Manager in science anticipation at the Geneva Science and Diplomacy Anticipator (GESDA)

Sarah Luvisotto, Moderator

See the full program of the day