Pierre-Carl Langlais

The Shift to Reasoning Models

At the Panoramai AI Summit in Lausanne, Pierre-Carl Langlais delivered a compelling presentation on the fundamental transformation happening in AI development - the evolution from language models to reasoning systems. As co-founder of Pleias, he provided both technical insights and strategic perspective on this paradigm shift.

Redefining AI: From Language to Logic

Pierre-Carl opened with a deliberate reframing: « I'm going to talk to you a bit about designing reasoning models. So stress on the reasoning part, because you can notice I haven't talked about language models, and that's kind of intentional on my side. » This wasn't merely a semantic distinction but a fundamental reconceptualization of what the industry has built.

He explained the philosophical shift: « For a long time we thought, okay, we are creating language models. So the idea was to predict the next word given the previous words. But now what we're seeing is actually we have some kind of reasoning machine, logic machine. » Through training on vast text corpora, these models have absorbed logical relationships and heuristics that symbolic AI researchers sought for decades.

The implications prove transformative: « The key shift is that models are not really only chatbots. They're not just being thought of for conversational use cases. What's now really, really interesting is the fact they could become actually a replacement for many of the rule-based machine learning methods commonly used in the industry. »

Technical Innovation at Pleias

Pierre-Carl detailed the technical foundations enabling this transformation: « They are being trained on data which is mostly synthetic, for a simple reason: we are really interested, especially for agents, in tasks like, okay, you go from one step to the other, you go from one page to the other. A lot of things we're doing on a day-to-day basis, but there is nothing written about it actually. So you need to kind of simulate it. »
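To make the idea concrete, here is a minimal, hypothetical sketch of what "simulating" such navigation data could look like: a toy site graph, a random walk from a start page toward a goal, and the resulting trajectory serialized as training text. All names (`simulate_navigation`, the site structure) are illustrative, not Pleias's actual pipeline.

```python
import random

def simulate_navigation(pages, start, goal, seed=0, max_hops=10):
    """Generate one synthetic page-to-page trajectory.
    pages: dict mapping each page to the pages it links to."""
    rng = random.Random(seed)
    path, current = [start], start
    while current != goal and len(path) < max_hops:
        links = pages[current]
        # Take the goal link when available, otherwise wander.
        current = goal if goal in links else rng.choice(links)
        path.append(current)
    return path

def to_training_text(path):
    """Serialize the trajectory as step-by-step text a model can learn from."""
    return "\n".join(f"Step {i}: open '{p}'" for i, p in enumerate(path))

site = {
    "home": ["docs", "pricing"],
    "docs": ["api", "home"],
    "api": ["docs"],
    "pricing": ["home"],
}
trace = simulate_navigation(site, "home", "api")
print(to_training_text(trace))
```

Running many seeded walks over many graphs yields large volumes of the step-by-step traces that, as the quote notes, simply do not exist as written text on the web.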

The second critical ingredient involves reinforcement learning: « It's really logical if you think about models that don't just write, but act and go from one step to another. And this is the kind of framework that helps to assess not just writing the next words, but following a path to an actual objective. »
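The contrast between scoring "the next words" and scoring "a path to an objective" can be sketched with a toy outcome-based reward, the kind of signal RL-style fine-tuning optimizes. This is a simplified stand-in, not Pleias's training code; `outcome_reward` and `best_of_n` are hypothetical names.

```python
def outcome_reward(trajectory, goal):
    """Score a whole multi-step trajectory by its outcome,
    not by how plausible each intermediate step looked."""
    return 1.0 if trajectory and trajectory[-1] == goal else 0.0

def best_of_n(candidates, goal):
    """Keep the candidate trajectory that actually reaches the goal:
    a bare-bones stand-in for RL-style selection and credit assignment."""
    scored = [(outcome_reward(t, goal), t) for t in candidates]
    return max(scored, key=lambda pair: pair[0])

candidates = [
    ["home", "pricing", "home"],  # fluent steps, but never arrives
    ["home", "docs", "api"],      # reaches the objective
]
reward, best = best_of_n(candidates, "api")
print(reward, best)  # 1.0 ['home', 'docs', 'api']
```

The point of the sketch: a next-word objective would happily reward the first trajectory's individually plausible steps, while an outcome reward only pays off when the agent arrives.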

At Pleias, they're proving that scale isn't everything: « We are training very small models. Think the size of GPT-2, for instance, so 300 million parameters. So it's really tiny. But what we have been seeing actually in this new paradigm is that, while very small, they can be very effective. »

Their approach demonstrates practical advantages: « If you want to have an agent model, not just something which is being used for conversation, you need to generate many more tokens actually, because you want to have it, for instance, plugged into your infrastructure and go through a lot of processes to be able to find an answer. So it's becoming necessary to have good small models. »
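A back-of-envelope calculation shows why generating "many more tokens" favors small models. The step counts, token counts, and the 70B comparison model below are assumed for illustration (only the 300M figure comes from the talk); the 2-FLOPs-per-parameter-per-token rule is a common rough estimate for decoder-only inference.

```python
def generation_flops(params, tokens):
    # Rough decoder-only inference cost: ~2 FLOPs per parameter per token.
    return 2 * params * tokens

# Illustrative assumptions, not measured figures:
chat_tokens = 500          # a single chat reply
agent_tokens = 20 * 500    # an agent run chaining ~20 steps of ~500 tokens

small = 300e6   # ~GPT-2-sized model, as mentioned in the talk
large = 70e9    # a typical large open-weight model, assumed for contrast

cost_small_agent = generation_flops(small, agent_tokens)
cost_large_chat = generation_flops(large, chat_tokens)
print(f"small-model agent run:  {cost_small_agent:.1e} FLOPs")
print(f"large-model chat reply: {cost_large_chat:.1e} FLOPs")
print(f"ratio: {cost_large_chat / cost_small_agent:.0f}x")
```

Under these assumptions, a 300M model can run an entire 20-step agentic pipeline for roughly a tenth of the compute a 70B model spends on one chat reply, which is the economic case for "good small models" in infrastructure.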

Transparency as Competitive Advantage

Pierre-Carl positioned Pleias's commitment to open training data as both ethical imperative and strategic advantage: « Until now nobody talked about the data. If you look at the model cards from all the big labs, the data section was super small, just telling, okay, we trained on web text and all the rest, mostly because it was kind of a trade secret. Plus there were all these issues, copyright and so on. »

The reasoning model paradigm changes these dynamics: « Now what we are seeing is that with this kind of reasoning model, there are new use cases opening up: it can be used in the infrastructure, it can be used for classification, it can be used for standardization of data, all these types of things. But it creates more need for auditability, for transparency. »

This led to Pleias's major commitment: « That's why we've made a big commitment at Pleias to only train on open and releasable text, which was the genesis of Common Corpus. And today we just released the official paper for it. It's the largest pre-training dataset composed only of text either from the public domain or under permissive licenses, with the license listed everywhere. »

European AI Policy Leadership

As one of only 20 providers audited by the European Commission for the AI Act, Pierre-Carl brought authoritative perspective to regulatory discussions: « We were one of the few who actually did take the position that some regulation was indeed [needed]. And it's still something I believe, especially just to build a market to start with, because in many sectors they won't do anything without some kind of framework. »

He identified the Brussels Effect as Europe's strategic advantage: « I do think it's definitely an area where regulation can be genuinely constructive, because it means we can have a Brussels effect, with the fact that, okay, Europe can pass some rules which can then actually have a worldwide influence. So it makes sense. »

However, he noted critical implementation challenges: « The main issue is the speed of transformation that we have in AI right now. Right now there is a lot of discussion about codifying pre-training. I'm not completely sure we'll still have pre-training as it is right now in the coming year, honestly at this point. »

Pierre-Carl advocated for the EU's tiered approach: « I do believe the core idea of the AI Act, with differentiated market access, is really good. And the fact that, okay, if you really want access to all the markets in the EU, you need to be more and more stringent about how you're going to communicate the rules about training. I do think it's the right approach and the right balance right now between regulation and dynamism. »

Strategic Vision for European Competitiveness

Pierre-Carl drew powerful parallels to software industry evolution: « I think especially open source and open access, because it happened already in software 20 years ago. The idea was, okay, maybe it was only going to be proprietary, because Microsoft was super powerful and so on. And then suddenly they discovered that especially in many industries there is a need for audits, for visibility. So you need to have systems which you can actually see and you can build upon. »

He positioned Europe's regulatory approach as potentially advantageous: « We're seeing the same needs right now in AI. That's the main reason right now we're seeing so much interest suddenly in small models. Open-weight models: it's the same dynamic, the same factors are still there. And I do believe that Europe could tackle it. »

However, he emphasized the need for active investment: « But for this you need to think not only about regulation, but about actively investing in this new system, this new approach. Because right now it's being done not in the US but in China. And it raises a real question of soft power on a geopolitical scale if we don't really push in this direction. »

Addressing European Investment Psychology

Pierre-Carl identified a critical mindset barrier: « Right now I do think it's almost maybe a psychological block in Europe, which I see actually a lot in the investment communities, this thinking, okay, we can only do applications right now. We cannot do actual applied research. »

He traced this to structural differences: « Maybe it's linked to the long-standing division between research and industry, which we're seeing much more in Europe than in the US. But it's definitely something that needs to be overcome if we really want to move in this direction. »

The urgency became clear in his conclusion: « Because definitely the next two years are going to be about industrial deployment for AI, not about chatbots. »

Practical Business Guidance

Pierre-Carl's closing advice to business leaders was characteristically direct: « So look at the data. Not just how to make it, but just look at what is inside. In most of the projects I've been seeing, they don't even look, for instance, at what users have been querying, or what datasets they actually have indexed. And I would extend this to the model itself: you need to look at the data from which it was made. What is the source? »

Key Takeaway: Pierre-Carl positioned reasoning models as the foundation for AI's next phase, where small, transparent, and specialized systems will enable trusted enterprise deployment while supporting Europe's vision of ethical, auditable artificial intelligence. His technical innovations at Pleias, combined with his regulatory expertise, demonstrate how European companies can compete globally by embracing transparency as competitive advantage rather than constraint.

Pierre-Carl Langlais (also known as Alexander Doria) is the co-founder of Pleias, a French AI startup pioneering ethically trained language models. With a background in digital humanities and a passion for open science, he leads the development of AI systems trained exclusively on open data through the "Common Corpus" initiative—a collection of public domain and openly licensed content. His work focuses on creating small but powerful "reasoning models" optimized for specific tasks like retrieval-augmented generation (RAG), with native citation capabilities and multilingual support. The recently released Pleias-RAG model family represents his vision for responsible AI that respects copyright while maintaining high performance. Before founding Pleias, he served as Head of Research at opsci and published extensively on digital humanities topics, including legal aspects of text mining and Wikipedia. Through his blog "Vintage Data," he regularly shares insights on LLM research, training methodologies, and the shifting paradigms in AI development, advocating for an approach where AI functions as a commons rather than proprietary technology.