Julien Groselle
Achieving 99% Accuracy Through Decentralized Human-AI Collaboration
At the Panoramai AI Summit, Julien Groselle, CTO of Dignow, addressed the data crisis undermining AI deployments across industries. With more than a decade in cryptocurrency and blockchain technology, he presented a solution that combines AI automation with decentralized human verification to reach 99% accuracy at scale.
The Data Crisis Reality

Groselle opened with the industry's sobering truth: "Today, users, developers, businesses, and investors are all looking for AI models or AI applications, but the most important thing in AI is data." He quantified the problem: "80% of AI deployments are failing because of bad data. This is really important, because with a bad data set, it is really hard to build something trustworthy."
Infrastructure Analogy

His framework positioned data as foundational infrastructure: "Data today is a bit like electricity for electric cars. All of us may want a Tesla, but then the whole system needs to have electricity available for those cars. It's the same for AI: we all need data sets, and we all need specific data sets, in health, in crypto, and so on. You will see crypto many times in my presentation; I can explain why afterward."
Hybrid Validation Architecture

Groselle's technical approach combined automation with human oversight: "What we are building now is data infrastructure that collects and verifies data at scale, with AI and computing power, of course, because otherwise scale is impossible, but also with humans. With humans, we can reach accuracy; with machines, we can raise the speed."
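This machine-speed, human-accuracy split can be sketched as a simple routing rule: records the machine is confident about pass through automatically, while the uncertain tail is escalated to a human. The `Record` fields, confidence threshold, and `human_review` callback below are all invented for illustration; the talk did not describe Dignow's internals.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Record:
    value: str
    confidence: float  # machine's self-assessed confidence in [0, 1]
    human_checked: bool = False

def human_review(record: Record) -> Record:
    # Placeholder for routing the record to a human verifier;
    # here we simply mark it as checked and fully trusted.
    return replace(record, confidence=1.0, human_checked=True)

def hybrid_validate(records, threshold=0.9):
    """Machines provide speed; humans provide accuracy on the uncertain tail."""
    return [
        r if r.confidence >= threshold else human_review(r)
        for r in records
    ]
```

The threshold is the lever between throughput and verification cost: lowering it sends more records to the (slower, more accurate) human path.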
Multi-Agent Orchestration System

His implementation leveraged coordination across multiple agents: "We are building a multi-agent system, but not a typical one; what we are really doing is orchestration. We orchestrate between Web 2.0 services and Web 3.0 services, so talking to blockchains, and also talking to AI agents, with MCP, as an MCP client, why not, or A2A, why not. It's really easy to implement today."
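One way to picture orchestration across such heterogeneous sources is a single dispatch layer with a handler registered per source type (Web 2.0 API, blockchain, MCP agent). Every handler name and return shape below is an assumption for illustration; the real system's interfaces were not shown in the talk.

```python
from typing import Callable, Dict, List

handlers: Dict[str, Callable[[str], dict]] = {}

def source(kind: str):
    """Register a fetch handler for one kind of data source."""
    def register(fn):
        handlers[kind] = fn
        return fn
    return register

@source("web2")
def fetch_web2(query: str) -> dict:
    # A real handler would call a REST API here.
    return {"source": "web2", "query": query}

@source("web3")
def fetch_web3(query: str) -> dict:
    # A real handler would read from a blockchain RPC endpoint here.
    return {"source": "web3", "query": query}

@source("mcp")
def fetch_mcp(query: str) -> dict:
    # A real handler would act as an MCP client toward another agent here.
    return {"source": "mcp", "query": query}

def orchestrate(query: str, kinds: List[str]) -> List[dict]:
    """Fan one query out to every requested source type."""
    return [handlers[k](query) for k in kinds]
```

The registry pattern keeps the orchestrator agnostic: adding a new source type (say, an A2A peer) means registering one more handler, with no change to the dispatch logic.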
Trust Core Innovation

The technical foundation emphasized multi-layered trust assessment: "We are using AI agents to collect and validate data through public APIs, private APIs, depending on the customer, of course the internet, but also MCP servers. Then we developed a Trust Core, and what is really important is that we apply this Trust Core on many layers: on the sources, but also on the data points, and also on the crypto projects."
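One plausible reading of a Trust Core applied "on many layers" is a score combined from separate assessments at the source, data-point, and project layers. The weighted sum and the weights below are purely an assumption to make the idea concrete; Groselle did not disclose how Dignow combines its layers.

```python
def layered_trust(source_score: float,
                  datapoint_score: float,
                  project_score: float,
                  weights=(0.4, 0.4, 0.2)) -> float:
    """Combine trust assessed at three layers into one score in [0, 1].

    Each input is a trust value in [0, 1]; the weights (assumed here)
    control how much each layer contributes to the combined score.
    """
    scores = (source_score, datapoint_score, project_score)
    for s in scores:
        if not 0.0 <= s <= 1.0:
            raise ValueError("trust scores must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))
```

A stricter variant could take the minimum across layers instead, so that one untrustworthy layer (for example, a dubious source) caps the whole score.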
Trust Score Clarification

Groselle drew an important distinction about what is being measured: "Keep in mind that this trust score is not a judgment of the thing itself. When we evaluate the team of a crypto project, the trust score is not about whether the team is good or not; of course not, that would be an analysis of the project, and that is not what we are doing. We assess the trustability of the data point itself. So if the trust score of the team is good, it means that the information we have about this team is accurate."
Decentralized Community Results

His results demonstrated the hybrid model's effectiveness: "If it's not, then there is the tasker we developed, a web application where our decentralized community can come and verify data for us. They are spread all over the world; we have more than 5,000 people. We closed beta one last week, and with this community we collected and verified 13,000 data points in one month. So the model works very well."
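A toy version of such a tasker might queue contested data points and mark one verified only after a quorum of independent community verdicts agrees. The quorum size of 3, the class shape, and the verdict model are all assumptions for illustration, not a description of Dignow's application.

```python
from collections import defaultdict

REQUIRED_AGREEMENT = 3  # assumed quorum of independent verifiers

class Tasker:
    """Collects community verdicts on data points needing human verification."""

    def __init__(self):
        # point_id -> {verifier_id: verdict}; dict keys enforce
        # one verdict per verifier per data point.
        self._votes = defaultdict(dict)

    def submit(self, point_id: str, verifier_id: str, is_accurate: bool):
        self._votes[point_id][verifier_id] = is_accurate

    def status(self, point_id: str) -> str:
        verdicts = list(self._votes[point_id].values())
        if verdicts.count(True) >= REQUIRED_AGREEMENT:
            return "verified"
        if verdicts.count(False) >= REQUIRED_AGREEMENT:
            return "rejected"
        return "pending"
```

Requiring agreement from several independent verifiers is what lets a decentralized crowd of strangers produce trustworthy labels: no single participant can verify a data point alone.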
Accuracy Achievement

The quantified results validated the approach: "Something really important is that we reached 99% accuracy on a crypto data set." He explained the domain focus: "Why crypto? Because it has been in my co-founder's and my DNA for more than 10 years now. We started with this focus, but we can leverage this for any data: it could be cars, it could be movies, and of course health, food, beverage, whatever."
AI-Native Operations

Groselle's company exemplified thorough AI integration: "If we are talking about the product, I think 90% of the company was built with AI. Every document we have internally is produced with the three different AIs we use. We have a large grant from Google, so we are using Gemini 2.5 Pro, and we are also using Anthropic's Claude and, most of the time, ChatGPT. These three for everything. Everything, even sending an email."
Key Takeaway: Groselle argued that sustainable AI deployment requires foundational investment in data infrastructure, combining automated collection with decentralized human verification to achieve trustworthy accuracy at scale, and positioning high-quality data as the essential infrastructure for reliable AI applications.