From Edge Computing to Inference Factories (Panel)

Date: June 5, 2025
Session: Track 8 - Real World AI
Moderator: Raphael Briner, Curator Panoramai
As the Swiss AI landscape grapples with global competition and massive infrastructure investments, three industry leaders at Panoramai's Real World AI panel explored the critical intersection of edge computing, inference capabilities, and European technological sovereignty. The discussion revealed both immediate challenges and visionary solutions for Switzerland's position in the global AI ecosystem.
Physical AI: Already Embedded in Daily Life
Pascal Rodriguez, Director of Engineering at Visium, opened with a provocative question to the audience: « Who amongst you would feel comfortable to let a machine take a decision in a split second that can probably affect your life forever? » The sparse response revealed the disconnect between public perception and reality—physical AI systems are already ubiquitous.
Rodriguez illustrated this pervasive presence across multiple sectors. « Physical AI can be found already in surgical robots that can help surgeons be very precise with their movements. In factories, you have a computer vision system that can spot defects directly on production lines in real time and reject them if they are not following the right quality, » he explained. From smart agricultural sensors optimizing water usage to emergency braking systems that save lives—as Rodriguez personally experienced the previous weekend—these systems silently protect and enhance human capabilities.
The critical element linking sensing to action, Rodriguez emphasized, requires substantial computational power for real-time inference. Organizations face three deployment options: cloud-based processing with higher latency but lower costs, edge computing closer to problems for improved response times, and on-device processing that delivers optimal speed, privacy, and autonomy despite higher initial investments.
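Rodriguez's trade-off can be read as a simple constraint problem: pick the cheapest tier that still meets the latency and privacy requirements of the application. The Python sketch below illustrates that reasoning; the latency and cost figures are illustrative assumptions, not numbers from the panel.

```python
from dataclasses import dataclass

@dataclass
class DeploymentOption:
    name: str
    typical_latency_ms: float   # round-trip time to obtain an inference result
    upfront_cost: str           # relative initial investment
    data_leaves_site: bool      # whether raw sensor data must leave the premises

# Illustrative figures only -- actual latencies and costs depend on the workload.
OPTIONS = [
    DeploymentOption("cloud",     typical_latency_ms=200.0, upfront_cost="low",    data_leaves_site=True),
    DeploymentOption("edge",      typical_latency_ms=20.0,  upfront_cost="medium", data_leaves_site=False),
    DeploymentOption("on-device", typical_latency_ms=2.0,   upfront_cost="high",   data_leaves_site=False),
]

def pick_deployment(latency_budget_ms: float, data_must_stay_local: bool) -> DeploymentOption:
    """Return the cheapest option that satisfies the latency and privacy constraints."""
    for option in OPTIONS:  # ordered from lowest to highest upfront cost
        if option.typical_latency_ms <= latency_budget_ms and not (data_must_stay_local and option.data_leaves_site):
            return option
    raise ValueError("No deployment option satisfies the constraints")

# An emergency-braking system needs single-digit milliseconds and local data:
print(pick_deployment(latency_budget_ms=10.0, data_must_stay_local=True).name)  # -> "on-device"
```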
The Challenges of Real-World Deployment
Rodriguez identified four fundamental challenges constraining physical AI adoption. Beyond the persistent issue of algorithmic bias, energy constraints pose severe limitations for remote deployments with limited battery capacity. However, his most compelling concerns addressed societal implications.
The digital divide represents a philosophical challenge for the technology community. « All of these technologies for inferencing might not be available to everyone, right? They have a cost. They are quite expensive today, I would say. And then some, let's say, wealthy companies might be able to afford using them right now. But smaller organizations might just be left behind, » Rodriguez observed. This creates both competitive imbalances and access gaps, particularly visible in healthcare where urban hospitals deploy advanced on-premises systems while rural areas lack technological access.
The laboratory-to-deployment gap remains significant. « When you're in your lab and you're trying it out, you're always in the perfect conditions, » Rodriguez noted, emphasizing that successful physical AI implementation requires meticulous attention to edge cases and consistent performance. « It's probably better to have a model that is able to be accurate 95% of the time, but all the time, than something that has a 97% in some cases, and most of the time is actually quite rubbish. »
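His argument about consistency can be made concrete by comparing average accuracy with worst-case accuracy across operating conditions. The per-condition figures in the sketch below are invented purely for illustration.

```python
# Rodriguez's point, restated numerically: a model that is consistently accurate
# beats one with a higher headline number but poor behaviour in some conditions.
consistent_model = {"lab": 0.95, "rain": 0.95, "night": 0.95, "fog": 0.95}
brittle_model    = {"lab": 0.99, "rain": 0.97, "night": 0.60, "fog": 0.55}

def summarize(name, per_condition_accuracy):
    values = per_condition_accuracy.values()
    print(f"{name}: mean={sum(values)/len(values):.2f}, worst-case={min(values):.2f}")

summarize("consistent", consistent_model)  # mean=0.95, worst-case=0.95
summarize("brittle", brittle_model)        # mean=0.78, worst-case=0.55
```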
Edge AI: The Infrastructure Revolution
Matteo Sorci, Dell's AI Account Manager, brought a strategic infrastructure perspective with compelling market data. « By 2027, and this is a slide for our VC guys, 62% of the total compute will be done on the edge, not on the core, not on some centrally located server, » he announced, positioning edge AI as fundamentally reshaping artificial intelligence deployment.
The statistics supporting this transformation are striking: 75% of enterprise data is already generated at the edge, and edge AI is growing at 52% annually, twice the growth rate of traditional data center AI. Yet Sorci identified a critical bottleneck: « Only one out of three organizations is able to use this data to produce real-time insights. »
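Compounded over a few years, that growth differential widens quickly. A back-of-envelope calculation using the 52% figure (the three-year horizon and base value are assumptions, not panel figures):

```python
# Sorci's growth figures, compounded forward from an arbitrary base of 1.0.
edge_cagr = 0.52
datacenter_cagr = edge_cagr / 2  # "twice the growth rate of traditional data center AI"

for year in range(1, 4):
    edge = (1 + edge_cagr) ** year
    core = (1 + datacenter_cagr) ** year
    print(f"year {year}: edge x{edge:.2f} vs data center x{core:.2f}")
# After three years the edge workload has grown ~3.5x versus ~2.0x for the data center.
```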
This reality leads to a fundamental paradigm shift. « The old paradigm of moving data to a centralized AI is completely broken. The future belongs to bringing data where AI lives: at the edge, » Sorci declared, supporting his argument with three compelling client examples.
In manufacturing, Duos Technology inspects vehicles traveling at 125 miles per hour, achieving a 120x performance improvement and 8x greater accuracy than traditional machine learning approaches. McLaren Racing processes over 300 real-time sensors generating billions of data points during Formula 1 races, cutting their design-to-manufacturing cycle by 90%. In healthcare, a German company achieves 100% diagnostic accuracy on disease detection in under one minute using custom network stations processing medical images at the point of care.
The Orchestration Challenge
Sorci addressed the scalability challenge of managing thousands of edge devices through modern infrastructure platforms. The Dell Native Edge platform exemplifies this new architecture with three critical capabilities: zero-touch onboarding, zero-trust security, and multi-cloud orchestration.
« Zero-touch onboarding means that you don't need fancy configuration, or rather, you need fancy configuration but you usually don't see it. They are blueprints; they are all installed and configured automatically by the system, » Sorci explained. Zero-trust security represents a fundamental departure from traditional perimeter defense: « We don't believe and we don't trust anyone, so every component is continuously verified and authenticated. »
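The blueprint idea can be pictured with a small, entirely hypothetical sketch of a zero-touch flow: a new device proves its identity, pulls a pre-defined blueprint, and applies it with no operator involvement. None of the names or structures below are Dell NativeEdge APIs; they only illustrate the pattern Sorci describes.

```python
# Hypothetical zero-touch onboarding sketch -- not Dell NativeEdge code.
BLUEPRINT_CATALOGUE = {
    "vision-inspection-v1": {
        "runtime": "container",
        "model": "defect-detector:2.3",
        "telemetry_endpoint": "https://ops.example.com/ingest",
    },
}

def authenticate(device_id: str) -> bool:
    # Placeholder for certificate- or TPM-based attestation.
    return device_id.startswith("edge-")

def apply_configuration(device_id: str, blueprint: dict) -> None:
    print(f"{device_id}: deploying {blueprint['model']} as a {blueprint['runtime']}")

def onboard(device_id: str, site_profile: str) -> dict:
    """Zero-touch flow: authenticate the device, fetch its blueprint, apply it."""
    assert authenticate(device_id), "zero-trust: every device must prove its identity"
    blueprint = BLUEPRINT_CATALOGUE[site_profile]
    apply_configuration(device_id, blueprint)   # no operator involvement
    return blueprint

onboard("edge-0042", "vision-inspection-v1")
```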
The impact metrics demonstrate significant operational advantages: sub-minute deployment time, 68% time savings compared to manual processes, centralized management of 1,000 locations, and 60% cost advantages.
Switzerland's Inference Factory Vision
Vincent Favrat, Founder and Manager of Scale-Up-Factory, presented the most ambitious vision for European AI competitiveness. Drawing on the philosophical concept of the noosphere—Pierre Teilhard de Chardin's vision of a planetary intelligence layer—Favrat positioned inference capabilities as the critical battleground for technological sovereignty.
His analysis distinguished between AI training and inference deployment, revealing a strategic misalignment in European investments. « If you look at the life cycle of AI and what you invest in AI, actually only 10% is on the AI training and 90% is on adapting these models to real life. 90%, not less than that, » Favrat emphasized.
While the United States invests heavily in both training and inference infrastructure, and Europe commits 200 billion euros to five gigafactories, Favrat identified a critical gap: « The question that we can ask ourselves is what type of factories is that? Are they training factories or are they inference factories? And the answer is that it's mostly training factories. »
The Swiss Advantage: Quality Over Scale
Favrat's solution leverages Switzerland's traditional strength in precision innovation rather than competing on scale. « What I think we can do, that's a question mark, is revolutionize the hardware setup of inference. And the way I see it is that we can take 0.005% of what is spent in the U.S. on this large Stargate project and invest it in a very cutting-edge type of new factories, inference factories. »
His vision centers on three Canton Vaud technologies: Cerebras wafer-scale processors (the company was co-founded by EPFL's Jean-Philippe Fricker), advanced cooling systems for data center efficiency, and waste heat recycling for urban energy systems. Combined, these could create « the most powerful, greenest and most sovereign inference factory in the world. »
The performance differential is substantial. Comparing Azure's throughput to Cerebras systems for Llama 4: « 38, so we're speaking about tokens per second, right? For Llama 4 specifically, versus 2,749. So it's not even a close race, right? We speak about 20 times, 50 times, 100 times faster. »
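Taking the two figures quoted at face value, the gap works out to roughly 72x, which a quick calculation makes tangible:

```python
# The raw ratio behind Favrat's "20, 50, 100 times faster" framing, using the
# two throughput figures quoted on stage (tokens per second for Llama 4).
azure_tps = 38
cerebras_tps = 2_749

speedup = cerebras_tps / azure_tps
print(f"Throughput ratio: {speedup:.0f}x")   # roughly 72x faster

# At that ratio, a batch that takes hours on the slower system finishes in minutes.
tokens = 1_000_000
print(f"1M tokens: {tokens/azure_tps/3600:.1f} h vs {tokens/cerebras_tps/60:.1f} min")
```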
Strategic Applications for Swiss Excellence
Favrat identified three sectors where Switzerland's expertise demands high-performance inference: pharmaceutical research enabling 80x faster drug discovery, financial services boosting high-frequency trading by 15x, and scientific research supporting climate modeling, genomics, and computational research at institutions like EPFL and ETH Zurich.
« Can we contribute in Europe from Switzerland, from Canton Vaud to this type of revolution? Can we do a blueprint of this technology here and then scale it in Europe to make a network of micro factories that are mighty powerful? » Favrat challenged, positioning Switzerland's traditional approach of being « little and innovative » as a competitive advantage.
The Capital Allocation Challenge
The discussion revealed fundamental tensions in European innovation financing. Sorci, drawing on his entrepreneurial experience, noted that when he founded his AI company in 2009, « it was complicated to have venture capitalists or venture funds that would invest in AI. And even in Switzerland at that time it was biotech, biotech, biotech, biotech. »
Favrat identified the core issue as capital misallocation rather than scarcity. « In Switzerland and Europe generally, we don't have a problem of money, because we are extremely wealthy and there is a lot of capital. The problem is of another nature: the misallocation of capital. Our pension funds are building up piles of money and basically powering the US economy, the big techs of California. »
The Reasoning Model Reality
Moderator Raphael Briner grounded the discussion in immediate practical needs, citing the explosive growth in token requirements for reasoning models. « Before, we were doing 4,000 tokens for any action; prompts were very small, we were lost in terms of memory, and the results were crappy. And now we have those TPUs with a lot of tokens, millions of tokens, » he observed.
His calculations revealed the infrastructure gap: a company with 200 developers using AI coding assistance would require 16 H100 GPUs, while current Swiss infrastructure like Exoscale provides only 4-8 GPUs. « We don't have the infrastructure, » Briner concluded.
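Briner's sizing can be reproduced with a back-of-envelope calculation. Only the 200-developer head count, the 16-GPU result, and the 4-8 available GPUs come from the panel; the per-developer demand and per-GPU throughput below are assumptions chosen to illustrate the arithmetic.

```python
# Back-of-envelope capacity check in the spirit of Briner's calculation.
developers = 200
tokens_per_dev_per_sec = 400      # assumed sustained demand from an AI coding assistant
gpu_throughput_tps = 5_000        # assumed tokens/second a single H100 sustains on this model

required_gpus = (developers * tokens_per_dev_per_sec) / gpu_throughput_tps
print(f"GPUs needed: {required_gpus:.0f}")          # -> 16 under these assumptions

available_gpus = 8                                   # upper end of the 4-8 GPUs cited for Exoscale
print(f"Shortfall: {required_gpus - available_gpus:.0f} GPUs")
```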
Democratization Through Optimization
Sorci countered with a democratization argument, advocating for optimized edge deployment rather than massive centralized infrastructure. « You might not need fancy GPUs to do vibe coding and have good code. So once again we come back to the edge and optimization of models. The inference needs to be optimized so that you can democratize even more what we are discussing. »
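One concrete form of that optimization is lowering numerical precision. The sketch below, assuming a 7-billion-parameter model (an illustrative figure, not one from the panel), shows how quantization shrinks the memory footprint that determines where a model can run.

```python
# Sorci's optimization argument in numbers: lower weight precision shrinks the
# memory footprint enough that a capable model fits on edge hardware.
params = 7e9  # an assumed 7-billion-parameter model

for label, bytes_per_weight in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gib = params * bytes_per_weight / 2**30
    print(f"{label}: ~{gib:.1f} GiB of weights")
# fp16: ~13.0 GiB  (needs a data-center GPU)
# int8: ~6.5 GiB   (fits high-end consumer hardware)
# int4: ~3.3 GiB   (fits many edge devices)
```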
Sovereignty and Security Imperatives
Rodriguez emphasized the political dimensions of infrastructure dependence. « You want to be able to run your models in Switzerland without depending on big tech, right? And I think having this type of AI factories would make a lot of sense to enable us to be independent and do our work here without the fear of being cut out. »
Favrat reinforced this concern with a banking-sector example. « Do you want UBS to run on a platform like GPT, I mean OpenAI, for the banking system? Is that Swiss sovereignty in terms of data protection, or do we need something else? »
Future Vision: Distributed Planetary Intelligence
Sorci concluded with an expansive vision for AI's trajectory. « Today, edge AI is moving AI's workloads closer to data. But it's also building the infrastructure foundation for tomorrow's distributed intelligence. » His timeline progresses from current edge inference capabilities to reasoning models and autonomous agents in the near future, ultimately reaching « a planetary-scale intelligence with millions of AI nodes distributed everywhere. »
« The next wave of AI that is ahead of us will definitely be distributed, decentralized, and ubiquitous, » Sorci declared, positioning current infrastructure investments as foundational elements for this transformation.
Key Achievement: The panel demonstrated how Switzerland can leverage its precision innovation heritage and strategic positioning to create competitive advantages in AI inference infrastructure, while addressing critical questions of technological sovereignty, democratic access, and sustainable deployment models for the future of distributed artificial intelligence.
