Critical Use Cases for Society: The Advisors and Transformers

The Human Horizon of Artificial Intelligence
Opening the second session, Mia Jafari set a contemplative tone: “The question is not whether AI will change everything—it already has—but whether it will change us for the better.”
Drawing from her research on human-centred innovation, she distinguished three possible trajectories:
Catastrophic AI, where automation amplifies division and disinformation.
Stagnant AI, where fear prevents adoption and opportunity is lost.
Flourishing AI, where human creativity and machine intelligence co-evolve.
“The story isn’t written yet,” she insisted. “We are all shaping it, consciously or by default.”
Her framing positioned AI not as a tool, but as a mirror of collective intent—a reflection of the values and competencies societies choose to encode.
Trust as the Hidden Infrastructure
Barbara Cresti, formerly of AWS, opened with data rather than slogans: “Fifty-five percent of Swiss employees already use generative AI at work, often without their employer’s knowledge.”
She warned that behind this enthusiasm lies a governance vacuum: “When innovation bypasses policy, you lose both trust and intellectual property.”
For her, trust is infrastructure—as vital as servers or algorithms.
Cresti outlined three imperatives:
Literacy – every employee should understand AI’s logic and limits.
Governance – organizations must define ethical and security boundaries.
Purpose – adoption must align with human and institutional values.
She challenged the myth of “faster = better,” asserting that “speed without comprehension breeds dependence.”
To her, the Swiss advantage is not scale, but credibility and traceability: “Our value lies in building systems people can audit, not fear.”
From Hype to Human Tech
Entrepreneur Haider Alleg connected the AI moment to his memory of the early social-media boom: “Boards today oscillate between fear and FOMO, just like they did in 2007.”
He sees the same asymmetry—capital flooding into deep tech while human tech remains underfunded. “If we optimise everything except empathy, we will have perfect systems and broken people.”
Alleg argued that Europe’s opportunity lies in liquid data: interoperable flows that enable transparent decisions across institutions.
He defined this not as utopia but as a pragmatic counterweight to data monopolies: “The battle is not about models. It’s about how we interpret and share what they produce.”
In his view, the new strategic resource is interpretability, and the differentiator is narrative—“those who can tell meaning, not just compute probability.”
The Laboratory of Failure
Sam Bourton, former associate partner at QuantumBlack (McKinsey), dismantled the myth of seamless deployment: “Only five percent of pilots reach production, and that’s fine.”
For him, experimentation is not waste but calibration.
“The path from prototype to production is how organizations learn what they are actually capable of.”
He described a near future where engineers no longer code alone but “prompt their digital teammates.”
The productivity jump, he argued, will not come from replacing humans, but from organising the human-machine workflow.
Europe’s challenge is cultural rather than technical: “We know how to engineer accuracy; we must now learn to engineer trust.”
He called for new AI operating models inside companies, combining data stewardship, ethical review, and iterative learning—“a lab mindset institutionalised.”
Balancing Innovation and Regulation
Moderator Jafari redirected the discussion toward systemic coherence.
“Every time society meets a new technology, it first asks ‘how fast?’ and only later ‘why?’—we should reverse that order.”
The panel converged on a principle: innovation needs regulation to create trust, not to constrain it.
Cresti drew a parallel with aviation: “Planes don’t fly because they’re deregulated; they fly because everyone trusts the rules.”
Alleg added that Europe must embrace “slower but wiser innovation,” privileging sustainability over velocity.
Bourton observed that regulatory clarity reduces the cost of hesitation: “Once you know the line, you can move faster within it.”
From Individual Literacy to Collective Maturity
The discussion shifted from enterprise strategy to societal learning.
For Cresti, the decisive gap is AI literacy, not hardware access. “A population that understands prompts and biases is as strategic as a semiconductor plant.”
Alleg proposed public-private sandboxes where citizens, startups, and administrations could co-create responsible use-cases—“Switzerland could export governance as a product.”
Bourton, ever pragmatic, urged measuring progress not by algorithms launched but by people enabled. “A good model is useless in an untrained culture,” he said.
Strategic Synthesis — From Hype to Harmony
The session closed on a convergence rarely seen between corporate, entrepreneurial, and consulting perspectives.
All three experts identified trust as the missing connective tissue between innovation and adoption.
Cresti’s focus on governance, Alleg’s on interpretation, Bourton’s on process maturity together outlined a roadmap for what Jafari called “human-centred transformation.”
The audience left with a reframed equation:
AI success = (Technology × Purpose × Trust)
Jafari concluded:
“The most innovative act, in the age of artificial intelligence, is still being who we are.”