AI Readiness and Knowledge Management (Panel)

Bottom Line Up Front: Organizations face a critical gap between perceived AI readiness and actual maturity. Success requires human-centric governance frameworks, strategic experimentation over technology-driven approaches, and acceptance that AI will fundamentally reshape work—requiring transparent leadership about job transformation rather than preservation.
Session: Track 2 - AI + KM
Date: June 5, 2025
Moderator: Ernesto Izquierdo, Talan
The Maturity Perception Gap
The panel opened with a stark reality check about organizational AI readiness. Christian de Neef, a transformation consultant from FastTrack in Brussels, revealed concerning research findings from their recent AI maturity assessment. "There is a huge gap between perceived AI readiness and maturity and actual AI maturity in organizations," he emphasized, describing how conference audiences consistently rate themselves at level three or above on a five-point maturity scale, while independent testing shows much lower actual capabilities.
This perception gap extends beyond self-assessment to strategic direction. De Neef highlighted a fundamental problem plaguing many organizations: "What we see a lot if we talk AI readiness today is that organizations are experimenting, they're playing, they're doing things because they feel they have to do AI, because everyone is doing it. But with no clear path, with no clear roadmap, with no clear goal currently."
Lorille Alger, a corporate intelligence consultant developing AI-powered risk management platforms, confirmed this pattern from field experience. She identified organizational resistance to change as the primary obstacle, drawing a parallel to social media adoption cycles, where pressure to adopt emerges either bottom-up, from employees already using the tools in their personal lives, or top-down, from management mandates.
Governance as Strategic Enabler
Crystal Dubois, a technology lawyer at Bonar Lawson specializing in AI governance, reframed compliance from constraint to competitive advantage. She defined AI governance as "a framework of policies, processes, and roles that organizations need to implement in order to enhance a trustworthy AI" centered on human-centric, reliable, and accountable systems.
Crucially, Dubois advocated for proactive governance rather than regulatory waiting. "Organizations should not really wait on legal clarity on certain topics. So for instance, accountability and liability, this is not yet regulated for AI, and I think this is something that needs to come from the governance and from a voluntary point of view."
The European regulatory landscape, particularly the EU AI Act's risk-based approach, creates opportunities for structured innovation. Dubois explained that Switzerland's sector-specific regulatory approach "is actually quite good for innovation because it means that the regulation will be adapted based on the specificities of each sector."
Human-Centric Implementation Strategies
Frederic Lafleur-Parfaite, who develops AI-powered knowledge management systems for international financial institutions including the World Bank, emphasized positioning AI as partner rather than replacement. His approach centers on amplifying existing expertise rather than substituting human capabilities.
Lafleur-Parfaite outlined practical implementation principles: "First we need to consider AI as a partner and not necessarily as the ones going like to replace our jobs or do everything. So we need to be able to amplify our expertise, amplify the expertise of employees." He advocated for volunteer-based adoption where early champions demonstrate value through improved efficiency, creating organic demand for broader implementation.
The discussion revealed sophisticated approaches to knowledge capture and preservation. As an example, Lafleur-Parfaite described leveraging AI to analyze support ticket patterns, discovering that "only 3 people actually solve 60% of very complex support tickets" and using this insight to scale expertise organization-wide through searchable knowledge systems.
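The kind of analysis Lafleur-Parfaite described can be sketched in a few lines. This is a minimal illustration, not his actual system: the ticket log, resolver names, and the notion of a "complexity" label are all invented for the example; a real pipeline would pull this from a ticketing platform's export.

```python
from collections import Counter

# Hypothetical resolved-ticket log: (resolver, complexity) pairs.
tickets = [
    ("alice", "complex"), ("alice", "complex"), ("bob", "complex"),
    ("alice", "simple"), ("carol", "complex"), ("dave", "simple"),
    ("bob", "complex"), ("alice", "complex"), ("eve", "simple"),
    ("carol", "complex"), ("frank", "complex"),
]

def resolver_concentration(tickets, complexity="complex", top_n=3):
    """Share of tickets of a given complexity closed by the top_n resolvers."""
    counts = Counter(name for name, c in tickets if c == complexity)
    total = sum(counts.values())
    top = counts.most_common(top_n)
    return sum(n for _, n in top) / total, [name for name, _ in top]

share, experts = resolver_concentration(tickets)
print(f"Top 3 resolvers close {share:.0%} of complex tickets: {experts}")
```

Once the concentration is known, the follow-up step the panel described is qualitative: interviewing those few experts and turning their resolutions into searchable knowledge articles.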
Global Equity and Access Challenges
Reda Sadki, leading the Geneva Learning Foundation's work on AI in humanitarian contexts, introduced sobering perspectives on global AI equity. His organization operates at the intersection of artificial and collective intelligence in crisis response, providing unique insights into AI's societal implications.
Sadki highlighted three critical equity challenges: geographic access restrictions ("geolocking" preventing access to AI tools based on location), transparency expectations around AI usage, and punitive accountability systems that discourage innovation. In global health contexts, he noted, "somebody who uses AI in that context is more likely to be punished than rewarded, even if the outcomes are better and the costs are lower."
The audience demonstrated limited engagement with emerging markets, with only a minority of attendees serving customers in Africa, Asia, or Latin America—"even though that's where the future markets are likely to be for AI," Sadki observed.
The Job Displacement Reality
The most provocative moment came when Sadki directly challenged the panel's optimistic messages about job preservation. "One of the things I've heard from fellow panelists is this idea that we can tell employees AI is not coming for your job. And I struggle to see that as anything other than deceitful or misleading at best."
He provided concrete evidence from his organization's Ukraine education project, where AI tools eliminated the need for human knowledge workers after six months: "We needed humans, Ukrainian humans, for six months to help understand how the knowledge function was going to work... Once we had done that, we no longer needed them."
This sparked debate about organizational messaging and change management. Lafleur-Parfaite countered with the Duolingo example, where aggressive AI replacement strategies resulted in "6.7 to 10 million subscribers lost across their social media," demonstrating market backlash against perceived employee devaluation.
Governance Architecture and Ownership
A critical discussion emerged around organizational ownership of AI initiatives. An audience member observed that when engaging executives for AI events, it is often unclear which of the many C-suite roles actually owns the topic.
De Neef suggested that Chief Data Officers or Chief Digital Officers often lead AI initiatives, though "it's actually more interesting if it's like the COO because then it's very much more grounded into operations." Dubois predicted emergence of dedicated Chief AI Officer roles, arguing that AI governance extends beyond data management to encompass broader organizational risk and opportunity frameworks.
The panel emphasized that effective AI governance must extend beyond high-level policies to operational implementation. De Neef explained: "Governance goes all the way down to the bottom... it goes even down to operations, and how we support the people that want to experiment and how do we actually measure what is going on."
Privacy and Knowledge Security
Data confidentiality emerged as a paramount concern for enterprise AI adoption. Dubois stressed the importance of understanding data handling practices: "If your employees share confidential information with third party vendors, then your data is technically out of your organization. Even though you might have an enterprise account that says that the system will not be trained on your data."
The discussion revealed sophisticated approaches to information classification and security. Lafleur-Parfaite advocated for comprehensive document tagging and employee training: "It starts with information policy across the board, so the staff knows what to do and we amplify that with training and then tag or do an exercise of tagging the documents."
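The tagging exercise Lafleur-Parfaite mentioned can be approximated with simple rules as a starting point. This is an illustrative sketch only: the keyword patterns and tag names below are invented, and a real scheme would follow the organization's own information-classification policy rather than a hard-coded keyword list.

```python
import re

# Hypothetical keyword rules mapping patterns to sensitivity tags.
RULES = [
    (re.compile(r"\b(ssn|passport|salary)\b", re.I), "confidential-personal"),
    (re.compile(r"\b(contract|nda|pricing)\b", re.I), "confidential-business"),
]

def tag_document(text, default="internal"):
    """Return the sensitivity tags whose patterns match; default if none do."""
    tags = {tag for pattern, tag in RULES if pattern.search(text)}
    return sorted(tags) or [default]

print(tag_document("Draft NDA covering pricing terms"))
print(tag_document("Meeting notes from Tuesday standup"))
```

Documents tagged as confidential would then be excluded from anything sent to third-party AI vendors, which connects directly to Dubois's warning about data leaving the organization.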
Alger connected AI governance to existing data governance frameworks, noting that "AI policy is really close to data policy regarding this context of knowledge management because in the knowledge management we manage also the data and AI use the data basically."
Innovation Frameworks and Cultural Change
The panel converged on the need for structured experimentation within governance boundaries. De Neef advocated for combining artificial and collective intelligence: "The organizations where we actually combine artificial intelligence and collective intelligence, meaning the intelligence of the employees that come together, that actually share their prompts, that work together on processes."
The concept of "failing forward" emerged as crucial for AI adoption, with rapid experimentation cycles enabling quick iteration and learning. However, this requires cultural transformation and top-level support for experimentation.
Moderator Ernesto Izquierdo introduced a practical framework consisting of leadership (clear vision and sponsorship), crowd (organization-wide champions and testing), and lab (technical expertise for robust solutions), with emphasis on creating feedback loops between these elements.
Strategic Recommendations
The panel identified several critical success factors for organizational AI readiness:
Governance First: Implement comprehensive AI governance frameworks before scaling adoption, treating governance as enabler rather than constraint. Organizations should proactively address accountability and liability questions rather than waiting for regulatory clarity.
Human-Centric Approach: Position AI as amplifying human expertise rather than replacing workers, while being transparent about job transformation realities. Invest substantially in change management—Lafleur-Parfaite recommended allocating 50 cents to learning and change management for every dollar invested in AI technology.
Strategic Experimentation: Move beyond personal productivity use cases to process and business model innovation. Enable structured experimentation with clear governance boundaries rather than either prohibition or unlimited access.
Knowledge Architecture: Leverage AI to capture and scale existing organizational expertise, particularly from departing employees. Focus on making knowledge searchable and accessible rather than attempting to codify all organizational knowledge.
Collective Intelligence: Combine artificial intelligence capabilities with collective human intelligence through shared prompting, collaborative process development, and organizational learning systems.
Future Organizational Models
Sadki posed a provocative question about organizational evolution: whether traditional hierarchical structures with management layers and technical specialists will remain dominant "two years or five years down the line." This suggests AI adoption may fundamentally reshape organizational architecture beyond current transformation discussions.
The panel highlighted the tension between incremental process improvement and fundamental business model innovation. De Neef warned against repeating historical mistakes: "If the only thing we're doing with AI is personal productivity or it is incremental improvement on existing processes, then basically we're doing the same than 10 or 20 years ago, when we had paper-based processes and we were automating the existing processes but without changing them."
Conclusion
The Panoramai panel revealed organizations at a critical inflection point where AI readiness requires honest assessment of capabilities, transparent communication about workforce changes, and governance frameworks that enable innovation while managing risks. Success depends less on technology adoption than on cultural transformation that positions AI as amplifying human intelligence rather than replacing it.
The Swiss and European context provides opportunities for innovation within structured regulatory frameworks, but organizations must move beyond experimentation without strategy toward purposeful AI integration aligned with business objectives. The equity challenges highlighted by Sadki suggest that global AI deployment will require careful consideration of access, transparency, and inclusion principles.
Most critically, the panel demonstrated that effective AI adoption requires abandoning comfortable narratives about job preservation in favor of honest dialogue about workforce transformation and the fundamental question of what remains uniquely human in an AI-augmented world.